2304.09447
A relative Toponogov comparison theorem
We present a relative form of the Toponogov comparison theorem.
Jianming Wan
2023-04-19T06:34:47Z
http://arxiv.org/abs/2304.09447v2
# A relative Toponogov comparison theorem

###### Abstract.

We present a relative form of the Toponogov comparison theorem.

Key words and phrases: Toponogov comparison theorem, curvature bounded below.

2010 Mathematics Subject Classification: 53C20, 53C23.

The author is supported by Natural Science Foundation of Shaanxi Province of China (No. 2022JM-010) and Shaanxi Fundamental Science Research Project for Mathematics and Physics (No. 22JSY025).

## 2. A proof of the main result

Let \(\gamma_{3}\) be a geodesic segment from \(\gamma_{1}(b)\) to \(\gamma_{2}(t)\) and \(\bar{\gamma}_{3}\) be the geodesic segment from \(\bar{\gamma}_{1}(b)\) to \(\bar{\gamma}_{2}(t)\). Write \[\beta=\measuredangle(\gamma_{2}^{\prime}(t),\gamma_{3}^{\prime}(r(t))),\quad\bar{\beta}=\measuredangle(\bar{\gamma}_{2}^{\prime}(t),\bar{\gamma}_{3}^{\prime}(\bar{r}(t))).\] If \(\gamma_{2}(t)\) is not a cut point of \(\gamma_{1}(b)\), then there exists \(\delta>0\) such that \(r(t)\) is smooth on \((t-\delta,t+\delta)\). The first variation formula yields \[r^{\prime}(t)=\cos\beta,\quad\bar{r}^{\prime}(t)=\cos\bar{\beta}.\]

In the cases \(k=0,1\) we will show \[\Big(\frac{r(t)}{\bar{r}(t)}\Big)^{\prime}=\frac{r^{\prime}\bar{r}-r\bar{r}^{\prime}}{\bar{r}^{2}}=\frac{1}{\bar{r}^{2}}(\bar{r}\cos\beta-r\cos\bar{\beta})\leq 0,\] or equivalently \[r\cos\bar{\beta}-\bar{r}\cos\beta\geq 0.\] In the case \(k=-1\) we will show \[\psi^{\prime}(t)=\bar{r}^{\prime}(t)-r^{\prime}(t)=\cos\bar{\beta}-\cos\beta\geq 0.\]

**(1) Case \(k=0\).** The law of cosines gives \[\cos\bar{\beta}=\frac{\bar{r}^{2}+t^{2}-b^{2}}{2\bar{r}t}.\] From the Toponogov comparison theorem we have \(b^{2}\leq r^{2}+t^{2}-2rt\cos\beta\). This implies \[\cos\beta\leq\frac{r^{2}+t^{2}-b^{2}}{2rt}.\] Then \[r\cos\bar{\beta}-\bar{r}\cos\beta\geq\frac{\bar{r}^{2}-r^{2}}{2r\bar{r}t}(b^{2}-t^{2})\geq 0\] when \(t\leq b\).

**(2) Case \(k=1\).** The law of cosines gives \[\cos\bar{\beta}=\frac{\cos b-\cos\bar{r}\cos t}{\sin\bar{r}\sin t}.\] From the Toponogov comparison theorem we have \(\cos b\geq\cos r\cos t+\sin r\sin t\cos\beta\). This implies \[\cos\beta\leq\frac{\cos b-\cos r\cos t}{\sin r\sin t}.\] Then \[\begin{split}r\cos\bar{\beta}-\bar{r}\cos\beta&\geq\frac{1}{\sin\bar{r}\sin r\sin t}\big[(\bar{r}\sin\bar{r}\cos r-r\sin r\cos\bar{r})\cos t+(r\sin r-\bar{r}\sin\bar{r})\cos b\big]\\ &\geq\frac{\cos b}{\sin\bar{r}\sin r\sin t}(\bar{r}\sin\bar{r}\cos r-r\sin r\cos\bar{r}+r\sin r-\bar{r}\sin\bar{r})\\ &=\frac{r\bar{r}\cos b}{\sin t}\Big(\frac{1-\cos\bar{r}}{\bar{r}\sin\bar{r}}-\frac{1-\cos r}{r\sin r}\Big)\\ &\geq 0\end{split}\] when \(t\leq b\leq\pi/2\). The second "\(\geq\)" holds because \(\cos t\geq\cos b\) for \(t\leq b\leq\pi/2\) and because the function \(\phi(\bar{r})=\bar{r}\sin\bar{r}\cos r-r\sin r\cos\bar{r}\) is nonnegative for \(0<r\leq\bar{r}\leq\pi\). To see this, we write \[\phi(\bar{r})=r\bar{r}\sin r\sin\bar{r}\Big(\frac{\cos r}{r\sin r}-\frac{\cos\bar{r}}{\bar{r}\sin\bar{r}}\Big).\] Since the function \(\frac{\cos t}{t\sin t}\) is decreasing for \(0<t<\pi\), we have \(\phi(\bar{r})\geq 0\). The third "\(\geq\)" holds because \(f(t)=\frac{1-\cos t}{t\sin t}\) is increasing (\(f^{\prime}(t)>0\)) for \(0<t<\pi\).

**(3) Case \(k=-1\).** The law of cosines gives \[\cos\bar{\beta}=\frac{\cosh\bar{r}\cosh t-\cosh b}{\sinh\bar{r}\sinh t}.\] From the Toponogov comparison theorem we have \(\cosh b\leq\cosh r\cosh t-\sinh r\sinh t\cos\beta\).
This implies \[\cos\beta\leq\frac{\cosh r\cosh t-\cosh b}{\sinh r\sinh t}.\] Then \[\begin{split}\cos\bar{\beta}-\cos\beta&\geq\frac{1}{\sinh\bar{r}\sinh r\sinh t}\big[\sinh(r-\bar{r})\cosh t+(\sinh\bar{r}-\sinh r)\cosh b\big]\\ &\geq\frac{\cosh b}{\sinh\bar{r}\sinh r\sinh t}\big[\sinh(r-\bar{r})+\sinh\bar{r}-\sinh r\big]\end{split}\] when \(t\leq b\). It is easy to see that the function \[f(\bar{r})=\sinh(r-\bar{r})+\sinh\bar{r}-\sinh r\] satisfies \(f(r)=0\) and \(\frac{df}{d\bar{r}}\geq 0\) for \(\bar{r}\geq r\). So \(f(\bar{r})\geq 0\). Hence \[\cos\bar{\beta}-\cos\beta\geq 0.\]

Unfortunately, we cannot obtain conclusion (A) when \(k=-1\). In this situation, \[r\cos\bar{\beta}-\bar{r}\cos\beta\geq\frac{r\bar{r}\cosh b}{\sinh t}\Big(\frac{\cosh\bar{r}-1}{\bar{r}\sinh\bar{r}}-\frac{\cosh r-1}{r\sinh r}\Big).\] Since \(\frac{\cosh t-1}{t\sinh t}\) is decreasing, the right-hand side is nonpositive, so this estimate gives no information.

If \(\gamma_{2}(t)\) is a cut point of \(\gamma_{1}(b)\), there may be more than one minimal geodesic segment from \(\gamma_{1}(b)\) to \(\gamma_{2}(t)\). From Petersen [3] (page 224, Exercise 5.9.28), the right-hand derivative is \[r_{+}^{\prime}(t)=\min\cos\beta\] and the left-hand derivative is \[r_{-}^{\prime}(t)=\max\cos\beta,\] where the minimum and maximum are taken over the angles \(\beta\) determined by all such segments. By the calculations in the three cases above, we have \[\Big(\frac{r(t)}{\bar{r}(t)}\Big)_{+}^{\prime}\leq 0,\quad\Big(\frac{r(t)}{\bar{r}(t)}\Big)_{-}^{\prime}\leq 0\] and \[\psi_{+}^{\prime}(t)\geq 0,\quad\psi_{-}^{\prime}(t)\geq 0.\]

To sum up, whether or not \(r(t)\) is smooth, we always have \((\frac{r(t)}{\bar{r}(t)})_{+}^{\prime}\leq 0\), \((\frac{r(t)}{\bar{r}(t)})_{-}^{\prime}\leq 0\) and \(\psi_{+}^{\prime}(t)\geq 0\), \(\psi_{-}^{\prime}(t)\geq 0\). We can then complete the proof of Theorem 1.1 from the following fact (see Miller-Vyborny [2]): let \(f\) be a continuous function on \([a,b]\); if for each \(x\in(a,b)\) one of the one-sided derivatives \(f_{+}^{\prime}\) or \(f_{-}^{\prime}\) exists and is nonnegative (possibly \(+\infty\)), then \(f\) is monotonically increasing.

_Remark 2.1_.: One may think that the restriction \(t\leq b\) in Theorem 1.1 is not necessary. However, the proof shows that the non-decreasing property of \(\psi(t)\) is equivalent to \(\beta\geq\bar{\beta}\), and there seems to be no reason for this to hold globally.

_Remark 2.2_.: If \(t>b\), we can compare along \(\gamma_{1}\). We denote \(r(t,s)=d_{M}(\gamma_{1}(t),\gamma_{2}(s))\) and \(\bar{r}(t,s)=d_{S^{n}_{k}}(\bar{\gamma}_{1}(t),\bar{\gamma}_{2}(s))\). Then conclusion (A) in Theorem 1.1 can be written as \[\frac{r(t,s_{1})}{\bar{r}(t,s_{1})}\geq\frac{r(t,s_{2})}{\bar{r}(t,s_{2})}\] when \(s_{1}<s_{2}\leq t\), and \[\frac{r(t_{1},s)}{\bar{r}(t_{1},s)}\geq\frac{r(t_{2},s)}{\bar{r}(t_{2},s)}\] when \(t_{1}<t_{2}\leq s\). Denote \(\psi(t,s)=\bar{r}(t,s)-r(t,s)\). Conclusion (B) says \[\psi(t,s_{1})\leq\psi(t,s_{2})\] when \(s_{1}<s_{2}\leq t\), and \[\psi(t_{1},s)\leq\psi(t_{2},s)\] when \(t_{1}<t_{2}\leq s\). Theorem 1.1 is therefore flexible enough for a range of possible applications.

## 3. Start point free case

In this section we consider the relative Toponogov comparison theorem when the start point is free. Now we set \(r^{*}(t)=d_{M}(\gamma_{1}(t),\gamma_{2}(t))\) and \(\bar{r}^{*}(t)=d_{S^{n}_{k}}(\bar{\gamma}_{1}(t),\bar{\gamma}_{2}(t))\). Here \(\gamma_{1}(t),\gamma_{2}(t),\bar{\gamma}_{1}(t),\bar{\gamma}_{2}(t)\) are the same as in Section 1.
Then we have

**Theorem 3.1**.: _(A) The distance ratio_ \[t\mapsto\frac{r^{*}(t)}{\bar{r}^{*}(t)}\] _is a non-increasing function for (i) \(t\geq 0\) when \(k=0\); (ii) \(t\leq\pi/2\) when \(k=1\)._

_(B) The distance difference_ \[\psi^{*}(t)=\bar{r}^{*}(t)-r^{*}(t)\] _is a non-decreasing function for (i) \(t\geq 0\) when \(k=0,-1\); (ii) \(t\leq\pi/2\) when \(k=1\)._

The proof is similar to that of Theorem 1.1. In addition, we write \(\gamma=\angle(\gamma^{\prime}_{1}(t),-\gamma^{\prime}_{3}(0))\) and \(\bar{\gamma}=\angle(\bar{\gamma}^{\prime}_{1}(t),-\bar{\gamma}^{\prime}_{3}(0))=\bar{\beta}\). If \(\gamma_{2}(t)\) is not a cut point of \(\gamma_{1}(t)\), then \(r^{*}(t)\) is smooth, and the first variation formula yields \[r^{*\prime}(t)=\cos\beta+\cos\gamma,\quad\bar{r}^{*\prime}(t)=2\cos\bar{\beta}.\] We can show \[\Big(\frac{r^{*}(t)}{\bar{r}^{*}(t)}\Big)^{\prime}=\frac{r^{*\prime}\bar{r}^{*}-r^{*}\bar{r}^{*\prime}}{\bar{r}^{*2}}=\frac{\bar{r}^{*}(\cos\beta+\cos\gamma)-2r^{*}\cos\bar{\beta}}{\bar{r}^{*2}}\leq 0\] when \(k=0,1\), and \[\psi^{*\prime}(t)=\bar{r}^{*\prime}(t)-r^{*\prime}(t)=2\cos\bar{\beta}-(\cos\beta+\cos\gamma)\geq 0\] when \(k=-1\).

**(1) Case \(k=0\).** Note that \(\cos\bar{\beta}=\frac{\bar{r}^{*}}{2t}\), \(\cos\beta\leq\frac{r^{*}}{2t}\), \(\cos\gamma\leq\frac{r^{*}}{2t}\). Then \[2r^{*}\cos\bar{\beta}-\bar{r}^{*}(\cos\beta+\cos\gamma)\geq 0.\]

**(2) Case \(k=1\).** Note that \[\cos\bar{\beta}=\frac{\cos t(1-\cos\bar{r}^{*})}{\sin\bar{r}^{*}\sin t}\] and \[\cos\beta,\ \cos\gamma\leq\frac{\cos t(1-\cos r^{*})}{\sin r^{*}\sin t}.\] Then \[\begin{split}2r^{*}\cos\bar{\beta}-\bar{r}^{*}(\cos\beta+\cos\gamma)&\geq\frac{2r^{*}\bar{r}^{*}\cos t}{\sin t}\Big(\frac{1-\cos\bar{r}^{*}}{\bar{r}^{*}\sin\bar{r}^{*}}-\frac{1-\cos r^{*}}{r^{*}\sin r^{*}}\Big)\\ &\geq 0\end{split}\] when \(t\leq\pi/2\).

**(3) Case \(k=-1\).** Note that \[\cos\bar{\beta}=\frac{(\cosh\bar{r}^{*}-1)\cosh t}{\sinh\bar{r}^{*}\sinh t}\] and \[\cos\beta,\ \cos\gamma\leq\frac{(\cosh r^{*}-1)\cosh t}{\sinh r^{*}\sinh t}.\] Then \[\begin{split}2\cos\bar{\beta}-(\cos\beta+\cos\gamma)&\geq\frac{2\cosh t}{\sinh\bar{r}^{*}\sinh r^{*}\sinh t}\big[\sinh(r^{*}-\bar{r}^{*})+\sinh\bar{r}^{*}-\sinh r^{*}\big]\\ &\geq 0.\end{split}\]

If \(\gamma_{2}(t)\) is a cut point of \(\gamma_{1}(t)\), there may be more than one minimal geodesic segment from \(\gamma_{1}(t)\) to \(\gamma_{2}(t)\). Using arguments similar to those in Petersen [3] (page 224, Exercise 5.9.28), we can show \[r_{+}^{*\prime}(t)=\min(\cos\beta+\cos\gamma),\quad r_{-}^{*\prime}(t)=\max(\cos\beta+\cos\gamma).\] So \[\Big(\frac{r^{*}(t)}{\bar{r}^{*}(t)}\Big)_{+}^{\prime}\leq 0,\quad\Big(\frac{r^{*}(t)}{\bar{r}^{*}(t)}\Big)_{-}^{\prime}\leq 0\] and \[\psi_{+}^{*\prime}(t)\geq 0,\quad\psi_{-}^{*\prime}(t)\geq 0.\] By the result of Miller-Vyborny [2], we complete the proof of Theorem 3.1.

When \(M\) has nonnegative sectional curvature, from Theorems 1.1 and 3.1 we obtain

**Corollary 3.2**.: _(1) Let \(d=d_{M}(\gamma_{1}(b),\gamma_{2}(b))\) and \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=\alpha\). Then_ \[r(t)\geq d\cos\frac{\alpha}{2}.\]

_(2) Let \(d_{i}=d_{M}(\gamma_{1}(l_{i}),\gamma_{2}(l_{i})),\ i=1,2,\ l_{1}<l_{2}\). Then_ \[d_{1}\geq\frac{l_{1}}{l_{2}}d_{2}.\]

Proof.: (1): By (A) of Theorem 1.1, \(r\geq\frac{\bar{r}}{\bar{d}}d\geq d\sin\bar{\beta}=d\cos\frac{\alpha}{2}\), where \(\bar{d}=\bar{r}(b)\) and \(\bar{\beta}\) denotes the base angle of the comparison triangle. (2): By (A) of Theorem 3.1, \(d_{1}\geq\frac{\bar{d}_{1}}{\bar{d}_{2}}d_{2}=\frac{l_{1}}{l_{2}}d_{2}\).
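The monotonicity facts invoked repeatedly in the proofs above — that \((1-\cos t)/(t\sin t)\) is increasing while \(\cos t/(t\sin t)\) and \((\cosh t-1)/(t\sinh t)\) are decreasing — are elementary but easy to get wrong by a sign. The following snippet is ours, not part of the paper: a minimal numerical sanity check of the three claims on a fine grid.

```python
import numpy as np

# Sanity check (not from the paper) of the elementary monotonicity facts used
# in the proofs of Theorems 1.1 and 3.1:
#   f1(t) = (1 - cos t)/(t sin t)   should be increasing on (0, pi)
#   f2(t) = cos t/(t sin t)         should be decreasing on (0, pi)
#   f3(t) = (cosh t - 1)/(t sinh t) should be decreasing on (0, infinity)

def is_monotone(values, increasing=True):
    d = np.diff(values)
    return bool(np.all(d > 0)) if increasing else bool(np.all(d < 0))

t = np.linspace(1e-3, np.pi - 1e-3, 20_000)
f1 = (1 - np.cos(t)) / (t * np.sin(t))
f2 = np.cos(t) / (t * np.sin(t))

s = np.linspace(1e-3, 20.0, 20_000)
f3 = (np.cosh(s) - 1) / (s * np.sinh(s))

print("f1 increasing on (0, pi):  ", is_monotone(f1, increasing=True))
print("f2 decreasing on (0, pi):  ", is_monotone(f2, increasing=False))
print("f3 decreasing on (0, inf): ", is_monotone(f3, increasing=False))
```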
2307.00655
Revisiting Arnold's topological proof of the Morse index theorem
We give an exposition of the Morse Index Theorem in the Riemannian case in terms of the Maslov index, following and expanding upon Arnold's seminal paper. We emphasize the symplectic arguments in the proof and aim to be as self-contained as possible.
Eduardo V. Sodré
2023-07-02T20:10:28Z
http://arxiv.org/abs/2307.00655v1
# Revisiting Arnold's topological proof of the Morse index theorem

###### Abstract.

We give an exposition of the Morse Index Theorem in the Riemannian case in terms of the Maslov index, following and expanding upon Arnold's seminal paper. We emphasize the symplectic arguments in the proof and aim to be as self-contained as possible.

Key words and phrases: Morse index, Maslov index, Lagrangian Grassmannian.

2020 Mathematics Subject Classification: 58E10 (Primary), 53D12 (Secondary).

## 1. Introduction

Carl Gustav Jacobi in 1842 [12] seems to have been the first to investigate whether the principle of least action in the calculus of variations always yields minima as opposed to other kinds of stationary points, and most of his work focused on geodesics on two-dimensional surfaces. Marston Morse gave the first clear general statements in the 1920s and 1930s, leading to what is known today as Morse theory [17]. In particular, his celebrated Index Theorem roughly states that the number of essentially shorter routes near a given geodesic can be computed as the number of conjugate points along the geodesic (counting multiplicity). Perhaps as a sign of the depth of this statement, one can find in the literature countless variations, extensions and generalizations of Morse's index theorem, together with methods of proof of different flavors. As Bott put it in [3], "in all properly posed variational problems there is some kind of index theorem".

Beautiful as finite-dimensional Morse theory can be, the real goal of Morse was the infinite-dimensional calculus of variations setting of Morse theory. He considers a functional \[J(\sigma)=\int_{a}^{b}F(\sigma,\dot{\sigma})\,dt\] in some space of paths \(\Omega\), subject to a certain nondegeneracy condition and admissible boundary conditions. Here the "tangent space" to an extremal \(\sigma\) of \(J\) is the set of vector fields along it, and the "Hessian" of \(J\) at \(\sigma\) is given by the second variation of \(J\). By choosing a frame along \(\sigma\), a vector field along \(\sigma\) is identified with an \(\mathbb{R}^{n}\)-valued function of the parameter \(t\) along \(\sigma\) and, upon integration by parts, that Hessian takes the form \[\int_{a}^{b}\langle Lx,x\rangle\,dt,\] where \(x(t)\) represents a vector field along \(\sigma\) and \(\langle,\rangle\) denotes the pointwise inner product, for a self-adjoint second order linear differential operator \(L\). The Sturm-Liouville eigenvalue problem \[Lx=\lambda x\] subject to boundary conditions turns out to be well-posed and thus has a finite-dimensional solution space for \(\lambda\leq 0\). Morse proceeds to define the nullity and the index of \(\sigma\) respectively as the dimension of the space of solutions of \(Lx=0\) and the dimension of the space of solutions of \(Lx=\lambda x\) with \(\lambda<0\).

In Riemannian geometry, for \(J\) one takes the energy functional on a complete Riemannian manifold \(M\), given by \[E(\gamma)=\int_{a}^{b}||\dot{\gamma}||^{2}\,dt\] and defined on the space \(\Omega\) of piecewise smooth curves parametrized by \(t\in[a,b]\) proportionally to arc-length, and the boundary conditions are the fixed endpoint conditions \[\gamma(a)=p,\quad\gamma(b)=q,\] for fixed \(p\), \(q\in M\). The extremals are exactly the geodesics, and here the eigenvalue problem presents itself as \[-Y^{\prime\prime}+R(\dot{\gamma},Y)\dot{\gamma}=\lambda Y,\qquad Y(a)=Y(b)=0,\] where the prime denotes covariant differentiation of the vector field \(Y\) along \(\gamma\) and \(R\) denotes the curvature tensor.
The index of \(\gamma\) manifests itself as the obstruction to the geodesic being a local minimum of the energy, as subspaces on which the Hessian is negative definite are the directions in which we can perturb \(\gamma\) and obtain shorter paths. The solutions in case \(\lambda=0\) are called _Jacobi fields_, and points \(p\) and \(q\) are called _conjugate along \(\gamma\)_ in case there is a nonzero Jacobi field vanishing at \(a\) and \(b\); in this case the _multiplicity_ of such a conjugate pair is the dimension of the space of such Jacobi fields. He then succeeds in proving the Morse inequalities for any nondegenerate \(J\), and arrives at his beautiful Index Theorem: _the index of an extremal \(\gamma\) in the fixed endpoints case equals the number of conjugate points of one endpoint in the interior of \(\gamma\), counting multiplicity_.

Morse himself applied the Index Theorem to obtain deep results about existence of geodesics in the 2-sphere with an arbitrary metric, and Bott [4] was led by a similar analysis to his celebrated Periodicity Theorem. Morse's original proof of the Index Theorem has been expounded by Ambrose [1] and made concise by Osborn [18], and has been generalized to PDEs by Smale [24] and to minimal submanifolds by Simons [23]. Uhlenbeck [25] gave a proof based on Hilbert spaces and applied it to minimal submanifolds as well. It has also been presented accessibly in book form by Milnor [15] and further popularized by do Carmo [5]. In a different vein, Morse theory in Hilbert spaces has been developed by Palais [19] (see also the textbook by Klingenberg [13]). The Index Theorem represents a natural extension of the classical Sturm-Liouville theory of differential equations and, as such, it has been adapted to higher order systems by Edwards [7] and Zhu [27]. It was also applied in pseudo-Riemannian geometry by Helfer [11] and by Giannoni, Masiello, Piccione and Tausk [9, 20], such as in the case of conjugate points along spacelike geodesics. For a \(K\)-theoretic approach to Morse's Index Theorem, see [26]. Similar ideas are used in a generalization of Morse theory called Floer theory [8, 22].

We were particularly attracted to Arnold's original paper [2], displaying an ingenuity and simplicity so characteristic of him, and this text is our attempt to present his arguments from our point of view. From a modern perspective, and considering the self-adjointness of the Jacobi operator, we want to make evident the near inevitability of the appearance of symplectic methods, revolving around the Maslov-Arnold index. Rephrasing the result in a topological way, in terms of intersections of Lagrangian subspaces, opens up new avenues and vistas. This is an approach also taken by Duistermaat [6] and Lytchak [14], and by Piccione and Tausk [21]. Our aim has been to follow a most "natural" path (not always the shortest one), based on elementary arguments and simplified constructions, and to be as transparent as possible. For this reason we also restrict the discussion to the most basic case, that is, Riemannian geodesics with fixed endpoints.

We now sketch the main issues involved in our exposition of Arnold's ideas. In the Riemannian case, consider a geodesic \(\gamma\) defined on an interval \([a,b]\), and denote by \(H_{t}\) the Hessian of the energy functional defined on the space of vector fields along \(\gamma|_{[a,t]}\) vanishing at \(a\) and \(t\).
We like to think of the Index Theorem as the following chain of equalities: \[\operatorname{ind}(H_{b})=\sum_{\lambda<0}\operatorname{nul}(H_{b}-\lambda I)=\sum_{\lambda\in(\lambda_{0},0)}\operatorname{nul}(H_{b}-\lambda I)=\sum_{t\in(a,b)}\operatorname{nul}(H_{t}),\] where \(\lambda_{0}\) is some negative number. The first equality would be clear in finite dimensions, but requires some discussion in infinite dimensions. The second equality is due to the fact that the corresponding Sturm-Liouville problem has eigenvalues bounded below. Indeed, it follows from standard Sturm-Liouville theory that there are finitely many negative eigenvalues, but we circumvent this extra background in our topological approach. The last equality is at the core of our discussion, and is obtained from interpreting the relevant nullities as intersection numbers of a certain 1-cycle with the canonical Maslov cycle in the Lagrangian Grassmannian. This 1-cycle is a homologically trivial curve of Lagrangian subspaces constructed from the Jacobi equation, hence the total intersection number must vanish, from which we derive the Index Theorem.

## 2. Riemannian Geometry

With the notation used in the introduction, let \(\Gamma\) be the set of smooth vector fields along a geodesic \(\gamma:[a,b]\to M\) and \(\Gamma_{0}\subset\Gamma\) those vector fields which vanish at the endpoints. The index form \(H_{b}:\Gamma_{0}\times\Gamma_{0}\to\mathbb{R}\), arising from the second variation of the energy, is given by \[H_{b}(X,Y)=\int_{a}^{b}\langle X^{\prime},Y^{\prime}\rangle+\langle R(\dot{\gamma},X)\dot{\gamma},Y\rangle\,ds\tag{1}\] \[=\int_{a}^{b}\langle-X^{\prime\prime}+R(\dot{\gamma},X)\dot{\gamma},Y\rangle\,ds.\tag{2}\] It is bilinear and symmetric, and we naturally consider those \(X\in\Gamma\) that satisfy \[-X^{\prime\prime}+R(\dot{\gamma},X)\dot{\gamma}=0;\] these are the Jacobi fields. Note that \(R(t)\coloneqq R(\dot{\gamma}(t),\cdot)\dot{\gamma}(t)\) is a self-adjoint operator on \(T_{\gamma(t)}M\) due to the symmetries of the curvature tensor. By choosing a parallel orthonormal frame \((E_{1},\dots,E_{n})\) along \(\gamma\), the Jacobi fields \(X(t)=x^{i}(t)E_{i}(t)\) correspond to solutions of a homogeneous second order linear system of ODEs. They are smooth and form a vector space \(\mathcal{J}\subset\Gamma\) of dimension \(2n\), being uniquely determined by any prescribed pair of values \((X(t),X^{\prime}(t))\) for \(t\in[a,b]\), in particular by the initial conditions \((X(a),X^{\prime}(a))\). It is also easily seen from (2) that the kernel of \(H_{b}\) as a symmetric bilinear form is exactly \(\mathcal{J}\cap\Gamma_{0}\), that is, the set of Jacobi fields which vanish at the endpoints \(a\) and \(b\).

We say that \(t\in(a,b]\) is a _conjugate value_ to \(a\) along \(\gamma\), and that \(\gamma(t)\) is its respective _conjugate point_, if there exists a non-zero Jacobi field \(X\) along \(\gamma|_{[a,t]}\) such that \(X(a)=X(t)=0\). This field can naturally be extended to a Jacobi field defined on the whole interval \([a,b]\). Recall that the index of a symmetric bilinear form is the maximal dimension of a subspace on which it is negative definite. This dimension can, in principle, be infinite. We also know that the kernel of the index form \(H_{t}\), for \(t\in[a,b]\), consists of the Jacobi fields that vanish at \(a\) and \(t\).
The Morse Index Theorem, as stated previously, asserts that the index of \(H_{b}\) is equal to the number of conjugate values to \(a\) in \((a,b)\) along \(\gamma\) counted with their multiplicity: **Theorem 2.1** (The Morse Index Theorem).: \[\operatorname{ind}(H_{b})=\sum_{\lambda<0}\operatorname{nul}(H_{b}-\lambda I )=\sum_{\lambda\in(\lambda_{0},0)}\operatorname{nul}(H_{b}-\lambda I)=\sum_{ t\in(a,b)}\operatorname{nul}(H_{t}).\] The first identity affirms that the index of \(H_{b}\) corresponds to the number of negative eigenvalues of the Sturm-Liouville problem \[\begin{cases}L_{\lambda}[X]=-X^{\prime\prime}+(R-\lambda I)X=0,\\ X(a)=X(b)=0\end{cases} \tag{3}\] counted with multiplicity, and the third identity represents the equivalence with the conjugate values with multiplicity. At first, we don't necessarily know whether the index, the number of negative eigenvalues, and the number of conjugate values are finite, but we can promptly prove the second identity: **Theorem 2.2**.: _The eigenvalues of the Sturm-Liouville problem_ \[\begin{cases}L_{\lambda}[X]=-X^{\prime\prime}+(R-\lambda I)X=0,\\ X(a)=X(t)=0\end{cases}\] _are bounded below by some \(\lambda_{0}\) that does not depend on \(t\in(a,b]\)._ Proof.: If \(X\) is a solution, then \(0=\langle L_{\lambda}[X],X\rangle\). We take the integral over \([a,t]\), integrate by parts and use that \(X(a)=X(t)=0\) to obtain \[0=\int_{a}^{t}\|X^{\prime}\|^{2}+\langle(R-\lambda I)X,X\rangle ds.\] For \(-\lambda\) sufficiently large, the self-adjoint operator \(R(t)-\lambda I\) will have only positive eigenvalues for all \(t\in[a,b]\). If \(\mu\) is the infimum of the lowest eigenvalue of \(R(t)-\lambda I\) for \(t\in[a,b]\), then \[0=\int_{a}^{t}\|X^{\prime}\|^{2}+\langle(R-\lambda I)X,X\rangle ds\geq\int_{a }^{t}\|X^{\prime}\|^{2}+\mu\langle X,X\rangle ds\geq 0,\] showing that \(X^{\prime}\equiv 0\) and, in turn, that \(X\equiv 0\). We call \(X\in\Gamma\) a \(\lambda\)_-Jacobi field_ if it satisifies the equation \[-X^{\prime\prime}+RX=\lambda X, \tag{4}\] for a given real parameter \(\lambda\). Analogously to Jacobi fields, they form a real vector space \(\mathcal{J}_{\lambda}\subset\Gamma\) of dimension \(2n\), given uniquely by the initial conditions \((X(a),X^{\prime}(a))\) and \(\mathcal{J}_{0}=\mathcal{J}\). It is readily verifiable that if \(X,Y\in\mathcal{J}_{\lambda}\), the expression \[\omega(X,Y)=-\langle X(t),Y^{\prime}(t)\rangle+\langle X^{\prime}(t),Y(t)\rangle \tag{5}\] does not depend on \(t\in[a,b]\) and defines a bilinear non-degenerate antisymmetric form on \(\mathcal{J}_{\lambda}\), that is, a symplectic form. Given \(t\in[a,b]\), the set \[\mathcal{J}_{\lambda}^{t}\coloneqq\{X\in\mathcal{J}_{\lambda}\mid X(t)=0\}\] is a Lagrangian subspace of \(\mathcal{J}_{\lambda}\). So, in a certain sense, to study the \(\lambda\)-Jacobi fields that satisfy \(X(a)=X(t)=0\) is to study the intersections of specific Lagragian subspaces of a real symplectic vector space. 
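The constancy in \(t\) of the form \(\omega\) in (5) is what makes the symplectic picture below work. As a quick illustration (ours, not the paper's), the following sketch integrates the \(\lambda\)-Jacobi system for a made-up smooth symmetric curvature operator \(R(t)\) and checks numerically that \(\omega(X,Y)\) stays constant along the interval; the specific \(R(t)\), \(\lambda\) and initial data are arbitrary placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration (not from the paper): for two lambda-Jacobi fields X, Y
# the quantity omega(X, Y) = -<X, Y'> + <X', Y> from (5) does not depend on t.
n, lam = 3, -0.7
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))

def R(t):
    # an arbitrary smooth curve of symmetric matrices (placeholder "curvature")
    return np.cos(t) * (B + B.T) / 2.0 + np.sin(2.0 * t) * np.eye(n)

def rhs(t, z):
    # z = (x, x'); the lambda-Jacobi equation reads x'' = (R(t) - lam*I) x
    x, dx = z[:n], z[n:]
    return np.concatenate([dx, (R(t) - lam * np.eye(n)) @ x])

def omega(z1, z2):
    x1, dx1 = z1[:n], z1[n:]
    x2, dx2 = z2[:n], z2[n:]
    return -x1 @ dx2 + dx1 @ x2

z1_0, z2_0 = rng.standard_normal(2 * n), rng.standard_normal(2 * n)
ts = np.linspace(0.0, 5.0, 51)
s1 = solve_ivp(rhs, (0.0, 5.0), z1_0, t_eval=ts, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(rhs, (0.0, 5.0), z2_0, t_eval=ts, rtol=1e-10, atol=1e-12)
vals = np.array([omega(s1.y[:, i], s2.y[:, i]) for i in range(len(ts))])
print("max |omega(t) - omega(0)| on [0, 5]:", np.max(np.abs(vals - vals[0])))
```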
With a parallel orthonormal frame \((E_{1},\ldots,E_{n})\) of smooth vector fields along \(\gamma\), the vector fields \(X\in\Gamma\) along the geodesic are represented by \[X(t)=x^{i}(t)E_{i}(t)\longmapsto(x^{1}(t),\ldots,x^{n}(t)),\] and to each vector field \(X\in\Gamma\) we associate the curve \(Y:[a,b]\to\mathbb{R}^{2n}\) \[Y(t)=(x^{1}(t),\ldots,x^{n}(t),(x^{1})^{\prime}(t),\ldots,(x^{n})^{\prime}(t)).\] This is convenient because the \(\lambda\)-Jacobi equation is second order, and it becomes equivalent to the system of ODEs \[Y^{\prime}(t)=A(t,\lambda)Y(t),\quad A(t,\lambda)=\begin{bmatrix}0&I\\ R(t)-\lambda I&0\end{bmatrix}\tag{6}\] where \(Y:[a,b]\to\mathbb{R}^{2n}\) is a curve and \(R(t)\) is a curve of symmetric bilinear forms on \(\mathbb{R}^{n}\). A solution \(Y\) is then of the form \(Y(t)=\begin{bmatrix}X(t)&X^{\prime}(t)\end{bmatrix}^{\mathsf{T}}\), where \(X(t)=(x^{1}(t),\ldots,x^{n}(t))\) is a \(\lambda\)-Jacobi field along \(\gamma\). Also, since \(R(t)-\lambda I\) is self-adjoint, \(A\) is in the Lie algebra \(\mathfrak{sp}(2n,\mathbb{R})\) of linear symplectic maps of \(\mathbb{R}^{2n}\), so that the flow preserves the canonical symplectic form on \(\mathbb{R}^{2n}\).

Let \(\sigma=\{0\}\times\mathbb{R}^{n}\) be fixed as a Lagrangian subspace, corresponding to the initial condition of the \(\lambda\)-Jacobi field being \(0\) at \(t=a\). If we consider \(\sigma_{\lambda}(t)\) to be the flow of \(\sigma\) at time \(t\) with respect to the system of ODEs above, that is, \[\sigma_{\lambda}(t)=\{Y(t)\in\mathbb{R}^{2n}\mid Y(a)\in\sigma,\ Y^{\prime}(t)=A(t,\lambda)Y(t)\},\tag{7}\] which is also a Lagrangian subspace of \(\mathbb{R}^{2n}\), then the intersection \(\sigma_{\lambda}(t)\cap\sigma\) corresponds exactly to the \(\lambda\)-Jacobi fields such that \(X(a)=X(t)=0\). In particular, \[\operatorname{nul}(H_{t}-\lambda I)=\dim(\sigma_{\lambda}(t)\cap\sigma),\] and the last equality in the Morse Index Theorem 2.1 is equivalent to \[\sum_{t\in(a,b)}\dim(\sigma_{0}(t)\cap\sigma)-\sum_{\lambda\in(\lambda_{0},0)}\dim(\sigma_{\lambda}(b)\cap\sigma)=0.\tag{8}\] By rephrasing the statement of the Morse Index Theorem in terms of intersections of Lagrangian subspaces with a given Lagrangian, we can use known topological methods to prove the equality. More specifically, we will view the above equality as the intersection number of a curve with a given subset of the moduli space of Lagrangians on \(\mathbb{R}^{2n}\).

## 3. The Lagrangian Grassmannian

Let \(\mathbb{R}^{2n}\cong\mathbb{C}^{n}\) be equipped with its usual complex structure \(J\), inner product \(\langle\cdot,\cdot\rangle\) and symplectic form \(\omega\) as a \(2n\)-dimensional real vector space, with coordinates \[(q,p)=(q^{1},\ldots,q^{n},p^{1},\ldots,p^{n})=q+ip.\] The Lagrangian Grassmannian \(\Lambda=\Lambda(n)\), that is, the set of all Lagrangian subspaces of \(\mathbb{R}^{2n}\), is an embedded compact submanifold of the Grassmannian \(G_{n}(\mathbb{R}^{2n})\) of \(n\)-dimensional subspaces of \(\mathbb{R}^{2n}\). For example, since every line passing through the origin in \(\mathbb{R}^{2}\cong\mathbb{C}\) is Lagrangian, we have \(\Lambda(1)\cong\mathbb{R}\mathrm{P}^{1}\).
To identify a set of charts for \(\Lambda(n)\), let \(\sigma=\{0\}\times\mathbb{R}^{n}\cong i\mathbb{R}^{n}\) and consider the chart for \(G_{n}(\mathbb{R}^{2n})\) given by \[\begin{array}{cccc}\phi:&\mathrm{M}(n,\mathbb{R})&\longrightarrow&G_{n}^{0}(\sigma)\\ &S&\longmapsto&\lambda_{S}\coloneqq\{(q,Sq)\}\end{array}\] which takes an \(n\times n\) real matrix \(S\) to its graph, an \(n\)-dimensional subspace of \(\mathbb{R}^{2n}\) transversal to \(\sigma\). Note that transversality is an open condition in \(G_{n}(\mathbb{R}^{2n})\). We check that \(\lambda_{S}\) is Lagrangian if and only if \(S\) is symmetric: indeed, \[\omega((q,Sq),(r,Sr))=-\langle q,Sr\rangle+\langle Sq,r\rangle=\langle q,(S^{\mathsf{T}}-S)r\rangle\] vanishes for all \(q,r\in\mathbb{R}^{n}\) exactly when \(S=S^{\mathsf{T}}\). This provides a chart for the set \(\Lambda^{0}(\sigma)\) of Lagrangians \(\lambda\) transversal to \(\sigma\), that is, such that \(\dim(\lambda\cap\sigma)=0\): \[\begin{array}{cccc}\varphi:&\mathrm{Sym}(n,\mathbb{R})&\longrightarrow&\Lambda^{0}(\sigma)\\ &S&\longmapsto&\lambda_{S}\coloneqq\{(q,Sq)\}.\end{array}\tag{9}\]

If \(K\subseteq\{1,\ldots,n\}\) is a set of indices, we also construct the unitary transformations \(J_{K}:\mathbb{R}^{2n}\to\mathbb{R}^{2n}\) given by \[J_{K}(q^{i},p^{i})=\begin{cases}(-p^{i},q^{i}),&\text{ if }i\in K;\\ (q^{i},p^{i}),&\text{ if }i\notin K,\end{cases}\] corresponding to multiplication by \(i\) on the \(K\) coordinates of \(\mathbb{C}^{n}\), and the Lagrangian subspaces \(\sigma_{K}=J_{K}\sigma\), given by \[\sigma_{K}=\{(q,p)\mid p^{i}=0,\forall i\in K,\ q^{j}=0,\forall j\notin K\}.\] Since \(J_{K}\sigma=\sigma_{K}\), we have that \(J_{K}\Lambda^{0}(\sigma)=\Lambda^{0}(\sigma_{K})\), the set of Lagrangians transversal to \(\sigma_{K}\), and we construct the maps \[\begin{array}{cccc}\varphi_{K}=J_{K}\varphi:&\mathrm{Sym}(n,\mathbb{R})&\longrightarrow&\Lambda^{0}(\sigma_{K})\\ &S&\longmapsto&J_{K}\lambda_{S}.\end{array}\tag{10}\]

**Lemma 3.1**.: _If \(\lambda\in\Lambda\) is such that \(\dim(\lambda\cap\sigma)=k\), there exists a set of indices \(K\subseteq\{1,\ldots,n\}\) such that \(|K|=k\) and \(\lambda\) is transversal to \(\sigma_{K}\), that is, \(\lambda\in\Lambda^{0}(\sigma_{K})\)._

Proof.: Let \(\lambda_{0}=\lambda\cap\sigma\). We first show that \(\lambda_{0}\cap\sigma_{K}=\{0\}\) for a suitable \(K\). If \(\{v_{1},\ldots,v_{k}\}\) is a basis of \(\lambda_{0}\), we complete it to a basis of \(\sigma\) with canonical vectors \(e_{i_{1}},\ldots,e_{i_{n-k}}\). Letting \(I=\{i_{1},\ldots,i_{n-k}\}\) denote this set of indices, we choose \(K=\{1,\ldots,n\}\setminus I\), so that \(|K|=k\) and \(\sigma_{K}\cap\sigma\) is spanned by the canonical vectors \(e_{i}\) with \(i\in I\). Now let \(\tau=\sigma_{K}\cap\sigma\). As \(\tau\cap\lambda_{0}=\{0\}\), we have the direct sum \(\lambda_{0}\oplus\tau=\sigma\). Considering the symplectic form \(\omega\), we also have that \(\omega(\lambda,\lambda_{0})=0\) and \(\omega(\sigma_{K},\tau)=0\), since \(\lambda_{0}\subseteq\lambda\) and \(\tau\subseteq\sigma_{K}\) and \(\lambda,\sigma_{K}\) are Lagrangian subspaces.
Then \(\omega(\lambda\cap\sigma_{K},\sigma)=0\), which implies that \(\lambda\cap\sigma_{K}\subseteq\sigma\); but since they are transversal in \(\sigma\), it must be that \(\sigma_{K}\cap\lambda=\{0\}\) **Theorem 3.2**.: _The set \(\Lambda^{k}(\sigma)=\{\lambda\in\Lambda\mid\dim(\lambda\cap\sigma)=k\}\) is covered by the \(\binom{n}{k}\) charts \(\varphi_{K}\), and on each such chart, the coordinates \(S=\varphi_{K}^{-1}\lambda\) for \(\Lambda^{k}(\sigma)\) are given by \(S_{\mu\nu}=0\), \(\forall\mu,\nu\in K\)._ Proof.: Without loss of generality, we may assume that \(K=\{1,\ldots,k\}\) by relabeling the coordinate axes on \(\mathbb{R}^{n}\). The Lagrangian \(\lambda_{S}=\{(q,Sq)\}\) is realized as the column space of the matrix \(\begin{bmatrix}I&S\end{bmatrix}^{\mathsf{T}}\), and therefore \(J_{K}\lambda_{S}=\varphi_{K}(S)\) is the column space of \[J_{K}\begin{bmatrix}I\\ S\end{bmatrix}=J_{K}\begin{bmatrix}I_{k\times k}&0\\ 0&I_{(n-k)\times(n-k)}\\ S_{1}&S_{2}\\ S_{3}&S_{4}\end{bmatrix}=\begin{bmatrix}-S_{1}&-S_{2}\\ 0&I\\ I&0\\ S_{3}&S_{4}\end{bmatrix}. \tag{11}\] Since the column vectors are linearly independent, the column space of the last \(n-k\) vectors always has trivial intersection with \(\sigma\). If \(S_{1}=0\), then the first \(k\) column vectors form a basis for the intersection \(\sigma\cap\lambda\), and conversely, if \(\dim(\lambda\cap\sigma)=k\), it must be the case that \(S_{1}=0\). More generally, we see that, for \(l\leq k\) and \(\lambda\in\Lambda^{0}(\sigma_{K})\), \[\lambda\in\Lambda^{l}(\sigma)\iff\dim\ker S_{1}=l.\] This shows that every \(\lambda\in\Lambda\) belongs to some chart \(\varphi_{K}(\operatorname{Sym}(n,\mathbb{R}))\) for some \(K\subseteq\{1,\ldots,n\}\), so they cover \(\Lambda\). It is easy to see that they are compatible, indeed showing that these maps form an atlas for an embedded submanifold of the Grassmannian of \(n\)-planes of \(\mathbb{R}^{2n}\). We also conclude that the subsets \(\Lambda^{k}(\sigma)\) form embedded submanifolds of codimension \(k(k+1)/2\). ## 4. The Intersection Number From the original question of understanding curves \(\lambda(t)\) of lagrangian subspaces and when does \(\dim(\lambda(t)\cap\sigma)>0\), we are naturally led to consider intersections of \(\lambda(t)\) with the set \(\Lambda^{\geq 1}(\sigma)=\bigcup_{k\geq 1}\Lambda^{k}(\sigma)\). We will show that the intersection number of an oriented curve with this subset [10, Chapter 3] is well defined, as \(\Lambda^{\geq 1}(\sigma)\) is a two-sided cycle of codimension \(1\), and we relate this index of intersection to a canonical cohomology class with integer coefficients in order to provide explicit calculations. The unitary group \(\operatorname{U}(n)\) acts smoothly and transitively on the Lagrangian subspaces of \(\mathbb{R}^{2n}\), and the isotropy subgroup of \(\mathbb{R}^{n}\times\{0\}\cong\mathbb{R}^{n}\subset\mathbb{C}^{n}\) is the orthogonal group \(\operatorname{O}(n)\). This implies that the Lagrangian Grasmmannian can be realized as the homogeneous manifold \(\operatorname{U}(n)/\operatorname{O}(n)\). The map \(\det^{2}:\operatorname{U}(n)\to S^{1}\) is well defined on the quotient, resulting on the induced map \[\operatorname{Det}^{2}:\operatorname{U}(n)/\operatorname{O}(n)\longrightarrow S ^{1}. \tag{12}\] **Proposition 4.1**.: _For \(\lambda_{S}\in\Lambda^{0}(\sigma)\), we have that_ \[\operatorname{Det}^{2}\lambda_{S}=\det\frac{I+iS}{I-iS}. 
\tag{13}\] Proof.: With the identification \(\mathbb{R}^{2n}\cong\mathbb{C}^{n}\), the map \(I+iS\) takes the Lagrangian subspace \(\mathbb{R}^{n}=\{(q,0)\}\) to \(\lambda_{S}=\{(q,Sq)\}\). It may not be unitary, but \((I+iS)/\sqrt{I+S^{2}}\) is; and since \(\sqrt{I+S^{2}}\) preserves \(\mathbb{R}^{n}\), this map also takes \(\mathbb{R}^{n}\) to \(\lambda_{S}\). Therefore \[\operatorname{Det}^{2}\lambda_{S}=\det{}^{2}\left(\frac{I+iS}{\sqrt{I+S^{2}}} \right)=\det\frac{(I+iS)^{2}}{I+S^{2}}=\det\frac{I+iS}{I-iS}.\] Consider also the set \(\operatorname{S\Lambda}(n)\) of all Lagrangian subspaces \(\lambda\) such that \(\operatorname{Det}^{2}\lambda=1\). Then \(\operatorname{SU}(n)\) acts transitively on \(\operatorname{S\Lambda}(n)\) with stabilizer \(\operatorname{SO}(n)\), so that \(\operatorname{S\Lambda}(n)\cong\operatorname{SU}(n)/\operatorname{SO}(n)\). The map \(\operatorname{Det}^{2}\) in fact induces an isomorphism \(\pi_{1}(\Lambda)\cong\pi_{1}(S^{1})\) between the fundamental groups. This can be seen through the exact homotopy sequences of the six fibrations of the following commutative diagram: More explicitly, \(\operatorname{S\Lambda}(n)\) and \(\Lambda(n)\) are both connected, being continuous images of \(\operatorname{SU}(n)\) and \(\operatorname{U}(n)\), the long exact sequence \[\cdots\to\pi_{1}(\operatorname{SO}(n))\to\pi_{1}(\operatorname{SU}(n))\to\pi_ {1}(\operatorname{S\Lambda}(n))\to\pi_{0}(\operatorname{SO}(n))\to\cdots\] gives us \(\pi_{1}(\operatorname{S\Lambda}(n))=0\), and the long exact sequence \[\cdots\to\pi_{1}(\operatorname{S\Lambda}(n))\to\pi_{1}(\Lambda(n))\to\pi_{1}(S ^{1})\to\pi_{0}(\operatorname{S\Lambda}(n))\to\cdots\] gives us the aforementioned isomorphism. Recall that \(\deg:\pi_{1}(S^{1})\to\mathbb{Z}\) is an isomorphism, and in this case, the Hurewicz map \(\pi_{1}(\Lambda)\to H_{1}(\Lambda;\mathbb{Z})\) given by the abelianization of the fundamental group is also an isomorphism. This allows us to conclude that \[H^{1}(\Lambda;\mathbb{Z})\cong\operatorname{Hom}(H_{1}(\Lambda;\mathbb{Z}), \mathbb{Z})\cong H_{1}(\Lambda;\mathbb{Z})\cong\mathbb{Z},\] as \(\mathbb{Z}\) is abelian. Consider \(\alpha\in H^{1}(\Lambda;\mathbb{Z})\cong\operatorname{Hom}(\pi_{1}(\Lambda),\mathbb{Z})\) to be the cohomology class given by \[\alpha(\gamma)=\deg(\operatorname{Det}^{2}\circ\gamma), \tag{14}\] where \(\gamma\) is a closed curve given up to homotopy. Then \(\alpha\) coincides with the pullback of the angle \(1\)-form \(d\theta\) on \(S^{1}\) by \(\operatorname{Det}^{2}\), where \(\alpha\) is evaluated on smooth closed curves belonging to the same homotopy class. In certain contexts \(\alpha\) is referred to as the Maslov index of the Lagrangian Grassmannian \(\Lambda\). It is readily verifiable that \(\alpha\) is a generator for \(H^{1}(\Lambda;\mathbb{Z})\) through the following diagram of isomorphisms: Fixing \(\sigma=\{0\}\times\mathbb{R}^{n}\) as before, we shall prove that \(\alpha\) is equal to the index of intersection of an oriented closed curve with \(\Lambda^{\geq 1}(\sigma)\), that is, the set of Lagrangians which have non-trivial intersection with \(\sigma\). Note that \(\Lambda(n)\) can be regarded as an algebraic manifold, so that the closure \(\overline{\Lambda^{1}(\sigma)}\), being equal to the union \(\bigcup_{k=1}^{n}\Lambda^{k}(\sigma)=\Lambda^{\geq 1}(\sigma)\), determines an algebraic submanifold of codimension \(1\). 
Since the higher strata of \(\Lambda^{\geq 1}(\sigma)\) correspond to the boundary \(\partial\Lambda^{\geq 1}(\sigma)=\bigcup_{k=2}^{n}\Lambda^{k}(\sigma)= \Lambda^{\geq 2}(\sigma)\), this singularity is of codimension \(2(2+1)/2=3\) in \(\Lambda(n)\), which means that the homological boundary of \(\overline{\Lambda^{1}(\sigma)}\) is \(0\). Consequently, \(\overline{\Lambda^{1}(\sigma)}\) is a cycle of codimension \(1\). **Lemma 4.2**.: \(\overline{\Lambda^{1}(\sigma)}\) _is a two-sided cycle in \(\Lambda(n)\)._ Proof.: We must show that there exists a non-vanishing continuous vector field along \(\Lambda^{1}(\sigma)\) transversal to it. The flow \(\lambda\mapsto e^{it}\lambda\) for \(t\in\mathbb{R}\) on \(\Lambda\) produces an infinitesimal generator which, along \(\Lambda^{1}(\sigma)\), will be the desired vector field. On a chart \(\varphi_{K}(\operatorname{Sym}(n,\mathbb{R}))\) we have \[\lambda(t)=J_{K}\lambda_{S(t)}=e^{it}J_{K}\lambda_{S(0)}\implies\lambda_{S(t) }=e^{it}\lambda_{S(0)}.\] This means that the column vectors of the \(2n\times n\) matrices \[\begin{bmatrix}I\\ S(t)\end{bmatrix},\quad\begin{bmatrix}\cos tI&-\sin tI\\ \sin tI&\cos tI\end{bmatrix}\begin{bmatrix}I\\ S(0)\end{bmatrix}\] span the same subspace, hence there exists a curve \(G(t)\in\operatorname{GL}(n,\mathbb{R})\) such that \[\begin{bmatrix}I\\ S(t)\end{bmatrix}=\begin{bmatrix}\cos tI&-\sin tI\\ \sin tI&\cos tI\end{bmatrix}\begin{bmatrix}I\\ S(0)\end{bmatrix}G(t).\] This in turn implies \[S(t)=\frac{\sin tI+\cos tS(0)}{\cos tI-\sin tS(0)}, \tag{15}\] so that \(S^{\prime}(0)=I+S(0)^{2}=I+S(0)S(0)^{t}\). If \(\lambda(0)\in\Lambda^{1}(\sigma)\) and \(K=\{\kappa\}\), then \(S^{\prime}_{\kappa\kappa}(0)\geq 1\), so that the flow is indeed transversal to \(\Lambda^{1}(\sigma)\). Hence \(\Lambda^{\geq 1}(\sigma)\) is two-sided and a positive orientation can be given by the flow \(e^{it}\lambda\) With this, we can properly define the index of intersection \(\operatorname{Ind}(\gamma)\) of an oriented curve \(\gamma:[a,b]\to\Lambda\) with the cycle \(\Lambda^{\geq 1}(\sigma)\) when \(\gamma(a),\gamma(b)\in\Lambda^{0}(\sigma)\), which is invariant up to homotopy of \(\gamma\) fixing its endpoints. This is because we can complete \(\gamma\) to a closed curve by joining \(\gamma(b)\) to \(\gamma(a)\) through any path in \(\Lambda^{0}(\sigma)\), since it is a simply connected open set. We show that the index of intersection and \(\alpha\) coincide: **Proposition 4.3**.: \(\operatorname{Ind}(\gamma)=\alpha(\gamma)\) _for all \([\gamma]\in\pi_{1}(\Lambda)\)._ Proof.: It suffices to prove the equality for a specific closed curve, since \(\alpha\) is a generator for \(H^{1}(\Lambda;\mathbb{Z})\). We take \(\gamma\) to be the closed curve \(e^{it}\lambda\) for \(0\leq t\leq\pi\), where \(\lambda\in\Lambda\) is to be chosen. For almost all \(\lambda\in\Lambda\), the curve \(e^{it}\lambda\) does not pass through \(\overline{\Lambda^{2}(\sigma)}\). So we may take such \(\lambda=\lambda_{S}\in\Lambda^{0}(\sigma)\) and such that \(S\) has nonzero and pairwise distinct eigenvalues. Then \(S(0)=S(\pi)=S\), and at the points where \(e^{it}\lambda\) intersects \(\Lambda^{1}(\sigma)\), it will do so transversally and positively. By (15), these points of intersection correspond to the values of \(t\in(0,\pi)\) for which \[\det(\cos tI-\sin tS)=(-\sin t)^{n}\det(S-\cot tI)=0.\] For \(t\in(0,\pi)\), \(\cot t\) parametrizes \(\mathbb{R}\) once, so the determinant vanishes exactly for the \(n\) distinct real eigenvalues of \(S\). 
This means that \(\operatorname{Ind}(\gamma)=n\). As for \(\alpha(\gamma)\), we have \[\operatorname{Det}^{2}e^{it}\lambda_{S}=e^{2nit}\operatorname{Det}^{2}\lambda_{S},\] which winds around the circle \(n\) times for \(0\leq t\leq\pi\). So \(\alpha(\gamma)=n=\operatorname{Ind}(\gamma)\), and \(\alpha=\operatorname{Ind}\) for general closed curves.

## 5. On The Symplectic Flow

We return to the curves on \(\Lambda(n)\) given by a symplectic flow of the form of the Jacobi equation, so that we may calculate their intersection numbers with \(\Lambda^{\geq 1}(\sigma)\). It is important to note that these intersections will not in general be transversal, that is, occurring transversally at the principal stratum \(\Lambda^{1}(\sigma)\); and even though we can perturb the curve to a homotopic one that does so, we may lose the information on the multiplicities of the intersections. Fortunately, these intersections will still be non-degenerate in a precise sense, and we can adequately describe their contributions to the index of intersection.

For \(t\in[a,b]\) and any Lagrangian subspace \(\tau\in\Lambda\), let \[\lambda(t)\coloneqq\{v(t)\in\mathbb{R}^{2n}\mid v(a)\in\tau,\ v^{\prime}(t)=A(t)v(t)\},\tag{16}\] where \[A(t)=\begin{bmatrix}0&I\\ R(t)&0\end{bmatrix}\in\mathfrak{sp}(2n),\ R(t)\in\operatorname{Sym}(n,\mathbb{R}).\]

**Lemma 5.1**.: \(\lambda(t)\) _intersects \(\Lambda^{\geq 1}(\sigma)\) finitely many times._

Proof.: Suppose that, at a time \(t_{0}\in[a,b]\), we have that \(\dim(\lambda(t_{0})\cap\sigma)=k\), that is, \(\lambda(t_{0})\in\Lambda^{k}(\sigma)\). Then by Lemma 3.1 there exists \(K\subseteq\{1,\ldots,n\}\), which we may assume to be \(\{1,\ldots,k\}\), such that \(\lambda(t_{0})\in\Lambda^{0}(\sigma_{K})\). Since this is an open set, we may assume also that for \(t\) close to \(t_{0}\) we have \(\lambda(t)\) contained in the chart \(\Lambda^{0}(\sigma_{K})\). We define the curve of matrices \(S(t)=\varphi_{K}^{-1}\lambda(t)\in\operatorname{Sym}(n,\mathbb{R})\), where explicitly \(\lambda(t)=J_{K}\lambda_{S(t)}=\operatorname{colsp}L(t)\) for \[L(t)=\begin{bmatrix}-S_{1}(t)&-S_{2}(t)\\ 0&I\\ I&0\\ S_{3}(t)&S_{4}(t)\end{bmatrix},\] and such that \(S_{1}(t_{0})=0_{k\times k}\). Let \(\Phi(t)\) be the fundamental matrix for the flow \(v^{\prime}(t)=A(t)v(t)\) such that \[\begin{cases}\Phi^{\prime}(t)=A(t)\Phi(t),\\ \Phi(t_{0})=I_{2n\times 2n};\end{cases}\tag{17}\] then \[\lambda(t)=\operatorname{colsp}(L(t))=\operatorname{colsp}\left(\Phi(t)L(t_{0})\right)=\Phi(t)\lambda(t_{0}).\] Since \(L(t)\) and \(\Phi(t)L(t_{0})\) have the same column space, there exists a curve \(G(t)\) in \(\operatorname{GL}_{n}(\mathbb{R})\) such that \(G(t_{0})=I_{n\times n}\) and \[L(t)=\Phi(t)L(t_{0})G(t).\tag{18}\] We differentiate the expression above at \(t=t_{0}\): \[L^{\prime}(t_{0})=\Phi^{\prime}(t_{0})L(t_{0})G(t_{0})+\Phi(t_{0})L(t_{0})G^{\prime}(t_{0})\] \[=A(t_{0})L(t_{0})+L(t_{0})G^{\prime}(t_{0}).
\tag{19}\] By also considering \[R(t) =\begin{bmatrix}R_{1}(t)_{k\times k}&R_{2}(t)_{k\times(n-k)}\\ R_{3}(t)_{(n-k)\times k}&R_{4}(t)_{(n-k)\times(n-k)}\end{bmatrix},\] \[G(t) =\begin{bmatrix}G_{1}(t)_{k\times k}&G_{2}(t)_{k\times(n-k)}\\ G_{3}(t)_{(n-k)\times k}&G_{4}(t)_{(n-k)\times(n-k)}\end{bmatrix},\] we may expand the equation (19) in matrix form: \[\begin{bmatrix}-S^{\prime}_{1}&-S^{\prime}_{2}\\ 0&0\\ S^{\prime}_{3}&S^{\prime}_{4}\\ 0&0\end{bmatrix}=\begin{bmatrix}I-S_{2}G^{\prime}_{3}&-S_{2}G^{\prime}_{4}\\ S_{3}+G^{\prime}_{3}&S_{4}+G^{\prime}_{4}\\ G^{\prime}_{1}&-R_{1}S_{2}+R_{2}+G^{\prime}_{2}\\ S_{3}G^{\prime}_{1}+S_{4}G^{\prime}_{3}&-R_{3}S_{2}+R_{4}+S_{3}G^{\prime}_{2}+ S_{4}G^{\prime}_{4}\end{bmatrix},\] where all the matrices above are evaluated at \(t_{0}\). Then \(G^{\prime}_{3}(t_{0})=-S_{3}(t_{0})=-S_{2}(t_{0})^{\intercal}\) and \[S^{\prime}_{1}(t_{0})=-I-S_{2}(t_{0})S_{2}(t_{0})^{\intercal},\] which is negative definite, and all of its eigenvalues are \(\leq-1\). Since \(S_{1}(t_{0})=0\), this implies that for \(t\) close to \(t_{0}\) and \(t<t_{0}\), the eigenvalues of \(S_{1}(t)\) are all positive, and for \(t>t_{0}\), they are all negative. In particular, for \(t\neq t_{0}\), \(S_{1}(t)\) has trivial kernel, so \(\lambda(t)\in\Lambda^{0}(\sigma)\). Thus intersections of the curve with \(\Lambda^{\geq 1}(\sigma)\) are discrete, and since the interval is compact, they are finite. **Theorem 5.2**.: _On the conditions of the previous lemma, if we also assume that \(\lambda(a),\lambda(b)\in\Lambda^{0}(\sigma)\), then_ \[\operatorname{Ind}(\lambda)=\sum_{t\in(a,b)}\dim(\lambda(t)\cap\sigma), \tag{20}\] _where the sum above has finitely many non-zero terms._ Proof.: It suffices to show that at each intersection of \(\lambda\) with \(\Lambda^{\geq 1}(\sigma)\), the contribution to the intersection number is given by the \(k\) such that the intersection is on the stratum \(\Lambda^{k}(\sigma)\). As before, if \(\lambda(t_{0})\in\Lambda^{k}(\sigma)\), then locally \(\lambda(t)\in\Lambda^{0}(\sigma_{K})\) for \(K\subseteq\{1,\dots,n\}\), which we may assume to be \(\{1,\dots,k\}\), and \(S(t)=\varphi_{K}^{-1}(\lambda(t))\). Since, for \(t_{1}<t_{0}\), \(S(t_{1})\in\operatorname{Sym}(n,\mathbb{R})\cap\varphi_{K}^{-1}(\Lambda^{0}( \sigma))\) and all its eigenvalues are positive, we can find a path joining \(S(t_{1})\) to the matrix \[E_{1}=\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\] that avoids \(\varphi_{K}^{-1}(\Lambda^{\geq 1}(\sigma))\). This is done considering a diagonalization \(S_{1}(t_{1})=MDM^{-1}\), where \(M\in\operatorname{O}(n)\) and \(D\) is diagonal with positive eigenvalues, and simultaneously deforming \(D\) to the above matrix and \(M\) either to the identity or to a simple reflection about the \(x^{1}\) axis, whether the determinant of \(M\) is \(1\) or \(-1\). All the other entries are taken to be \(0\) through a linear homotopy. Similarly, for \(t_{2}>t_{0}\), we can find a path joining \(S(t_{2})\) to the matrix \[E_{2}=-E_{1}=\begin{bmatrix}-I&0\\ 0&0\end{bmatrix}\] which avoids \(\varphi_{K}^{-1}(\Lambda^{\geq 1}(\sigma))\). Now we consider the curve \(\eta:[-1,1]\to\Lambda(n)\) given by \(\eta(t)=\varphi_{K}(T(t))\), where \[T(t)=\begin{bmatrix}-tI&0\\ 0&0\end{bmatrix}. \tag{21}\] This curve will intersect \(\Lambda^{\geq 1}(\sigma)\) only at the value \(t=0\). 
Furthermore, the Lagrangian subspaces at the endpoints of the curve are \[\eta(-1)=J_{K}\operatorname{cosp}\begin{bmatrix}I&0\\ 0&I\\ I&0\\ 0&0\end{bmatrix}=\operatorname{cosp}\begin{bmatrix}-I&0\\ 0&I\\ I&0\\ 0&0\end{bmatrix}=\operatorname{cosp}\begin{bmatrix}I&0\\ 0&I\\ -I&0\\ 0&0\end{bmatrix}=\varphi(E_{2}),\] and analogously \(\eta(1)=\varphi(E_{1})\). We connect the Lagrangian subspaces \(\eta(1)\) and \(\eta(-1)\) through the same parametrization in (21), but on \(\varphi^{-1}(\Lambda^{0}(\sigma))\), a different chart. The curve \(\mu(t)=\varphi(T(t))\) defined on \([-1,1]\) is such that \(\mu(-1)=\eta(1)\), \(\mu(1)=\eta(-1)\) and \(\mu\) is contained in \(\Lambda^{0}(\sigma)\). Concatenating both curves at their endpoints, we form a simple closed curve that intersects \(\Lambda^{\geq 1}(\sigma)\) at only one point, and we can use the parametrizations to calculate its index of intersection. Explicitly, by the formula in proposition 4.1, \[\operatorname{Det}^{2}\eta(t)=\operatorname{Det}^{2}J_{K}\lambda_{T(t)}=i^{2k }\left(\frac{1-it}{1+it}\right)^{k},\] which winds around the circle \(k/2\) times for \(t\in[-1,1]\), and \[\operatorname{Det}^{2}\mu(t)=\operatorname{Det}^{2}\lambda_{T(t)}=\left(\frac{ 1-it}{1+it}\right)^{k},\] which further winds around the circle \(k/2\) times. Then \(\alpha(\mu*\eta)=k\), and it coincides with the index of intersection with \(\Lambda^{\geq 1}(\sigma)\). More importantly, it does not depend on how we complete the curve \(\eta\) through \(\Lambda^{0}(\sigma)\), and is invariant under homotopies. Since \(\Lambda^{0}(\sigma_{K})\) is simply connected, we can find a homotopy between the curve \(\lambda(t)\) on \([t_{1},t_{2}]\) and \(\eta\) which preserves the intersection number, as the path joining \(\lambda(t_{i})\) to \(\eta(E_{i})\) does not intersect \(\Lambda^{\geq 1}(\sigma)\). Finally, this implies that each intersection point \(\lambda(t_{0})\in\Lambda^{k}(\sigma)\) contributes exactly \(k\) to the intersection number of the curve \(\lambda(t)\) with \(\Lambda^{\geq 1}(\sigma)\). ## 6. Final Steps We return to the Lagrangian subspaces \(\sigma_{\lambda}(t)\) as defined in (7) in order to prove the equality (8). We may consider \(\lambda\in[\lambda_{0},0]\) for some \(\lambda_{0}<0\), where for \(\mu\leq\lambda_{0}\), we have \(\sigma_{\mu}(t)\cap\sigma=\{0\}\). The map \(\sigma_{\lambda}(t)\) is smooth on both variables, and the image of \(\sigma:[a,b]\times[\lambda_{0},0]\) forms a homological rectangle: In principle, we know how to calculate the intersection number of the horizontal and vertical sides of this rectangle with \(\Lambda^{\geq 1}(\sigma)\), since they are given by a symplectic flow, varying either the term \(R(t)\) or \(-\lambda I\) in (6). However, \(\sigma_{0}(a)\notin\Lambda^{0}(\sigma)\), and possibly \(\sigma_{0}(b)\notin\Lambda^{0}(\sigma)\). To proceed, we consider curves homotopic to these edges for which we can apply theorem 5.2. Note that \(\sigma_{\lambda}(a)=\sigma\) for all \(\lambda\in[\lambda_{0},0]\). Since \(\sigma\in\Lambda^{n}(\sigma)\), we have that \(\sigma_{\lambda}(t)\in\Lambda^{0}(\sigma_{N})\) for all \(\lambda\) and for \(t\) close to \(a\), where \(N=\{1,\ldots,n\}\). Then \(\sigma_{\lambda}(t)=J_{N}\lambda_{S_{\lambda(t)}}=J\lambda_{S_{\lambda(t)}}\), and for \(t\) closer to \(a\), the matrices \(S_{\lambda}(t)\) all have negative eigenvalues. 
This means that we can find \(a^{\prime}\in(a,b)\) such that, for \(t\in(a,a^{\prime}]\) and all \(\lambda\), we have \(\sigma_{\lambda}(t)\in\Lambda^{0}(\sigma)\). The edge \(\sigma_{.}(a)\equiv\sigma\) is then homotopic to \(\sigma_{.}(a^{\prime})\) and has the same intersection number, which is \(0\). Similarly, if \(\sigma_{\lambda}(b)\in\Lambda^{k}(\sigma)\) for some \(k\geq 1\), then for \(t\) close to \(b\) we have \(\sigma_{0}(t)\in\Lambda^{0}(\sigma)\), and in the chart which \(\sigma_{0}(b)\) belongs to, the corresponding \(k\times k\) matrix has all negative eigenvalues. This is the same for \(\sigma_{\lambda}(b)\) when \(\lambda\) is close to \(0\), so we may take \(b^{\prime}\in(a,b)\) and \(\lambda^{\prime}\in(\lambda_{0},0)\) such that \(\sigma_{0}(b^{\prime})\) and \(\sigma_{\lambda^{\prime}}(b)\) are joined by a homotopic path in \(\Lambda^{0}(\sigma)\). This new loop \(\eta\) is still contractible, so \(\alpha(\eta)=\operatorname{Ind}(\eta)=0\), and we can calculate its intersection number with \(\Lambda^{\geq 1}(\sigma)\), being \[\sum_{t\in(a^{\prime},b^{\prime})}\dim(\sigma_{0}(t)\cap\sigma)-\sum_{\lambda \in(\lambda_{0},\lambda^{\prime})}\dim(\sigma_{\lambda}(b)\cap\sigma)=0.\] Since the sum indexed over \((a^{\prime},b^{\prime})\) and \((\lambda_{0},\lambda^{\prime})\) is the same as over \((a,b)\) and \(\lambda<0\), we have that \[\sum_{t\in(a,b)}\dim(\sigma_{0}(t)\cap\sigma)=\sum_{\lambda<0}\dim(\sigma_{ \lambda}(b)\cap\sigma). \tag{22}\] In particular, we now know that the number of negative eigenvalues with multiplicites of the Sturm-Liouville problem (3) is finite, given by (22). We finally prove this number is the index of \(H_{b}\): **Proposition 6.1**.: \[\operatorname{ind}(H_{b})=\sum_{\lambda<0}\operatorname{nul}(H_{b}-\lambda I).\] Proof.: The solutions of (3) in \(\Gamma_{0}\) for different \(\lambda\) are \(H_{b}\)-orthogonal, and the direct sum of the eigenspaces for the negative eigenvalues form a subspace on which \(H_{b}\) is negative definite. If \(\operatorname{ind}(H_{b})\) were bigger than the number of negative eigenvalues, there would be a finite-dimensional subspace \(V\subseteq\Gamma_{0}\) on which \(H_{b}\) is negative-definite and whose dimension is greater than this number. On it, the bilinear symmetric form \[\langle\langle X,Y\rangle\rangle\coloneqq\int_{a}^{b}\langle X,Y\rangle ds\] defines an inner product, so there exists a self-adjoint linear operator \(P:V\to V\) such that \[H_{b}(X,Y)=\langle\langle PX,Y\rangle\rangle=\int_{a}^{b}\langle PX,Y\rangle ds.\] Evidently \(P\) coincides with \(L_{0}[X]=-X^{\prime\prime}+RX\) on \(V\), and by diagonalizing \(P\) in \(V\), we have a basis of orthonormal eigenvectors with corresponding negative eigenvalues. But this would imply that there are more negative eigenvalues counted with multiplicity than previously accounted for, a contradiction, so the equality in the proposition holds. With this last step, we have proved all the identities in theorem 2.1, obtaining the desired result.
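To see the chain of equalities in Theorem 2.1 in action, here is a small numerical experiment of our own (not from the paper) in the scalar case \(n=1\) with constant \(R=-1\) on \([0,\,2.5\pi]\), so the Jacobi equation is \(x^{\prime\prime}=-x\): the conjugate values in the open interval are \(\pi\) and \(2\pi\), and a finite-difference discretization of the index form \(H_{b}\) with Dirichlet boundary conditions has exactly two negative eigenvalues, matching the index.

```python
import numpy as np

# Illustration (ours, not from the paper): Morse index vs. conjugate points in
# the scalar case n = 1 with constant R = -1 on [a, b] = [0, 2.5*pi], so the
# Jacobi equation is x'' = -x and, in the convention above, the index form is
# H_b(x, x) = \int_a^b (x')^2 - x^2 dt with x(a) = x(b) = 0.
a, b = 0.0, 2.5 * np.pi

# Conjugate values: zeros in (a, b) of j(t) = sin(t - a), the solution of the
# Jacobi equation with j(a) = 0, j'(a) = 1.  Here: pi and 2*pi.
conjugate_values = [k * np.pi for k in range(1, 10) if a < k * np.pi < b]

# Morse index: number of negative eigenvalues of the finite-difference
# discretization of L = -d^2/dt^2 - 1 with Dirichlet boundary conditions.
m = 1000                              # interior grid points
h = (b - a) / (m + 1)
diag = np.full(m, 2.0 / h**2 - 1.0)   # -d^2/dt^2 stencil plus R = -1
off = np.full(m - 1, -1.0 / h**2)
L = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
morse_index = int(np.sum(np.linalg.eigvalsh(L) < 0))

print("conjugate values in (a, b):", conjugate_values)  # [pi, 2*pi]
print("Morse index of H_b:", morse_index)               # 2
```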
2304.00305
Predictive Heterogeneity: Measures and Applications
As an intrinsic and fundamental property of big data, data heterogeneity exists in a variety of real-world applications, such as precision medicine, autonomous driving, financial applications, etc. For machine learning algorithms, the ignorance of data heterogeneity will greatly hurt the generalization performance and the algorithmic fairness, since the prediction mechanisms among different sub-populations are likely to differ from each other. In this work, we focus on the data heterogeneity that affects the prediction of machine learning models, and firstly propose the \emph{usable predictive heterogeneity}, which takes into account the model capacity and computational constraints. We prove that it can be reliably estimated from finite data with probably approximately correct (PAC) bounds. Additionally, we design a bi-level optimization algorithm to explore the usable predictive heterogeneity from data. Empirically, the explored heterogeneity provides insights for sub-population divisions in income prediction, crop yield prediction and image classification tasks, and leveraging such heterogeneity benefits the out-of-distribution generalization performance.
Jiashuo Liu, Jiayun Wu, Bo Li, Peng Cui
2023-04-01T12:20:06Z
http://arxiv.org/abs/2304.00305v1
# Predictive Heterogeneity: Measures and Applications

###### Abstract

As an intrinsic and fundamental property of big data, data heterogeneity exists in a variety of real-world applications, such as precision medicine, autonomous driving, financial applications, etc. For machine learning algorithms, the ignorance of data heterogeneity will greatly hurt the generalization performance and the algorithmic fairness, since the prediction mechanisms among different sub-populations are likely to differ from each other. In this work, we focus on the data heterogeneity that affects the prediction of machine learning models, and firstly propose the _usable predictive heterogeneity_, which takes into account the model capacity and computational constraints. We prove that it can be reliably estimated from finite data with probably approximately correct (PAC) bounds. Additionally, we design a bi-level optimization algorithm to explore the usable predictive heterogeneity from data. Empirically, the explored heterogeneity provides insights for sub-population divisions in income prediction, crop yield prediction and image classification tasks, and leveraging such heterogeneity benefits the out-of-distribution generalization performance.

## 1 Introduction

Big Data provides great opportunities for the growth and advancement of Artificial Intelligence (AI) systems. Nowadays, AI has emerged as a ubiquitous tool that permeates almost every aspect of the contemporary technological landscape, making it an indispensable asset in various fields and industries, such as scientific discoveries, policy-making, healthcare, drug discovery, and so on. However, along with the widespread deployment of AI systems, the reliability, fairness, and stability of AI algorithms have been increasingly doubted. For example, in sociological research (Tipton et al., 2020), studies have shown that even for carefully designed randomized trials, there are huge selection biases, making scientific discoveries unreliable; in disease diagnosis, studies (Wynants et al., 2020; Roberts et al., 2021) have found hundreds of existing AI algorithms fail to detect and prognosticate for COVID-19 using chest radiographs and CT scans; in social welfare, decision support AI systems for credit loan applications are found to exhibit biases against certain demographic groups (Hardt et al., 2016; Verma, 2019); in various machine learning tasks, algorithms are faced with severely poor generalization performance under distributional shifts (Shen et al., 2021), etc. Another well-known example is Simpson's paradox, which brings false discoveries to social research (Wagner, 1982; Hernan et al., 2011).

In order to mitigate the barriers that inhibit the deployment of AI systems in crucial, high-stakes applications, numerous researchers have taken recourse to the established research paradigm of model-centric AI, whereby they endeavor to develop innovative algorithms aimed at addressing these challenges. However, in contemporary discourse about machine learning, it is increasingly evident that the challenges faced by algorithms extend beyond their intrinsic properties and extend to the nature of the data utilized in training these models. Specifically, the heterogeneity of the data employed has emerged as a pivotal factor underlying these issues.
The concept of data heterogeneity encompasses the _diversity_ that exists within data, including _variations in data sources, generation mechanisms, sub-populations_, and _data structures_. Failure to account for such diversity in AI systems can lead to overemphasis on patterns found only in dominant sub-populations or groups, thereby resulting in false scientific discoveries, unreliable and inequitable decision-making, and poor generalization performance when confronted with new data. Given the high-stakes scenarios in which trustworthy AI is required, addressing the problem of data heterogeneity - an inherent property of big data - should receive increased attention. Moreover, in the current era of big models, where model development is approaching its limits, _researchers have huge opportunities to explore the intricacies of big data_, thereby facilitating the development of AI in parallel with the advancement of AI models and algorithms. Despite its widespread existence, due to its complexity, data heterogeneity has not converged to a uniform formulation so far, and has different meanings among different fields. Li and Reynolds (1995) define the heterogeneity in _ecology_ based on the system property and complexity or variability. Rosenbaum (2005) views the uncertainty of the potential outcome as unit heterogeneity in observational studies in _economics_. More recently, in machine learning, several works of _causal learning_(Peters et al., 2016; Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Liu et al., 2021; Creager et al., 2021) and _robust learning_(Sagawa et al., 2019; Liu et al., 2022) leverage heterogeneous data from multiple environments to improve the out-of-distribution generalization ability. However, previous works have not provided a precise definition or sound quantification. In this work, targeting at the prediction task in machine learning, from the perspective of _prediction power_, we propose the predictive heterogeneity, a _new type_ of data heterogeneity. From a machine learning perspective, a major concern is the potential adverse effects of data heterogeneity on prediction accuracy. In this study, we propose predictive heterogeneity, which refers to the heterogeneity of data that impacts the performance of machine learning models. Our goal is to facilitate the development of machine learning systems by addressing this issue. To this end, we introduce a precise definition of predictive heterogeneity that quantifies the maximal additional predictive information that can be obtained by dividing the entire data distribution into sub-populations. This measure takes into account the model capacity and computational constraints and can be accurately estimated from finite samples with probably approximately correct (PAC) bounds. We conduct a theoretical analysis of the properties of this measure and examine it under typical scenarios of data heterogeneity. In addition, we propose the information maximization (IM) algorithm to empirically explore the predictive heterogeneity within data. Through our empirical investigations, we find that the explored heterogeneity is interpretable and provides valuable insights for sub-population divisions in various fields, such as agriculture, sociology, object recognition, and healthcare. Moreover, the identified sub-populations can be utilized to identify features related to Covid-19 mortality and enhance the out-of-distribution generalization performance of machine learning models. 
This has been confirmed through experiments with both simulated and real-world data. In conclusion, our study contributes to the development of machine learning systems by providing a precise definition of predictive heterogeneity and a reliable measure for its estimation. Our findings demonstrate the potential of the IM algorithm for exploring predictive heterogeneity, assisting scientific discoveries and improving the generalization performance of machine learning models in real-world applications. ## 2 Preliminaries on Mutual Information and Predictive \(\mathcal{V}\)-Information In this section, we briefly introduce the mutual information and the predictive \(\mathcal{V}\)-information (Xu et al., 2020), which are the preliminaries of our proposed predictive heterogeneity. **Notations.** For a probability triple \((\mathbb{S},\mathcal{F},\mathbb{P})\), define random variables \(X:\mathbb{S}\to\mathcal{X}\) and \(Y:\mathbb{S}\to\mathcal{Y}\), where \(\mathcal{X}\) is the covariate space and \(\mathcal{Y}\) is the target space. Accordingly, \(x\in\mathcal{X}\) denotes the covariates, and \(y\in\mathcal{Y}\) denotes the target. Denote the set of random categorical variables as \(\mathcal{C}=\{C:\mathbb{S}\to\mathbb{N}|\text{supp}(C)\text{ is finite}\}\). Additionally, \(\mathcal{P}(\mathcal{X}),\mathcal{P}(\mathcal{Y})\) denote the sets of all probability measures over the Borel algebras on the spaces \(\mathcal{X},\mathcal{Y}\) respectively. \(H(\cdot)\) denotes the Shannon entropy of a discrete random variable and the differential entropy of a continuous variable, and \(H(\cdot|\cdot)\) denotes the conditional entropy of two random variables. In information theory, the mutual information of two random variables \(X\), \(Y\) measures the dependence between the two variables, which quantifies the reduction of entropy for one variable when observing the other: \[\mathbb{I}(X;Y)=H(Y)-H(Y|X). \tag{1}\] It is known that the mutual information is associated with the predictability of \(Y\) (Cover and Thomas, 1991). However, the standard definition of mutual information unrealistically assumes unbounded computational capacity of the predictor, rendering it hard to estimate, especially in high dimensions. To mitigate this problem, Xu et al. (2020) propose the predictive \(\mathcal{V}\)-information under realistic computational constraints, where the predictor is only allowed to use models in the predictive family \(\mathcal{V}\) to predict the target variable \(Y\). **Definition 1** (Predictive Family (Xu et al., 2020)): _Let \(\Omega=\{f:\mathcal{X}\cup\{\emptyset\}\to\mathcal{P}(\mathcal{Y})\}\). We say that \(\mathcal{V}\subseteq\Omega\) is a predictive family if it satisfies:_ \[\forall f\in\mathcal{V},\ \ \forall P\in\mathrm{range}(f),\ \ \exists f^{\prime}\in \mathcal{V},\ \ \ \text{s.t.}\ \forall x\in\mathcal{X},f^{\prime}[x]=P,f^{\prime}[\emptyset]=P. \tag{2}\] A predictive family contains all the predictive models that are allowed to be used, which encodes computational or statistical constraints. The additional condition in Equation 2 means that the predictor can always ignore the input covariates (\(x\)) if it chooses to (only use \(\emptyset\)). **Definition 2** (Predictive \(\mathcal{V}\)-information (Xu et al., 2020)): _Let \(X,Y\) be two random variables taking values in \(\mathcal{X}\times\mathcal{Y}\) and \(\mathcal{V}\) be a predictive family._ 
_The predictive \(\mathcal{V}\)-information from \(X\) to \(Y\) is defined as:_ \[\mathbb{I}_{\mathcal{V}}(X\to Y)=H_{\mathcal{V}}(Y|\emptyset)-H_{\mathcal{V}}( Y|X), \tag{3}\] where \(H_{\mathcal{V}}(Y|\emptyset)\), \(H_{\mathcal{V}}(Y|X)\) are the predictive conditional \(\mathcal{V}\)-entropies, defined as: \[H_{\mathcal{V}}(Y|X) =\inf_{f\in\mathcal{V}}\mathbb{E}_{x,y\sim X,Y}[-\log f[x](y)]. \tag{4}\] \[H_{\mathcal{V}}(Y|\emptyset) =\inf_{f\in\mathcal{V}}\mathbb{E}_{y\sim Y}[-\log f[\emptyset](y)]. \tag{5}\] Note that \(f\in\mathcal{V}\) is a mapping \(\mathcal{X}\cup\{\emptyset\}\rightarrow\mathcal{P}(\mathcal{Y})\), so \(f[x]\in\mathcal{P}(\mathcal{Y})\) is a probability measure on \(\mathcal{Y}\), and \(f[x](y)\in\mathbb{R}\) is the density evaluated at \(y\in\mathcal{Y}\). \(H_{\mathcal{V}}(Y|\emptyset)\) is also denoted as \(H_{\mathcal{V}}(Y)\). Compared with the mutual information, the predictive \(\mathcal{V}\)-information restricts the computational power and is much easier to estimate in high-dimensional cases. When the predictive family \(\mathcal{V}\) contains all possible models, i.e. \(\mathcal{V}=\Omega\), it is proved that \(\mathbb{I}_{\mathcal{V}}(X\to Y)=\mathbb{I}(X;Y)\) (Xu et al., 2020). ## 3 Predictive Heterogeneity In this paper, from the machine learning perspective, we quantify the data heterogeneity that affects decision making, named Predictive Heterogeneity, which is easy to integrate with machine learning algorithms and could help analyze big data and build more rational algorithms. ### Interaction Heterogeneity To formally define the predictive heterogeneity, we begin with the formulation of the interaction heterogeneity, defined as follows. **Definition 3** (Interaction Heterogeneity): _Let \(X\), \(Y\) be random variables taking values in \(\mathcal{X}\times\mathcal{Y}\). Denote the set of random categorical variables as \(\mathcal{C}\), and take its subset \(\mathscr{E}\subseteq\mathcal{C}\). Then \(\mathscr{E}\) is an environment set iff there exists \(\mathcal{E}\in\mathscr{E}\) such that \(X,Y\perp\!\!\!\perp\mathcal{E}\). \(\mathcal{E}\in\mathscr{E}\) is called an environment variable. The interaction heterogeneity between \(X\) and \(Y\) w.r.t. the environment set \(\mathscr{E}\) is defined as:_ \[\mathcal{H}^{\mathscr{E}}(X,Y)=\sup_{\mathcal{E}\in\mathscr{E}}\mathbb{I}(Y;X| \mathcal{E})-\mathbb{I}(Y;X). \tag{6}\] Each environment variable \(\mathcal{E}\) represents a stochastic 'partition' of \(\mathcal{X}\times\mathcal{Y}\), and the condition for the environment set implies that there exists such a stochastic partition that the joint distribution of \(X,Y\) is preserved in each environment. In information theory, \(\mathbb{I}(Y;X|\mathcal{E})-\mathbb{I}(Y;X)\) is called the _interaction information_, which measures the influence of the environment variable \(\mathcal{E}\) on the amount of information shared between the target \(Y\) and the covariate \(X\). The _interaction heterogeneity_ defined in Equation 6 quantifies the _maximal_ additional information that can be gained from involving or uncovering the environment variable \(\mathcal{E}\). Intuitively, a large \(\mathcal{H}^{\mathscr{E}}(X,Y)\) indicates that the predictive power from \(X\) to \(Y\) is enhanced by \(\mathcal{E}\), which means that uncovering the latent sub-population associated with the environment partition \(\mathcal{E}\) will benefit the \(X\to Y\) prediction. 
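To make Definitions 1 to 3 concrete, the sketch below estimates the predictive \(\mathcal{V}\)-entropies of Equations 4 and 5 for a simple linear-Gaussian predictive family with fixed variance, and then evaluates the information gained from one fixed candidate environment variable, with mutual information replaced by the computable \(\mathcal{V}\)-information of Definition 2 (the fully \(\mathcal{V}\)-based quantities are formalized in the next subsection). The toy data, the least-squares family, and all function names are illustrative assumptions rather than part of the paper, and the supremum over environment variables in Equation 6 is not taken here.

```python
import numpy as np

def gaussian_nll(pred, y, sigma=1.0):
    # -log f[x](y) for f[x] = N(pred, sigma^2)
    return 0.5 * (y - pred) ** 2 / sigma ** 2 + 0.5 * np.log(2 * np.pi * sigma ** 2)

def v_entropy(X, y, sigma=1.0):
    # H_V(Y|X): infimum over a linear-Gaussian family with fixed sigma, attained by
    # least squares.  Passing X=None gives H_V(Y|emptyset), where only a constant is fit.
    if X is None:
        pred = np.full_like(y, y.mean())
    else:
        Xb = np.hstack([X, np.ones((len(y), 1))])          # add intercept
        theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        pred = Xb @ theta
    return gaussian_nll(pred, y, sigma).mean()

def v_information(X, y, sigma=1.0):
    # I_V(X -> Y) = H_V(Y|emptyset) - H_V(Y|X), Equation 3
    return v_entropy(None, y, sigma) - v_entropy(X, y, sigma)

def conditional_v_information(X, y, env, sigma=1.0):
    # per-environment V-information weighted by P(E = e)
    return sum(np.mean(env == e) * v_information(X[env == e], y[env == e], sigma)
               for e in np.unique(env))

# Toy data: two latent sub-populations with opposite slopes.
rng = np.random.default_rng(0)
e = rng.integers(0, 2, size=2000)
x = rng.normal(size=(2000, 1))
y = np.where(e == 0, 2.0, -2.0) * x[:, 0] + 0.1 * rng.normal(size=2000)

gain = conditional_v_information(x, y, e) - v_information(x, y)
print(f"information gained by this split: {gain:.3f}")      # clearly positive here
```

In this toy example the pooled least-squares fit is uninformative because the two slopes cancel, while conditioning on the split recovers the per-group fits, so the gain is large; the predictive heterogeneity defined below is the supremum of such gains over admissible environment variables.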
### Predictive Heterogeneity Based on the mutual information, the computation of the interaction heterogeneity is quite hard, since the standard mutual information is notoriously difficult to estimate especially in big data scenarios. Also, even if the mutual information could be accurately estimated, the prediction model may not be able to make good use of it. Inspired by Xu et al. (2020), we raise the _Predictive Heterogeneity_, which measures the interaction heterogeneity that can be captured under computational constraints and affects the prediction of models within the specified predictive family. To begin with, we propose the _Conditional Predictive \(\mathcal{V}\)-information_, which generalizes the predictive \(\mathcal{V}\)-information. **Definition 4** (Conditional Predictive \(\mathcal{V}\)-information): _Let \(X,Y\) be two random variables taking values in \(\mathcal{X}\times\mathcal{Y}\) and \(\mathcal{E}\) be an environment variable. For a predictive family \(\mathcal{V}\), the conditional predictive \(\mathcal{V}\)-information is defined as:_ \[\mathbb{I}_{\mathcal{V}}(X\to Y|\mathcal{E})=H_{\mathcal{V}}(Y|\emptyset, \mathcal{E})-H_{\mathcal{V}}(Y|X,\mathcal{E}), \tag{7}\] _where \(H_{\mathcal{V}}(Y|\emptyset,\mathcal{E})\) and \(H_{\mathcal{V}}(Y|X,\mathcal{E})\) are defined as:_ \[H_{\mathcal{V}}(Y|X,\mathcal{E}) =\mathbb{E}_{e\sim\mathcal{E}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{x,y\sim X,Y|\mathcal{E}=e}[-\log f[x](y)]\right]. \tag{8}\] \[H_{\mathcal{V}}(Y|\emptyset,\mathcal{E}) =\mathbb{E}_{e\sim\mathcal{E}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{y\sim Y|\mathcal{E}=e}[-\log f[\emptyset](y)]\right]. \tag{9}\] Intuitively, the conditional predictive \(\mathcal{V}\)-information measures the weighted average of predictive \(\mathcal{V}\)-information among environments. And here we are ready to formalize the predictive heterogeneity measure. **Definition 5** (Predictive Heterogeneity): _Let \(X\), \(Y\) be random variables taking values in \(\mathcal{X}\times\mathcal{Y}\) and \(\mathscr{E}\) be an environment set. For a predictive family \(\mathcal{V}\), the predictive heterogeneity for the prediction \(X\to Y\) with respect to \(\mathscr{E}\) is defined as:_ \[\mathcal{H}^{\mathscr{E}}_{\mathcal{V}}(X\to Y)=\sup_{\mathcal{E}\in \mathscr{E}}\mathbb{I}_{\mathcal{V}}(X\to Y|\mathcal{E})-\mathbb{I}_{\mathcal{ V}}(X\to Y), \tag{10}\] _where \(\mathbb{I}_{\mathcal{V}}(X\to Y)\) is the predictive \(\mathcal{V}\)-information following from Definition 2._ Leveraging the predictive \(\mathcal{V}\)-information, the predictive heterogeneity defined in Equation 10 characterizes the _maximal additional information_ that _can be used_ by the prediction model when involving the environment variable \(\mathcal{E}\). It restricts the prediction models in \(\mathcal{V}\) and the explored additional information could benefit the prediction performance of the model \(f\in\mathcal{V}\), for which it is named predictive heterogeneity. Next, we present some basic properties of the interaction heterogeneity and predictive heterogeneity. **Proposition 6** (Basic Properties of Predictive Heterogeneity): _Let \(X\), \(Y\) be random variables taking values in \(\mathcal{X}\times\mathcal{Y}\), \(\mathcal{V}\) be a function family, and \(\mathscr{E}\), \(\mathscr{E}_{1}\), \(\mathscr{E}_{2}\) be environment sets._ 1. Monotonicity_: If \(\mathscr{E}_{1}\subseteq\mathscr{E}_{2}\), \(\mathcal{H}_{\mathcal{V}}^{\mathscr{E}_{1}}(X\to Y)\leq\mathcal{H}_{\mathcal{V }}^{\mathscr{E}_{2}}(X\to Y)\)._ 2. 
Nonnegativity_: \(\mathcal{H}_{\mathcal{V}}^{\mathscr{E}}(X\to Y)\geq 0\)._ 3. Boundedness_: For discrete \(Y\), \(\mathcal{H}_{\mathcal{V}}^{\mathscr{E}}(X\to Y)\leq H_{\mathcal{V}}(Y|X)\)._ 4. Corner Case_: If the predictive family \(\mathcal{V}\) is the largest possible predictive family that includes all possible models, i.e. \(\mathcal{V}=\Omega\), we have \(\mathcal{H}^{\mathscr{E}}(X,Y)=\mathcal{H}_{\Omega}^{\mathscr{E}}(X\to Y)\)._ Proofs can be found at Appendix A. For further theoretical properties of predictive heterogeneity, in Section 3.3, we derive its explicit forms under _endogeneity_, a common reflection of data heterogeneity. And we demonstrate in Section 3.4 that our proposed predictive heterogeneity can be empirically estimated with guarantees if the complexity of \(\mathcal{V}\) is bounded (e.g., its Rademacher complexity). ### Theoretical Properties in Linear Cases In this section, we conduct a theoretical analysis of the predictive heterogeneity in multiple linear settings. Specifically, we consider two scenarios: (1) a homogeneous case with independent noises and (2) heterogeneous cases with endogeneity arising from selection bias and hidden variables. By examining these typical settings, we approximate the analytical forms of the proposed measure and draw insightful conclusions that can be generalized to more complex scenarios. Firstly, under a homogeneous case with no data heterogeneity, Theorem 7 proves that our measure is bounded by the scale of label noises (which is usually small) and reduces to 0 in linear case under mild assumptions. It indicates that the predictive heterogeneity is insensitive to independent noises. Notably that in the linear case we only deal with the environment variable satisfying \(X\perp\epsilon|\mathcal{E}\), since in common prediction tasks, the independent noises are unknown and unrealistic to be exploited for the prediction. **Theorem 7** (Homogeneous Case with Independent Noises): _For a prediction task \(X\to Y\) where \(X\), \(Y\) are random variables taking values in \(\mathbb{R}^{n}\times\mathbb{R}\), consider the data generation process as \(Y=g(x)+\epsilon,\epsilon\sim\mathcal{N}(0,\sigma^{2})\) where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is a measurable function. 1) For a function class \(\mathcal{G}\) such that \(g\in\mathcal{G}\), define the function family as \(\mathcal{V}_{\mathcal{G}}=\{f|f[x]=\mathcal{N}(\phi(x),\sigma_{V}^{2}),\phi\in \mathcal{G},\sigma_{V}\in\mathbb{R}^{+}\}\). With an environment set \(\mathscr{E}\), we have \(\mathcal{H}_{\mathcal{V}_{\mathcal{G}}}^{\mathscr{E}}(X\to Y)\leq\pi\sigma^{2}\). 2) Take \(n=1\) and \(g(x)=\beta x\),\(\beta\in\mathbb{R}\). Without loss of generality, assume \(\mathbb{E}[X]=0\) and \(\mathbb{E}[X^{2}]\) exists. Given the function family \(\mathcal{V}_{\sigma}=\{f|f[x]=\mathcal{N}(\theta x,\sigma^{2}),\theta\in \mathbb{R},\sigma\text{ fixed }\}\) and the environment set \(\mathscr{E}=\{\mathcal{E}|\mathcal{E}\in\mathcal{C},|\text{supp}(\mathcal{E})| =2,X\perp\epsilon|\mathcal{E}\}\). We have \(\mathcal{H}_{\mathcal{V}_{\sigma}}^{\mathscr{E}}(X\to Y)=0\). Proofs can be found at Appendix B._ Secondly, we examine the proposed measure under _two typical cases of data heterogeneity_(Fan et al., 2014), named _endogeneity by selection bias_(Heckman, 1979; Winship and Mare, 1992; Cui and Athey, 2022) and _endogeneity with hidden variables_(Fan et al., 2014; Arjovsky et al., 2019). 
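Before turning to these two heterogeneous settings, a quick numerical check of the homogeneous case in Theorem 7 can be sketched as follows. It scores random, data-independent two-way splits (which trivially satisfy \(X\perp\epsilon|\mathcal{E}\)) under a linear-Gaussian family with fixed variance; the data-generating numbers and this restricted search over splits are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def vinfo(x, y, sigma=1.0):
    # I_V(X -> Y) for a linear-Gaussian family with fixed sigma:
    # H_V(Y|emptyset) - H_V(Y|X) = (MSE of constant fit - MSE of least squares) / (2 sigma^2)
    xb = np.hstack([x, np.ones((len(y), 1))])
    theta, *_ = np.linalg.lstsq(xb, y, rcond=None)
    mse_x = np.mean((y - xb @ theta) ** 2)
    mse_0 = np.mean((y - y.mean()) ** 2)
    return (mse_0 - mse_x) / (2 * sigma ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 1))
y = 1.5 * x[:, 0] + rng.normal(scale=0.5, size=5000)       # one homogeneous mechanism

base = vinfo(x, y)
gains = []
for _ in range(200):
    env = rng.integers(0, 2, size=5000)                    # split drawn independently of the data
    cond = sum(np.mean(env == e) * vinfo(x[env == e], y[env == e]) for e in (0, 1))
    gains.append(cond - base)
print(max(gains))                                          # stays near 0, as Theorem 7 predicts
```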
To begin with, in Theorem 8, we consider the prediction task \(X\to Y\) with \(X\), \(Y\) taking values in \(\mathbb{R}^{2}\times\mathbb{R}\). Let \(X=[S,V]^{T}\). The predictive family is specified as: \[\mathcal{V}=\{f|f[x]=\mathcal{N}(\theta_{S}S+\theta_{V}V,\sigma^{2}),\quad \theta_{S},\theta_{V}\in\mathbb{R},\sigma=1\}. \tag{11}\] And the data distribution \(P(X,Y)\) is a mixture of latent sub-populations, which could be formulated by an environment variable \(\mathcal{E}^{*}\in\mathcal{C}\) such that \(P(X,Y)=\sum_{e\in\text{supp}(\mathcal{E}^{*})}P(\mathcal{E}^{*}=e)P(X,Y| \mathcal{E}^{*}=e)\). For each \(e\in\text{supp}(\mathcal{E}^{*})\), \(P(X,Y|\mathcal{E}^{*}=e)\) is the distribution of a homogeneous sub-population. Note that the prediction task is to predict \(Y\) with covariates \(X\), and the sub-population structure is latent. That is, \(P(\mathcal{E}^{*}|X,Y)\) is _unknown_ for models. In the following, we derive the analytical forms of our measure under the one typical case. **Theorem 8** (Endogeneity with Selection Bias): _For the prediction task \(X=[S,V]^{T}\to Y\) with a latent environment variable \(\mathcal{E}^{*}\), the data generation process with selection bias is defined as:_ \[Y=\beta S+f(S)+\epsilon_{Y},\epsilon_{Y}\sim\mathcal{N}(0,\sigma_{Y}^{2}); \quad V=r(\mathcal{E}^{*})f(S)+\sigma(\mathcal{E}^{*})\cdot\epsilon_{V}, \epsilon_{V}\sim\mathcal{N}(0,1), \tag{12}\] _where \(f:\mathbb{R}\to\mathbb{R}\) and \(r,\sigma:\text{supp}(\mathcal{E}^{*})\to\mathbb{R}\) are measurable functions. \(\beta\in\mathbb{R}\). Assume that \(\mathbb{E}[S^{2}]\) is finite, \(\mathbb{E}[f(S)S]=0\) and there exists \(L>1\) such that \(L\sigma^{2}(\mathcal{E}^{*})<r^{2}(\mathcal{E}^{*})\mathbb{E}[f^{2}]\). For the predictive family defined in equation 11 and the environment set \(\mathcal{E}=\mathcal{C}\), the predictive heterogeneity of the prediction task \([S,V]^{T}\to Y\) approximates to:_ \[\mathcal{H}_{\mathcal{V}}^{\mathcal{C}}(X\to Y)\approx\frac{\text{Var}(r_{e}) \mathbb{E}[f^{2}]+\mathbb{E}[\sigma^{2}(\mathcal{E}^{*})]}{\mathbb{E}[r_{e}^ {2}]\mathbb{E}[f^{2}]+\mathbb{E}[\sigma^{2}(\mathcal{E}^{*})]}\mathbb{E}[f^{2 }(S)],\text{error bounded by }\frac{1}{2}\max(\sigma_{Y}^{2},R(r,\sigma,f)). \tag{13}\] _And further we have_ \[R(r(\mathcal{E}^{*}),\sigma(\mathcal{E}^{*}),f) =\mathbb{E}[(\frac{1}{\frac{r^{2}\mathbb{E}[f^{2}]}{\sigma^{2}}+ 1})^{2}]\mathbb{E}[f^{2}]+\mathbb{E}_{\mathcal{E}^{*}}[(\frac{1}{\frac{r}{ \sigma}+\frac{\sigma}{r\mathbb{E}[f^{2}]}})^{2}] \tag{14}\] \[<\mathbb{E}[f^{2}](\frac{1}{(L+1)^{2}}+\frac{1}{L+2+\frac{1}{L}}).\] _Proofs can be found at Appendix C._ Intuitively, the data generation process in Theorem 8 introduces the spurious correlation between the spurious feature \(V\) and the target \(Y\), which varies across different sub-populations (i.e. \(r(\mathcal{E}^{*})\) and \(\sigma(\mathcal{E}^{*})\) varies) and brings about data heterogeneity. Here \(\mathbb{E}[f(S)S]=0\) indicates a model misspecification since there is a nonlinear term \(f(S)\) that could not be inferred by the linear predictive family with the stable feature \(S\). The constant \(L\) characterizes the strength of the spurious correlation between \(V\) and \(Y\). Larger \(L\) means \(V\) could provide more information for prediction. 
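A small sketch of the generation process in Equation 12 may make the setting concrete. The specific choices below, \(f(S)=S^{2}-1\) (so that \(\mathbb{E}[f(S)S]=0\) for standard normal \(S\)), the values of \(r(\mathcal{E}^{*})\) and \(\sigma(\mathcal{E}^{*})\), and the 80/20 mixture of sub-populations, are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def selection_bias_data(n, beta=1.0, sigma_y=0.3, rng=None):
    # Sample (X, Y, E*) following Equation 12 with illustrative constants.
    if rng is None:
        rng = np.random.default_rng(0)
    e = (rng.random(n) < 0.8).astype(int)             # latent environment E*
    r = np.where(e == 1, 2.0, -1.0)                   # r(E*) varies across sub-populations
    sig = np.full(n, 0.3)                             # sigma(E*), kept constant here
    s = rng.normal(size=n)
    f_s = s ** 2 - 1.0                                # f(S) with E[f(S) S] = 0
    y = beta * s + f_s + rng.normal(scale=sigma_y, size=n)
    v = r * f_s + sig * rng.normal(size=n)            # spurious feature V
    return np.column_stack([s, v]), y, e

X, y, e = selection_bias_data(10_000)
# The pooled linear fit leans on V, and the per-environment fits disagree about it.
for label, idx in [("pooled", np.ones(len(y), bool)), ("e*=1", e == 1), ("e*=0", e == 0)]:
    Xb = np.hstack([X[idx], np.ones((idx.sum(), 1))])
    coef, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
    print(label, coef[:2].round(2))                   # coefficients on (S, V) differ by group
```

The disagreement of the fitted coefficient on \(V\) across the two groups is exactly the kind of latent heterogeneity that the approximation in Equation 13 quantifies.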
From the approximation in Equation 13, we can see that our proposed predictive heterogeneity is dominated by two terms: (1) \(\text{Var}[r(\mathcal{E}^{*})]/\mathbb{E}[r^{2}(\mathcal{E}^{*})]\) characterizes the variance of \(r(\mathcal{E}^{*})\) among sub-populations; (2) \(\mathbb{E}[f^{2}(S)]\) reflects the strength of model misspecifications. These two components account for two sources of the data heterogeneity under selection bias, which validates the rationality of our proposed measure. Based on the theorem, it can be inferred that the degree of predictive heterogeneity increases with greater variability of \(r(\mathcal{E}^{*})\) among sub-populations and stronger model misspecifications. In other words, when the sub-populations differ significantly from each other and the model is not accurately specified, the predictive heterogeneity is likely to be larger. Additionally, in Theorem 9, we analyze our measure under endogeneity with hidden variables. In Theorem 9, an anti-causal covariate \(V\) is generated via the causal diagram like \(Y\to V\leftarrow\mathcal{E}^{*}\) with a hidden environment variable \(\mathcal{E}^{*}\). However, since \(\mathcal{E}^{*}\) is omitted from the prediction models, the relationship between \(V\) and \(Y\) is biased, which inhibits the prediction power. **Theorem 9** (Endogeneity with Hidden Variables): _For the prediction task \([S,V]^{T}\to Y\) with a latent environment variable \(\mathcal{E}^{*}\), the data generation process with hidden variables is defined as:_ \[Y=\beta S+f(S)+\epsilon_{Y},\epsilon_{Y}\sim\mathcal{N}(0,\sigma_{Y}^{2}); \quad V=r(\mathcal{E}^{*})(f(S)+\epsilon_{Y})+\sigma(\mathcal{E}^{*})\epsilon_ {V},\epsilon_{V}\sim\mathcal{N}(0,1), \tag{15}\] _where \(f:\mathbb{R}\rightarrow\mathbb{R}\) and \(r,\sigma:\text{supp}(\mathcal{E}^{*})\rightarrow\mathbb{R}\) are measurable functions. \(\beta\in\mathbb{R}\). Assume that \(\mathbb{E}[f(S)S]=0\) and there exists \(L>1\) such that \(L\sigma^{2}(\mathcal{E}^{*})<r^{2}(\mathcal{E}^{*})(\mathbb{E}[f^{2}]+\sigma_ {Y}^{2})\). For the predictive family defined in equation 11 and the environment set \(\mathscr{E}=\mathcal{C}\), the predictive heterogeneity of the prediction task \([S,V]^{T}\to Y\) approximates to:_ \[\mathcal{H}_{V}^{C}(X\to Y)\approx\frac{\text{Var}(r_{e})( \mathbb{E}[f^{2}]+\sigma_{Y}^{2})+\mathbb{E}[\sigma^{2}(\mathcal{E}^{*})]}{ \mathbb{E}[r_{e}^{2}](\mathbb{E}[f^{2}]+\sigma_{Y}^{2})+\mathbb{E}[\sigma^{2} (\mathcal{E}^{*})]}(\mathbb{E}[f^{2}(S)]+\sigma_{Y}^{2}), \tag{16}\] \[\text{error bounded by }\frac{1}{2}\max(\sigma_{Y}^{2},R(r, \sigma,f)).\] _And further we have:_ \[R(r(\mathcal{E}^{*}),\sigma(\mathcal{E}^{*}),f) =\mathbb{E}[(\frac{1}{\frac{r^{2}(\mathbb{E}[f^{2}]+\sigma_{Y}^{ 2})}{\sigma^{2}}+1})^{2}](\mathbb{E}[f^{2}]+\sigma_{Y}^{2})+\mathbb{E}_{ \mathcal{E}^{*}}[(\frac{1}{\frac{r}{\sigma}+\frac{\sigma}{r(\mathbb{E}[f^{2}] +\sigma_{Y}^{2})}})^{2}] \tag{17}\] \[<(\mathbb{E}[f^{2}]+\sigma_{Y}^{2})(\frac{1}{(L+1)^{2}}+\frac{1}{ L+2+\frac{1}{L}}).\] _Proofs can be found at Appendix C._ Intuitively, the data generation process in Theorem 9 introduces the _biased_ anti-causal relationship between the spurious feature \(V\) and the target \(Y\), which varies across different sub-populations (i.e. \(r(\mathcal{E}^{*})\) and \(\sigma(\mathcal{E}^{*})\) varies) and brings about data heterogeneity. 
Here, similarly to Theorem 8, \(\mathbb{E}[f(S)S]=0\) indicates model misspecification and the constant \(L\) characterizes the strength of the biased anti-causal relationship between \(V\) and \(Y\), where a larger \(L\) means more information that \(V\) could provide for predicting \(Y\) when \(\mathcal{E}^{*}\) is missing. From the approximation in Equation 16, we can see that our proposed predictive heterogeneity is dominated by two terms: (1) \(\text{Var}[r(\mathcal{E}^{*})]/\mathbb{E}[r^{2}(\mathcal{E}^{*})]\) characterizes the variance of \(r(\mathcal{E}^{*})\) among sub-populations; (2) \(\mathbb{E}[f^{2}(S)]+\sigma_{Y}^{2}\) reflects the maximal additional information that could be provided by \(V\). In the broader context, Theorems 7, 8, and 9 suggest that our proposed predictive heterogeneity measure is equipped with remarkable properties, namely its insensitivity to homogeneous cases and its ability to account for the latent heterogeneity arising from typical sources of data heterogeneity. These findings highlight the effectiveness of our measure in accurately characterizing predictive heterogeneity in various machine learning tasks. ### PAC Guarantees for Predictive Heterogeneity Estimation Defined under explicit computational constraints, our Predictive Heterogeneity could be empirically estimated with guarantees if the complexity of the model family \(\mathcal{V}\) is bounded. In this work, we provide finite sample generalization bounds with the Rademacher complexity. First, we describe the definition of the empirical predictive heterogeneity, the explicit formula for which could be found in Definition 10. The dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{|\mathcal{D}|}\) is independently and identically drawn from the population \(X,Y\). Given a function family \(\mathcal{V}\) and an environment set \(\mathscr{E}_{K}\) such that for \(\mathcal{E}\in\mathscr{E}_{K}\), \(\text{supp}(\mathcal{E})=\{(e_{k})_{k=1}^{K}\}\), let \(\mathcal{Q}\) be the set of all probability distributions of \(X\), \(Y\), \(\mathcal{E}\) where \(\mathcal{E}\in\mathscr{E}_{K}\). The empirical predictive heterogeneity \(\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y;\mathcal{D})\) is given by: \[\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y; \mathcal{D}) =\sup_{\mathcal{E}\in\mathscr{E}_{K}}\hat{\mathbb{I}}_{\mathcal{V }}(X\to Y|\mathcal{E};\mathcal{D})-\hat{\mathbb{I}}_{\mathcal{V}}(X\to Y; \mathcal{D}) \tag{18}\] \[=\sup_{\hat{Q}\in\mathcal{Q}}\sum_{k=1}^{K}\Big{[}\hat{Q}( \mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|\mathcal{E}=e_{k};\mathcal{D})-\hat {Q}(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k};\mathcal{D}) \Big{]}\] (19) \[\qquad-[\hat{H}_{\mathcal{V}}(Y;\mathcal{D})-\hat{H}_{\mathcal{V }}(Y|X;\mathcal{D})]. 
\tag{20}\] Specifically, \[\hat{Q}(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|X,\mathcal{E}=e _{k};\mathcal{D}) \tag{21}\] \[=\inf_{f\in\mathcal{V}}\hat{Q}(\mathcal{E}=e_{k})\sum_{x_{i},y_{i }\in\mathcal{D}}-\log f[x_{i}](y_{i})\frac{\hat{Q}(x_{i},y_{i}|\mathcal{E}=e_{ k})}{\sum_{x_{j},y_{j}\in\mathcal{D}}\hat{Q}(x_{j},y_{j}|\mathcal{E}=e_{k})}\] (22) \[=\inf_{f\in\mathcal{V}}\hat{Q}(\mathcal{E}=e_{k})\sum_{x_{i},y_{i }\in\mathcal{D}}-\log f[x_{i}](y_{i})\frac{\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{ i})\hat{Q}(x_{i},y_{i})}{\sum_{x_{j},y_{j}\in\mathcal{D}}\hat{Q}(\mathcal{E}=e_{k}|x_{j},y _{j})\hat{Q}(x_{j},y_{j})}\] (23) \[=\inf_{f\in\mathcal{V}}\hat{Q}(\mathcal{E}=e_{k})\sum_{x_{i},y_{i }\in\mathcal{D}}-\log f[x_{i}](y_{i})\frac{\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{ i})\hat{Q}(x_{i},y_{i})}{\hat{Q}(\mathcal{E}=e_{k})}\] (24) \[=\inf_{f\in\mathcal{V}}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log f[x_{ i}](y_{i})\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{i})\hat{Q}(x_{i},y_{i})\] (25) \[=\inf_{f\in\mathcal{V}}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in \mathcal{D}}-\log f[x_{i}](y_{i})\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{i}). \tag{26}\] The explicit formula for \(\hat{Q}(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|\mathcal{E}=e_{k};\mathcal{D})\), \(\hat{H}_{\mathcal{V}}(Y|X;\mathcal{D})\) and \(\hat{H}_{\mathcal{V}}(Y;\mathcal{D})\) could be similarly derived. Here we are ready to formally define the empirical predictive heterogeneity. **Definition 10** (Empirical Predictive Heterogeneity): _For the prediction task \(X\to Y\) with \(X\), \(Y\) taking values in \(\mathcal{X}\times\mathcal{Y}\), a dataset \(\mathcal{D}\) is independently and identically drawn from the population such that \(\mathcal{D}=\{(x_{i},y_{i})_{i=1}^{N}\sim X,Y\}\). Given the predictive family \(\mathcal{V}\) and the environment set \(\mathscr{E}_{K}=\{\mathcal{E}|\mathcal{E}\in\mathcal{C},|\text{supp}(\mathcal{ E})|=K\}\) where \(K\in\mathbb{N}\). Without loss of generality, we specify that \(\text{supp}(\mathcal{E})=\{(e_{k})_{k=1}^{K}\}\) where \(e_{k}\) denotes a single environment. Let \(\mathcal{Q}\) be the set of all probability distributions of \(X\),\(Y\),\(\mathcal{E}\) where \(\mathcal{E}\in\mathscr{E}_{K}\). The empirical predictive heterogeneity \(\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y;\mathcal{D})\) with respect to \(\mathcal{D}\) is defined as:_ \[\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y; \mathcal{D}) =\sup_{\hat{Q}\in\mathcal{Q}}\sum_{k=1}^{K}\Big{[}\hat{Q}(\mathcal{E}=e_{k}) \hat{H}_{\mathcal{V}}(Y|\mathcal{E}=e_{k};\mathcal{D})-\hat{Q}(\mathcal{E}=e_{k })\hat{H}_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k};\mathcal{D})\Big{]}\] \[\qquad-[\hat{H}_{\mathcal{V}}(Y;\mathcal{D})-\hat{H}_{\mathcal{V }}(Y|X;\mathcal{D})], \tag{27}\] _where_ \[\hat{Q}(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k}; \mathcal{D}) =\inf_{f\in\mathcal{V}}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in \mathcal{D}}-\log f[x_{i}](y_{i})\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{i}). \tag{28}\] \[\hat{Q}(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V}}(Y|\mathcal{E}=e_{ k};\mathcal{D}) =\inf_{f\in\mathcal{V}}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i} \in\mathcal{D}}-\log f[\emptyset](y_{i})\hat{Q}(\mathcal{E}=e_{k}|x_{i},y_{i}).\] (29) \[\hat{H}_{\mathcal{V}}(Y|X;\mathcal{D}) =\inf_{f\in\mathcal{V}}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i} \in\mathcal{D}}-\log f[x_{i}](y_{i}).\] (30) \[\hat{H}_{\mathcal{V}}(Y;\mathcal{D}) =\inf_{f\in\mathcal{V}}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i} \in\mathcal{D}}-\log f[\emptyset](y_{i}). 
\tag{31}\] Then we give the PAC bound over the empirical usable predictive heterogeneity in Theorem 11. **Theorem 11** (PAC Bound): _Consider the prediction task \(X\to Y\) where \(X\), \(Y\) are random variables taking values in \(\mathcal{X}\times\mathcal{Y}\). Assume that the predictive family \(\mathcal{V}\) satisfies \(\forall x\in\mathcal{X}\), \(\forall y\in\mathcal{Y}\),\(\forall f\in\mathcal{V}\), \(\log f[x](y)\in[-B,B]\) where \(B>0\). For given \(K\in\mathbb{N}\), the environment set is defined as \(\mathscr{E}_{K}=\{\mathcal{E}|\mathcal{E}\in\mathcal{C},|\text{supp}( \mathcal{E})|=K\}\) where \(K\in\mathbb{N}\). Without loss of generality, we specify that \(\text{supp}(\mathcal{E})=\{(e_{k})_{k=1}^{K}\}\) where \(e_{k}\) denotes a single environment. Let \(\mathcal{Q}\) be the set of all probability distributions of \(X\),\(Y\),\(\mathcal{E}\) where \(\mathcal{E}\in\mathscr{E}_{K}\). Take an \(e\in\text{supp}(\mathcal{E})\) and define a function class \(\mathcal{G}_{\mathcal{V}}=\{g|g(x,y)=\log f[x](y)Q(\mathcal{E}=e|x,y),f\in \mathcal{V},Q\in\mathcal{Q}\}\). Denote the Rademacher complexity of \(\mathcal{G}\) with \(N\) samples by \(\mathscr{R}_{N}(\mathcal{G})\). Then for any \(\delta\in(0,1/(2K+2))\), with a probability over \(1-2(K+1)\delta\), for dataset \(\mathcal{D}\) independently and identically drawn from \(X\), \(Y\), we have:_ \[|\mathcal{H}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y)-\hat{\mathcal{H}}_{ \mathcal{V}}^{\mathscr{E}_{K}}(X\to Y;\mathcal{D})|\leq 4(K+1)\mathscr{R}_{| \mathcal{D}|}(\mathcal{G}_{\mathcal{V}})+2(K+1)B\sqrt{2\log\frac{1}{\delta}/| \mathcal{D}|}, \tag{32}\] _where \(\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}})=\mathcal{O}(|\mathcal{ D}|^{-\frac{1}{2}})\)(Bartlett and Mendelson, 2002). Proofs can be found at Appendix D._ ## 4 Algorithm To empirically estimate the predictive heterogeneity in Definition 10, we derive the Information Maximization (IM) algorithm from the formal definition in Equation 27 to infer the distribution of \(\mathcal{E}\) that maximizes the empirical predictive heterogeneity \(\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y;\mathcal{D})\). Objective Function.Given dataset \(\mathcal{D}=\{X_{N},Y_{N}\}=\{(x_{i},y_{i})\}_{i=1}^{N}\), denote \(\text{supp}(\mathcal{E})=\{e_{1},\ldots,e_{K}\}\), we parameterize the distribution of \(\mathcal{E}|(X_{N},Y_{N})\) with weight matrix \(W\in\mathcal{W}_{K}\), where \(K\) is the pre-defined number of environments and \(\mathcal{W}_{K}=\{W:W\in\mathbb{R}_{+}^{N\times K}\text{ and }W\mathbf{1}_{K}=\mathbf{1}_{N}\}\) is the allowed weight space. Each element \(w_{ij}\) in \(W\) represents \(P(\mathcal{E}=e_{j}|x_{i},y_{i})\) (the probability of the \(i\)-th data point belonging to the \(j\)-th sub-population). 
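Before stating that objective, it may help to see how a single candidate assignment \(W\) is scored under Definition 10. The sketch below evaluates Equation 27 for a linear-Gaussian predictive family with fixed variance, in which case the constant terms of \(-\log f[x](y)\) cancel and every infimum reduces to a (weighted) least-squares problem; the family, the toy data and all names are illustrative assumptions, and the IM algorithm introduced next optimizes over \(W\) instead of scoring one fixed choice.

```python
import numpy as np

def weighted_lstsq_pred(X, y, w):
    # argmin_theta sum_i w_i (y_i - theta^T [x_i, 1])^2, returns fitted values
    Xb = np.hstack([X, np.ones((len(y), 1))])
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return Xb @ theta

def empirical_predictive_heterogeneity(X, y, W, sigma=1.0):
    # Equation 27 for one candidate soft assignment W (N x K, rows sum to 1),
    # under the linear-Gaussian family with fixed sigma.
    n = len(y)
    cond = 0.0
    for k in range(W.shape[1]):                       # Equations 28-29
        w = W[:, k]
        mu_k = np.average(y, weights=w)               # infimum for H_V(Y | E = e_k)
        pred_k = weighted_lstsq_pred(X, y, w)         # infimum for H_V(Y | X, E = e_k)
        cond += (w * ((y - mu_k) ** 2 - (y - pred_k) ** 2)).sum() / (2 * sigma ** 2 * n)
    pred = weighted_lstsq_pred(X, y, np.ones(n))      # Equations 30-31
    marg = ((y - y.mean()) ** 2 - (y - pred) ** 2).sum() / (2 * sigma ** 2 * n)
    return cond - marg

# Score the ground-truth hard split of a two-mechanism toy dataset.
rng = np.random.default_rng(0)
e = rng.integers(0, 2, size=4000)
x = rng.normal(size=(4000, 1))
y = np.where(e == 0, 1.5, -1.5) * x[:, 0] + 0.2 * rng.normal(size=4000)
W = np.stack([(e == 0).astype(float), (e == 1).astype(float)], axis=1)
print(empirical_predictive_heterogeneity(x, y, W))    # large for the true split
```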
For a predictive family \(\mathcal{V}\), the solution to the supremum problem in the Definition 10 is equivalent to the following objective function: \[\begin{split}\min_{W\in\mathcal{W}_{K}}&\mathcal{R} _{\mathcal{V}}(W,\theta_{1}^{*}(W),\ldots,\theta_{K}^{*}(W))=\left\{\frac{1}{N} \sum_{i=1}^{N}\sum_{j=1}^{K}w_{ij}\ell_{\mathcal{V}}(f_{\theta_{j}^{*}}(x_{i}), y_{i})+U_{\mathcal{V}}(W,Y_{N})\right\},\\ \text{s.t.}&\theta_{j}^{*}(W)\in\arg\min_{\theta} \left\{\mathcal{L}_{\mathcal{V}}^{j}(W,\theta)=\sum_{i=1}^{N}w_{ij}\ell_{ \mathcal{V}}(f_{\theta}(x_{i}),y_{i})\right\},\quad\text{for }j=1,\ldots,K,\end{split} \tag{33}\] where \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) denotes a predicting function parameterized by \(\theta\), \(\ell_{\mathcal{V}}(\cdot,\cdot):\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) represents a loss function and \(U_{\mathcal{V}}(W,Y_{N})\) is a regularizer. Specifically, \(f_{\theta}\), \(\ell_{\mathcal{V}}\) and \(U_{\mathcal{V}}\) are determined by the predictive family \(\mathcal{V}\). Here we provide implementations for two typical and general machine learning tasks, regression and classification. ### Regression For the _regression task_, the predictive family is typically modeled as: \[\mathcal{V}_{1}=\{g:g[x]=\mathcal{N}(f_{\theta}(x),\sigma^{2}),f\text{ is the predicting function and }\theta\text{ is learnable, }\sigma\text{ is a constant}\}. \tag{34}\] The corresponding loss function is \(\ell_{\mathcal{V}_{1}}(f_{\theta}(X),Y)=(f_{\theta}(X)-Y)^{2}\), and \(U_{\mathcal{V}_{1}}(W,Y_{N})\) becomes \[U_{\mathcal{V}_{1}}(W,Y_{N})=\text{Var}_{j\in[K]}(\overline{Y_{N}^{j}})=\sum_ {j=1}^{K}\left(\sum_{i=1}^{N}w_{ij}y_{i}\right)^{2}\frac{1}{N\sum_{i=1}^{N}w_{ ij}}-\left(\frac{1}{N}\sum_{i=1}^{N}y_{i}\right)^{2} \tag{35}\] where \(\overline{Y_{N}^{j}}\) denotes the mean value of the label \(Y\) given \(\mathcal{E}=e_{j}\) and \(U(W,Y_{N})\) calculates the variance of \(\overline{Y_{N}^{j}}\) among sub-populations \(e_{1}\sim e_{K}\). ### Classification For the _classification task_, the predictive family is typically modeled as: \[\mathcal{V}_{2}=\{g:g[x]=f_{\theta}(x)\in\Delta_{c},f\text{ is the classification model and }\theta\text{ is learnable}\}, \tag{36}\] where \(c\) is the class number and \(\Delta_{c}\) denotes the \(c\)-dimensional simplex. Here each model in the predictive family \(\mathcal{V}_{2}\) outputs a discrete distribution in the form of a \(c\)-dimensional simplex. In this case, the corresponding loss function \(\ell_{\mathcal{V}_{2}}(\cdot,\cdot)\) is the cross entropy loss and the regularizer becomes \(U_{\mathcal{V}_{2}}(W,Y_{N})=-\sum_{j=1}^{K}\frac{1}{N}(\sum_{i=1}^{N}w_{ij})H (Y_{N}^{j})\), where \(H(Y_{N}^{j})\) is the entropy of \(Y\) given \(\mathcal{E}=e_{j}\). ### Optimization. The bi-level optimization in Equation 33 can be solved by performing projected gradient descent w.r.t. \(W\). The gradient of \(W\) can be calculated by: (we omit the subscript \(\mathcal{V}\) here) \[\nabla_{W}\mathcal{R} =\nabla_{W}U+\left[\ell(f_{\theta_{j}}(x_{i}),y_{i})\right]_{i,j }^{N\times K}+\sum_{j=1}^{K}\framebox{$\nabla_{\theta_{j}}\mathcal{R}|_{ \theta_{j}^{*}}\nabla_{W}\theta_{j}^{*}$} \tag{37}\] \[\text{where }\framebox{$\nabla_{\theta_{j}}\mathcal{R}|_{\theta_{j}^{*}} \nabla_{W}\theta_{j}^{*}$}\] (38) \[\approx\nabla_{\theta_{j}}\mathcal{R}|_{\theta_{j}^{*}}\frac{ \partial^{2}\mathcal{L}^{j}}{\partial\theta_{j}\partial W^{\text{T}}}\bigg{|} _{\theta_{j}^{*-1}}\qquad\text{, for }j=1,\ldots,K. 
\tag{39}\] where \([\ell(f_{\theta_{j}}(x_{i}),y_{i})]_{i,j}^{N\times K}\) is an \(N\times K\) matrix in which the \((i,j)\)-th element is \(\ell(f_{\theta_{j}}(x_{i}),y_{i})\). Here Equation 38 approximates \(\theta_{j}^{*}\) by \(\theta_{j}^{t}\) from \(t\) steps of inner loop gradient descent, and Equation 39 takes \(t=1\) and performs _1-step truncated backpropagation_ (Shaban et al., 2019; Zhou et al., 2022). Our information maximization algorithm updates \(W\) by projected gradient descent as: \[W\leftarrow\text{Proj}_{\mathcal{W}_{K}}\left(W-\eta\nabla_{W}\mathcal{R}\right), \quad\eta\text{ is the learning rate of }W. \tag{40}\] Then we prove that minimizing Equation 33 exactly finds the supremum w.r.t. \(\mathcal{E}\) in the formal Definition 10 of the empirical predictive heterogeneity. **Theorem 12** (Justification of the IM Algorithm): _For the regression task with predictive family \(\mathcal{V}_{1}\) and the classification task with \(\mathcal{V}_{2}\), the optimization of Equation 33 is equivalent to the supremum problem of the empirical predictive heterogeneity \(\hat{\mathcal{H}}_{\mathcal{V}_{1}}^{\mathscr{E}_{K}}(X\to Y; \mathcal{D})\), \(\hat{\mathcal{H}}_{\mathcal{V}_{2}}^{\mathscr{E}_{K}}(X\to Y; \mathcal{D})\) respectively in Equation 27 with the pre-defined environment number \(K\) (i.e. \(|\text{supp}(\mathcal{E})|=K\)). Proofs can be found in Appendix E._ **Remark 13** (Difference from Expectation Maximization): _The expectation maximization (EM) algorithm infers the latent variables of a statistical model so as to achieve the **maximum likelihood**. Our proposed information maximization (IM) algorithm infers the latent variables \(W\) that bring the **maximal predictive heterogeneity**, associated with the maximal information. Due to the regularizer \(U_{\mathcal{V}}\) in our objective function, the EM algorithm cannot efficiently solve our problem, and therefore we adopt bi-level optimization techniques._ ### Approximation Accuracy Here we provide some additional numerical results for our linear examples in Section 3.3. In the left sub-figure of Figure 1, we plot the estimated predictive heterogeneity under the setting of Theorem 7, where the analytical solution is equal to \(0\). From the results, we can see that as the sample size grows, the estimated value of our IM algorithm approaches \(0\) (note that the \(y\)-axis is \(\ln(\text{estimated value})\)). In the middle sub-figure, for the setting in Theorem 8, we plot the theoretical approximation, the empirical approximation (finite sample case) and the estimated value of the predictive heterogeneity under different ratios between the majority and the minority (which controls the \(\text{Var}[r(\mathcal{E}^{*})]\) in Equation 13). And the right sub-figure plots the same values under the setting in Theorem 9.
Figure 1: Numerical results of the toy examples in Section 3.3. The left sub-figure plots the estimated predictive heterogeneity under the setting of Theorem 7, the middle sub-figure plots the theoretical approximation, empirical approximation and our results under the setting of Theorem 8, and the right one is under the setting of Theorem 9. 
From these two figures, we can see that (1) the empirical approximation under finite samples lies close to the theoretical approximation, which is supported by our generalization bounds in Theorem 11; (2) the estimated value of our IM algorithm is close to the theoretical approximation, which demonstrates the accuracy of our approximation algorithm in Equations 38 and 39. Also, as the ratio changes from \(4:1\) to \(1:1\), the data heterogeneity is increasing, and our predictive heterogeneity is also increasing, which is controlled by the term \(\text{Var}(r(\mathcal{E}^{*}))\) in Equations 13 and 16. ## 5 Experiments ### Reveal Explainable Sub-population Structures The predictive heterogeneity could provide valuable insights for the sub-population division and support decision-making across various fields, including agricultural and sociological research, as well as object recognition. Our illustrative examples below reveal that the learned sub-population divisions are highly explainable and relevant for decision-making purposes. **Example: Agriculture** It is known that the climate affects crop yields and crop suitability (Lobell et al., 2008). We utilize the data from the NOAA database, which contains daily weather records from weather stations around the world. Following Zhao et al. (2021), we extracted summary statistics from the weather sequence of the year 2018, including the average yearly temperature, humidity, wind speed and rainy days. The task is to predict the _crop yield_ in each place with the _weather summary statistics_ and _location covariates (i.e. longitude and latitude)_ of the place. For easy illustration, we focus on the places with crop types of wheat or rice. Notably, our input covariates do _not_ contain the crop type information. We use MLP models in this task and set \(K=2\) for our IM algorithm. Given that crop yield prediction mechanisms are closely related to crop type, which is unknown in the prediction task, we believe this causes data heterogeneity in the entire dataset, and the recognized predictive heterogeneity should relate to it. To demonstrate the rationality of our measure, we plot the real distribution map of wheat and rice planting areas in Figure 2(a) and the two learned sub-populations of our IM algorithm in Figure 2(b). The division given by our algorithm is quite similar to the real division of the two crops, indicating the rationality of our measure. We observe some discrepancies in areas such as the Tibet Plateau in Asia, which we attribute to the absence of significant features such as population density and altitude that significantly affect crop yields.
Figure 2: Results on the crop yield data. We color each region according to its main crop type, and the shade represents the proportion of the main crop type after smoothing via \(k\)-means (\(k=3\)).
**Example: Sociology** We use the UCI Adult dataset (Kohavi and Becker, 1996), which is widely used in the study of algorithmic fairness and derived from the 1994 Current Population Survey conducted by the US Census Bureau. The task is to predict whether the income of a person is greater or less than 50k US dollars based on personal features. We use linear models in this task and set \(K=2\). In this example, we aim to investigate whether _sub-population structures_ within data affect the learning of machine learning models. In Figure 3 (a), we plot summary statistics for the two sub-populations, revealing a key difference in capital gain. 
In Figure 3 (b), we present the feature importance given by linear models for the two sub-populations, and find that for individuals with high capital gain, the prediction model mainly relies on capital gain, which is fair. However, for individuals with low capital gain, models also consider sensitive attributes such as sex and marital status, which have been known to cause discrimination. Our results are consistent with those found in (Zhao et al., 2021) and can help identify potential inequalities in decision-making. For example, our findings suggest potential discrimination towards individuals with low capital gain, which could motivate algorithmic design and improve policy fairness. **Example: Object Recognition** Finally, we utilize the Waterbird dataset (Sagawa et al., 2019), which is widely used as a benchmark in the field of robust learning, to investigate the impact of spurious correlations on machine learning models. The task is to recognize waterbirds or landbirds, but the images contain _spurious correlations_ between the background and the target label. For the majority of images, waterbirds are located on water and landbirds on land, whereas for a minority of images, this correlation is reversed. Therefore, the spurious correlation leads to predictive heterogeneity in this dataset, which could significantly affect the performance of machine learning models. In this example, we use the ResNet18 and set \(K=2\) in our IM algorithm. Our method successfully captures the spurious correlation and identifies two sub-populations of images with inverse correlations between the object and the background. To demonstrate the effectiveness of our method, we randomly sample 50 images for each class and each learned sub-population and plot them in Figure 4. In sub-population 1, the majority of landbirds are on the ground and waterbirds are in the water, while in sub-population 2, the majority of landbirds are in the water and waterbirds are on the ground. Our findings suggest that the proposed approach can be leveraged by robust learning methods (Sagawa et al., 2019; Koyama and Yamaguchi, 2020) to improve the generalization ability of machine learning models. By eliminating the influence of spurious correlations, our method could significantly enhance the robustness and reliability of machine learning models. Overall, our study highlights the importance of addressing predictive heterogeneity in image classification tasks and provides a practical solution for achieving robust learning performance. ### Assist Scientific Discovery: Uncover Factors Related to Mortality In order to fully demonstrate the efficacy of our predictive heterogeneity, we focus on the application of healthcare, utilizing the COVID-19 dataset of Brazilian patients. This dataset comprises 6882 COVID-positive patients from Brazil, whose data was recorded between February 27th and May 4th, 2020. The dataset includes a wide range of risk factors, including comorbidities, symptoms, and demographic characteristics. The binary label corresponds to mortality caused by COVID-19. Our aim is to validate the sub-populations learned through our methodology on this dataset, by thoroughly _explaining each group_ and showcasing how our predictive heterogeneity can be employed to _uncover features related to mortality that are otherwise difficult to detect_. #### 5.2.1 Learned Sub-populations. 
When predicting mortality based on risk factors, it is important to consider that patients with various underlying diseases and demographic characteristics, such as age and sex, may exhibit different probabilities of mortality. Furthermore, it is plausible that the mortality of different individuals can be attributed to distinct factors. In light of these considerations, the predictive heterogeneity for this dataset is caused by the diversity of mechanisms that contribute to mortality among various sub-populations. In this experiment, we use linear models and the loss function is binary cross-entropy loss. We select the sub-population number \(K\in\{2,3,4,5,6\}\) that exhibit the maximal empirical predictive heterogeneity\(\hat{\mathcal{H}}_{\mathcal{V}}^{\mathscr{C}_{K}}(X\to Y;\mathcal{D})\), which results in three distinct subgroups (the optimal \(K=3\)). Besides, we empirically observe that when \(K>3\), the learned sub-populations will shrink to 3 sub-populations. In Figure 5 and 6, to conduct a more thorough examination of the learned subgroups, we analyze the age distribution of each group, as well as the average value of their corresponding risk factors. Our analysis reveals several noteworthy findings: 1. We observe a distinct difference in the age distribution of the learned subgroups. Specifically, Group 0 is primarily composed of individuals over the age of 70, while Group 1 consists of individuals around 60 years old. Group 2, on the other hand, is comprised of middle-aged individuals spanning multiple age groups. 2. The average values of the risk factors reveal notable differences among the various subgroups, indicative of distinct causes of mortality. More specifically, Group 0 exhibits a considerably higher prevalence of underlying diseases, such as renal, neurologic, liver, and immunosuppression, when compared to the other groups. In contrast, Group 1 shows a substantially lower level of underlying diseases in comparison. Interestingly, Group 2 does not exhibit any underlying diseases, yet has a markedly higher level of diarrhea and vomiting. These findings suggest that the learned subgroups may be used to identify specific risk factors associated with mortality, which can inform targeted interventions for individuals with distinct risk profiles. Having identified distinct patterns among the subgroups, we seek to identify the specific risk factors associated with mortality. To further validate our findings, we incorporate the expertise of domain experts. By leveraging their insights, we are able to confirm the reliability of the identified risk factors and the importance of our subgroup analysis. #### 5.2.2 Scientific Findings Based on the learned group, we fit a logistic regression model on each group and pick the top-6 features with the largest coefficients, which are shown in Table 1. Firstly, our analysis reveals that in Group 0 and 1, the top features associated with mortality are primarily SPO2 and underlying diseases, which align with the common risk factors of older individuals. In contrast, Group 2 exhibits a distinct set of top features, including symptoms of COVID-19 such as fever, cough, and vomiting. Notably, Group 2 is composed of middle-aged individuals spanning multiple age groups. Our findings suggest that severe COVID-19 symptoms can lead to mortality regardless of age. 
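The per-group analysis described above, fitting a linear model on each learned sub-population and ranking features by coefficient magnitude, can be sketched as follows. The use of scikit-learn, the standardization step and all variable names are illustrative assumptions; in practice the group labels would come from the IM algorithm's learned assignment matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def top_features_per_group(X, y, group, feature_names, top_k=6):
    # Fit a logistic regression per learned sub-population and rank features
    # by absolute coefficient on standardized inputs.
    results = {}
    for g in np.unique(group):
        idx = group == g
        Xg = StandardScaler().fit_transform(X[idx])
        clf = LogisticRegression(max_iter=1000).fit(Xg, y[idx])
        order = np.argsort(-np.abs(clf.coef_[0]))[:top_k]
        results[g] = [feature_names[i] for i in order]
    return results

# Hypothetical usage: group = W.argmax(axis=1) for the learned assignment matrix W.
# print(top_features_per_group(X, y, group, feature_names))
```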
Secondly, to further our analysis, we fit a model for the entire dataset and observe that the top features remain SPO2 and underlying diseases, consistent with the top features found for older individuals. However, this may not be beneficial or could even lead to harm for interventions targeted towards younger or middle-aged individuals who generally do not have severe underlying diseases. For instance, doctors may tend to treat younger patients with severe COVID-19 symptoms optimistically and underestimate their mortality risk because they typically do not have underlying diseases. Thus, exploring and leveraging the predictive heterogeneity within the data can lead to more reliable scientific discoveries while avoiding potential harm caused by latent heterogeneity. Thirdly, our analysis reveals two important features in Group 2, namely vomiting and diarrhea, which are rarely considered in traditional analysis. We have reviewed relevant literature on COVID-19 and discovered that various studies have recognized these two symptoms as important indicators of higher risk of mortality caused by COVID-19. Zhong et al. (2020) highlighted the potential mechanisms of gastrointestinal and hepatic injuries in COVID-19 to raise awareness of digestive system injury in COVID-19. Liu et al. (2021b) analyzed 29,393 laboratory-confirmed COVID-19 patients diagnosed before March 21, 2020, in cities outside of Wuhan in mainland China and found that patients with both GI symptoms and fever and patients with fever alone had a significantly higher risk of death, where GI symptoms refer to one of the following symptoms: nausea, vomiting, diarrhea, or abdominal pain. Zeng et al. (2021) also found that gastrointestinal symptoms are associated with the severity of COVID-19, and the severe rate was more than 40% in COVID-19 patients with gastrointestinal symptoms. Ghimire et al. (2021) demonstrated that the presence of diarrhea as a presenting symptom is associated with increased disease severity and likely worse prognosis. Chan et al. (2022) have called for the consideration of COVID-19 in the differential diagnosis for patients who present with abdominal pain and gastrointestinal symptoms typical of gastroenteritis or surgical abdomen, even if they lack respiratory symptoms of COVID-19. These studies validate the reliability of our findings and demonstrate that studies utilizing the proposed predictive heterogeneity can uncover unusual risk factors that do not appear in analysis of the overall dataset. This example serves as an illustration of the potential benefits that our predictive heterogeneity can offer to a wide range of scientific fields. By exploiting the heterogeneity within a dataset, our approach can reveal novel patterns and relationships that may be overlooked in traditional analyses, leading to more reliable and comprehensive scientific discoveries ### Benefit Generalization In this section, we aim to evaluate the efficacy of our IM algorithm in enhancing the out-of-distribution (OOD) generalization performance of machine learning models. To this end, we conduct experiments on both simulated data and real-world colored MNIST data. Our results suggest that the learned sub-population structures by our IM algorithm could significantly benefit the OOD generalization of machine learning models. **Baselines** First, we compare with _empirical risk minimization_ (ERM) and _environment inference for invariant learning_ (EIIL, (Creager et al., 2021)) which infers the environments for learning invariance. 
Then we compare with the well-known _KMeans_ algorithm, which is the most popular clustering algorithm. For our IM algorithm and KMeans, we adopt three backbone algorithms to leverage the learned sub-populations, including sub-population balancing and invariant learning methods. The sub-population balancing simply weighs the learned sub-populations equally. _Invariant risk minimization_ (IRM, (Arjovsky et al., 2019)) and _inter-environment gradient alignment_ (IGA, (Koyama and Yamaguchi, 2020)) are typical methods in OOD generalization, which take the sub-populations as input environments to learn invariant models.
\begin{table} \begin{tabular}{c|l l l l l l} \hline Group ID & \multicolumn{6}{c}{Top Features} \\ \hline 0 & SPO2 & Diabetes & Renal & Neurologic & Pulmonary & Cardiovascular \\ \hline 1 & Diabetes & SPO2 & Neurologic & Cardiovascular & Pulmonary & Renal \\ \hline 2 & **Fever** & **Cough** & Renal & **Vomiting** & **Shortness of breath** & **Diarrhea** \\ \hline All & SPO2 & Renal & Neurologic & Diabetes & Pulmonary & Cardiovascular \\ \hline \end{tabular} \end{table} Table 1: Top features of each learned subgroup and overall data on the COVID-19 dataset.
#### 5.3.1 Simulation Data of Sample Selection Bias The input features \(X=[S,T,V]^{T}\in\mathbb{R}^{10}\) consist of stable features \(S\in\mathbb{R}^{5}\), noisy features \(T\in\mathbb{R}^{4}\) and the spurious feature \(V\in\mathbb{R}\): \[S\sim\mathcal{N}(0,2\mathbf{I}_{5}),T\sim\mathcal{N}(0,2\mathbf{I}_{4}),Y= \theta_{S}^{T}S+h(S)+\mathcal{N}(0,0.5),V\sim\text{Laplace}(\text{sign}(r)\cdot Y,1/(5\ln|r|)) \tag{41}\] where \(\theta_{S}\in\mathbb{R}^{5}\) is the coefficient and \(h(S)=S_{1}S_{2}S_{3}\) is the nonlinear term. \(|r|>1\) is a factor for each sub-population, and here the data heterogeneity is brought by the _endogeneity with hidden variable_ (Fan et al., 2014). \(V\) is the _spurious feature_ whose relationship with \(Y\) is unstable across sub-populations and is controlled by the factor \(r\). Intuitively, \(\text{sign}(r)\) controls whether the spurious correlation between \(V\) and \(Y\) is positive or negative, and \(|r|\) controls the strength of the spurious correlation, i.e. a larger \(|r|\) means a stronger spurious correlation. In _training_, we generate 10000 points, where the major group contains 80% of the data with \(r=1.9\) (i.e. strong _positive_ spurious correlation) and the minor group contains 20% of the data with \(r=-1.9\) (i.e. strong _negative_ spurious correlation). In _testing_, we test the performances of the two groups respectively, and we also set \(r=-2.3\) and \(r=-2.7\) to simulate stronger distributional shifts. We use linear regression and set \(K=2\) for all methods, and we report the mean-square errors (MSE) of all methods. The results over 10 runs are shown in Table 2. From the results in Table 2, for both the simulated and colored MNIST data, the two backbones with our IM algorithm achieve _the best OOD generalization performances_. Also, for the simulated data, the learned predictive heterogeneity enables backbone algorithms to equally treat the majority and minority inside data (i.e. a low performance gap between 'Major' and 'Minor'), and significantly benefits the OOD generalization. Further, we plot the learned sub-populations of our IM algorithm in Figure 7. From Figure 7, compared with KMeans and EIIL, our predictive heterogeneity exploits the spurious correlation between \(V\) and \(Y\), and enables the backbone algorithms to eliminate it.
\begin{table} \begin{tabular}{|c c|c c|c c|c c|} \hline \multirow{3}{*}{Method} & \multicolumn{4}{c||}{**1. Simulated Data**} & \multicolumn{2}{c|}{**2. Colored MNIST**} \\ & \multicolumn{2}{c|}{**Training Sub-population Error**} & \multicolumn{2}{c|}{**Test Error**} & \multicolumn{2}{c|}{**Train Accuracy**} & \multicolumn{2}{c|}{**Test Accuracy**} \\ & Major (\(r=1.9\)) & Minor (\(r=-1.9\)) & \(r=-2.3\) & \(r=-2.7\) & \multicolumn{2}{c|}{} \\ \hline ERM & 0.255(\(\pm\)0.023) & 0.740(\(\pm\)0.022) & 0.738(\(\pm\)0.035) & 0.737(\(\pm\)0.023) & 0.998(\(\pm\)0.001) & 0.406(\(\pm\)0.019) \\ EIIL & **0.164**(\(\pm\)0.014) & 1.428(\(\pm\)0.035) & 1.431(\(\pm\)0.061) & 1.431(\(\pm\)0.046) & 0.812(\(\pm\)0.006) & 0.610(\(\pm\)0.016) \\ \hline \multirow{3}{*}{KMeans} & Balance & 0.231(\(\pm\)0.022) & 0.847(\(\pm\)0.024) & 0.846(\(\pm\)0.039) & 0.845(\(\pm\)0.026) & **0.999**(\(\pm\)0.001) & 0.328(\(\pm\)0.021) \\ & IRM & 0.231(\(\pm\)0.022) & 0.845(\(\pm\)0.024) & 0.844(\(\pm\)0.039) & 0.831(\(\pm\)0.026) & 0.947(\(\pm\)0.004) & 0.259(\(\pm\)0.021) \\ & IGA & 0.235(\(\pm\)0.022) & 0.840(\(\pm\)0.022) & 0.839(\(\pm\)0.038) & 0.838(\(\pm\)0.027) & 0.997(\(\pm\)0.001) & 0.302(\(\pm\)0.021) \\ \hline \multirow{3}{*}{Ours} & Balance & 0.403(\(\pm\)0.041) & **0.423**(\(\pm\)0.016) & **0.416**(\(\pm\)0.022) & **0.416**(\(\pm\)0.014) & 0.749(\(\pm\)0.012) & **0.692**(\(\pm\)0.039) \\ & IRM & 0.391(\(\pm\)0.039) & **0.432**(\(\pm\)0.016) & **0.430**(\(\pm\)0.022) & **0.430**(\(\pm\)0.014) & 0.759(\(\pm\)0.014) & **0.727**(\(\pm\)0.047) \\ \cline{1-1} & IGA & 0.449(\(\pm\)0.007) & **0.426**(\(\pm\)0.017) & **0.417**(\(\pm\)0.022) & **0.417**(\(\pm\)0.014) & 0.759(\(\pm\)0.012) & **0.713**(\(\pm\)0.034) \\ \hline \end{tabular} \end{table} Table 2: Results of the experiments on out-of-distribution generalization, including the simulated data and colored MNIST data.
Figure 7: Sub-population division on the simulated data of three methods, where two colors denote two sub-populations.
#### 5.3.2 Simulation Data of Hidden Variables Also, we add one more experiment to show that (1) when the chosen \(K\) is smaller than the ground-truth, the performances of our methods will drop but are still better than ERM, and (2) when the chosen \(K\) is larger, the performances are not affected much. 
The input features \(X=[S,T,V]\in\mathbb{R}^{10}\) consist of stable features \(S\in\mathbb{R}^{5}\), noisy features \(T\in\mathbb{R}^{4}\) and the spurious feature \(V\in\mathbb{R}\): \[S\sim\mathcal{N}(2,2\mathbb{I}_{5}),\quad T\sim\mathcal{N}(0,2\mathbb{I}_{4}),\quad Y=\theta_{S}^{T}S+S_{1}S_{2}S_{3}+\mathcal{N}(0,0.5),\] and we generate the spurious feature via: \[V=\theta_{V}^{e}Y+\mathcal{N}(0,0.3),\] where \(\theta_{V}^{e}\) varies across sub-populations and is dependent on which sub-population the data point belongs to. In training, we sample 8000 data points from \(e_{1}\) with \(\theta_{V}^{1}=3.0\), 1000 points from \(e_{2}\) with \(\theta_{V}^{2}=-1.0\), 1000 points from \(e_{3}\) with \(\theta_{V}^{3}=-2.0\) and 1000 points from \(e_{4}\) with \(\theta_{V}^{4}=-3.0\). Therefore, the ground-truth number of sub-populations is 4. In testing, we test the performances on \(e_{4}\) with \(\theta_{V}^{4}=-3.0\), which has strong distributional shifts from training data. The average MSE over 10 runs are shown in Figure 8. From the results, we can see that when \(K\) is smaller than the ground-truth, increasing \(K\) benefits the OOD generalization performance, and when \(K\) is larger, the performances are not affected much. For our IM algorithm, we think there are mainly two ways to choose \(K\): * According to the predictive heterogeneity index: When the chosen \(K\) is smaller than the ground-truth, our measure tends to increase quickly when increasing \(K\); and when \(K\) is larger than the ground-truth, the increasing speed will slow down, which could direct people to choose an appropriate \(K\). * According to the prediction model: Since our IM algorithm aims to learn sub-populations with different prediction mechanisms, one could compare the learned model parameters \(\theta_{1},\ldots,\theta_{K}\) to judge whether \(K\) is much larger than the ground-truth, i.e., if two resultant models are quite similar, \(K\) may be too large (divide one sub-population into two). For linear models, one can directly compare the coefficients. For deep models, we think one can calculate the transfer losses across sub-populations. #### 5.3.3 Colored MNIST Following Arjovsky et al. (2019), we design a binary classification task constructed on the MNIST dataset. Firstly, digits \(0\sim 4\) are labeled \(Y=0\) and digits \(5\sim 9\) are labeled \(Y=1\). Secondly, noisy labels \(\tilde{Y}\) are induced by randomly flipping the label \(Y\) with a probability of \(0.2\). Then we sample the colored id \(V\) spurious correlated with \(\tilde{Y}\) as \[V=\left\{\begin{array}{ll}+\tilde{Y},&\mbox{with probability $r$,}\\ -\tilde{Y},&\mbox{with probability $1-r$.}\end{array}\right. \tag{42}\] In fact, \(r\) controls the spurious correlation between \(\tilde{Y}\) and \(V\). In _training_, we randomly sample 10000 data points and set \(r=0.85\), meaning that for \(85\%\) of the data, \(V\) is positively correlated with \(\tilde{Y}\) and for the rest \(15\%\), the spurious correlation becomes negative, which causes data heterogeneity w.r.t. \(V\) and \(\tilde{Y}\). In _testing_, we set \(r=0\) (_strong negative spurious correlation_), bringing strong shifts between training and testing. From the results in Table 2, for both the simulated and colored MNIST data, the two backbones with our IM algorithm achieve _the best OOD generalization performances_. We plot the learned sub-populations of our IM algorithm in Figure 9. 
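For concreteness, the label-flipping and color-assignment steps described above (Equation 42) can be sketched as follows. This is a minimal illustration rather than the authors' released code: the digit array is a synthetic stand-in for the real MNIST labels, and \(-\tilde{Y}\) is encoded here as the opposite binary color id.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_colored_labels(digit_labels, r, flip_prob=0.2):
    """Return (noisy label, color id) pairs for the colored-MNIST construction."""
    # Digits 0-4 are labeled Y=0, digits 5-9 are labeled Y=1.
    y = (digit_labels >= 5).astype(int)
    # Noisy labels: flip Y with probability 0.2.
    flip = rng.random(len(y)) < flip_prob
    y_tilde = np.where(flip, 1 - y, y)
    # Color id: equal to the noisy label with probability r, opposite otherwise.
    agree = rng.random(len(y)) < r
    v = np.where(agree, y_tilde, 1 - y_tilde)
    return y_tilde, v

# Training split: r = 0.85, so the color mostly agrees with the noisy label.
digits_train = rng.integers(0, 10, size=10000)  # stand-in for MNIST digit labels
y_train, v_train = make_colored_labels(digits_train, r=0.85)

# Test split: r = 0, reversing the spurious correlation between color and label.
digits_test = rng.integers(0, 10, size=2000)
y_test, v_test = make_colored_labels(digits_test, r=0.0)
```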
From Figure 9, the learned sub-populations of our method also reflect the different directions of the spurious correlation between digit labels \(Y\) and colors (red or green), which helps backbone methods avoid using colors to predict digits.

## 6 Related Work

To the best of our knowledge, data heterogeneity has not converged to a uniform formulation so far, and has different meanings in different fields. Li and Reynolds (1995) define heterogeneity in _ecology_ based on the system property and complexity or variability. Rosenbaum (2005) views the uncertainty of the potential outcome as unit heterogeneity in observational studies in _economics_. For _graph_ data, heterogeneity refers to various types of nodes and edges (Wang et al. (2019)). More recently, in machine learning, several works in _causal learning_ (Peters et al., 2016; Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Creager et al., 2021) and _robust learning_ (Sagawa et al., 2019) leverage heterogeneous data from multiple environments to improve out-of-distribution generalization ability. Specifically, invariant learning methods (Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Creager et al., 2021; Zhou et al., 2022) leverage heterogeneous environments to learn invariant predictors that have uniform performance across environments. In the distributionally robust optimization field, Sagawa et al. (2019); Duchi et al. (2022) propose to optimize the worst-group prediction error to guarantee the OOD generalization performance. However, in machine learning, previous works have not provided a precise definition or sound quantification of data heterogeneity, which makes it confusing and hard to leverage for developing more rational machine learning algorithms.

As for clustering algorithms, most focus only on the covariates \(X\), typified by KMeans and the Gaussian Mixture Model (GMM, (Reynolds, 2009)). However, the clusters learned by KMeans can only reflect heterogeneous structures in \(P(X)\), as shown by our experiments. Notably, our predictive heterogeneity can reflect the heterogeneity in \(P(Y|X)\). Expectation maximization (EM, (Moon, 1996)) can also be used for clustering. However, our IM algorithm differs essentially from EM: IM infers latent variables that maximize the predictive heterogeneity, whereas EM maximizes the likelihood. Also, there are methods (Creager et al., 2021) from the invariant learning field to infer environments. Though such inference can benefit OOD generalization, it lacks a theoretical foundation and only works in some settings.

## 7 Discussion on differences with sub-group discovery

Subgroup discovery (SD, (Helal, 2016)) is aimed at extracting "interesting" relations among different variables (\(X\)) with respect to a target variable \(Y\). The coverage and precision of each discovered group are the focus of such methods. To be specific, it learns a partition of \(P(X)\) such that some target label \(y\) dominates within each group. The most significant gap between subgroup discovery and our predictive heterogeneity lies in the pattern of distributional shift among clusters: for subgroup discovery, \(P(X)\) and \(P(Y)\) vary across subgroups but there is a universal \(P(Y|X)\), while for predictive heterogeneity \(P(Y|X)\) differs across sub-populations, which indicates diversified prediction mechanisms.
It is such disparity of prediction mechanisms that inhibits the performance of a universal predictive model on a heterogeneous dataset, which is the emphasis of the OOD problem and group fairness. We think sub-group discovery is more applicable for settings where the distributional shift is minor while high explainability is required, since it generates simplified rules that people can understand. Also, sub-group discovery methods are suitable for settings that only involve tabular data (typically from a relational database), where the input features have clear semantics. Our proposed method can deal with general machine learning settings, including complicated data (e.g., image data) that involves representation learning. Also, when people have to handle settings where data heterogeneity w.r.t. the prediction mechanism exists inside the data, our method is more applicable. However, both kinds of methods can be used to help people understand data and make more reasonable decisions.

## 8 Discussion on the Potential for Fairness

We find that combining our measure with algorithmic fairness is an interesting and promising direction, and we think our measure has the potential to deal with algorithmic bias. Our method can generate sub-populations with possibly different prediction mechanisms, which could help in the following aspects:

**Risk feature selection**: we could select features according to our predictive heterogeneity measure to see which features bring the largest heterogeneity. If they are sensitive features, people should avoid their effects, and if they are not, they could direct people to build better machine learning models.

**Examining algorithmic fairness**: we could use the learned sub-populations to examine whether a given algorithm is fair by calculating the performance gap across the sub-populations.

## 9 Conclusion

We define predictive heterogeneity as the first quantitative formulation of the data heterogeneity that affects the prediction of machine learning models. We demonstrate its theoretical properties and show that it benefits out-of-distribution generalization performance.

## Appendix A Proof of Proposition 6

**Proof** [Proof of Proposition 6] 1. _Monotonicity_: Because of \(\mathscr{E}_{1}\subseteq\mathscr{E}_{2}\), \[\mathcal{H}_{\mathcal{V}}^{\mathscr{E}_{1}}(X\to Y) =\sup_{\mathscr{E}\in\mathscr{E}_{1}}\mathbb{I}_{\mathcal{V}}(X\to Y|\mathscr{E})-\mathbb{I}_{\mathcal{V}}(X\to Y)\] (43) \[\leq\sup_{\mathscr{E}\in\mathscr{E}_{2}}\mathbb{I}_{\mathcal{V}}(X\to Y|\mathscr{E})-\mathbb{I}_{\mathcal{V}}(X\to Y)\] (44) \[=\mathcal{H}_{\mathcal{V}}^{\mathscr{E}_{2}}(X\to Y).\] (45) 2. _Nonnegativity_: According to the definition of the environment set, there exists \(\mathcal{E}_{0}\in\mathscr{E}\) such that for any \(e\in\mathrm{supp}(\mathcal{E})\), \(X,Y|\mathcal{E}=e\) is identically distributed as \(X,Y\).
Thus, we have \[\mathcal{H}_{\mathcal{V}}^{\mathscr{E}}(X\to Y) =\sup_{\mathscr{E}\in\mathscr{E}}\left[H_{\mathcal{V}}(Y|\emptyset,\mathcal{E})-H_{\mathcal{V}}(Y|X,\mathcal{E})\right]-\left[H_{\mathcal{V}}(Y |\emptyset)-H_{\mathcal{V}}(Y|X)\right]\] (46) \[\geq\left[H_{\mathcal{V}}(Y|\emptyset,\mathcal{E}_{0})-H_{ \mathcal{V}}(Y|X,\mathcal{E}_{0})\right]-\left[H_{\mathcal{V}}(Y|\emptyset)-H_ {\mathcal{V}}(Y|X)\right].\] (47) Specifically, \[H_{\mathcal{V}}(Y|X,\mathcal{E}_{0}) =\mathbb{E}_{e\sim\mathcal{E}_{0}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{x,y\sim X,Y|\mathcal{E}=e}[-\log f[x](y)]\right]\] (48) \[=\mathbb{E}_{e\sim\mathcal{E}_{0}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{x,y\sim X,Y}[-\log f[x](y)]\right]\] (49) \[=H_{\mathcal{V}}(Y|X).\] (50) Similarly, \(H_{\mathcal{V}}(Y|\emptyset,\mathcal{E}_{0})=H_{\mathcal{V}}(Y|\emptyset)\). Thus, \(\mathcal{H}_{\mathcal{V}}^{\mathscr{E}}(X\to Y)\geq 0\). 3. _Boundedness_: First, we have \[H_{\mathcal{V}}(Y|X,\mathcal{E}) =\mathbb{E}_{e\sim\mathcal{E}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{x,y\sim X,Y|\mathcal{E}=e}[-\log f[x](y)]\right]\] (51) \[=\mathbb{E}_{e\sim\mathcal{E}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{x\sim X|\mathcal{E}=e}\left[\mathbb{E}_{y\sim Y|x,e}[-\log f[x](y )]\right]\right]\] (52) \[\geq 0,\] (53) by noticing that \(\mathbb{E}_{y\sim Y|x}[-\log f[x](y)]\) is the cross entropy between \(Y|x,e\) and \(f[x]\). Next, \[H_{\mathcal{V}}(Y|\emptyset,\mathcal{E}) =\mathbb{E}_{e\sim\mathcal{E}}\left[\inf_{f\in\mathcal{V}} \mathbb{E}_{y\sim Y|\mathcal{E}=e}[-\log f[\emptyset](y)]\right]\] (54) \[\leq\inf_{f\in\mathcal{V}}\mathbb{E}_{e\sim\mathcal{E}}\left[ \mathbb{E}_{y\sim Y|\mathcal{E}=e}[-\log f[\emptyset](y)]\right]\] (55) \[=\inf_{f\in\mathcal{V}}\mathbb{E}_{y\sim Y}[-\log f[\emptyset](y)]\] (56) \[=H_{\mathcal{V}}(Y|\emptyset),\] (57) where Equation 55 is due to Jensen's inequality. Combing the above inequalities, \[\mathcal{H}_{\mathcal{V}}^{\mathscr{E}}(X\to Y) =\sup_{\mathcal{E}\in\mathscr{E}}\left[H_{\mathcal{V}}(Y|\emptyset, \mathcal{E})-H_{\mathcal{V}}(Y|X,\mathcal{E})\right]-\left[H_{\mathcal{V}}(Y| \emptyset)-H_{\mathcal{V}}(Y|X)\right] \tag{58}\] \[\leq\sup_{\mathcal{E}\in\mathscr{E}}H_{\mathcal{V}}(Y|\emptyset, \mathcal{E})-\left[H_{\mathcal{V}}(Y|\emptyset)-H_{\mathcal{V}}(Y|X)\right]\] (59) \[\leq H_{\mathcal{V}}(Y|\emptyset)-\left[H_{\mathcal{V}}(Y|\emptyset )-H_{\mathcal{V}}(Y|X)\right]\] (60) \[=H_{\mathcal{V}}(Y|X). \tag{61}\] 4. _Corner Case_: According to Proposition 2 in Xu et al. (2020), \[H_{\Omega}(Y|\emptyset) =H(Y).\] (62) \[H_{\Omega}(Y|X) =H(Y|X).\] (63) By taking random variables \(R,S\) identically distributed as \(X,Y|\mathcal{E}=e\) for \(e\in\operatorname{supp}(\mathcal{E})\), we have \[H_{\Omega}(Y|X,\mathcal{E}=e)=H_{\Omega}(S|R)=H(S|R)=H(Y|X, \mathcal{E}=e). \tag{64}\] Thus, \[H_{\Omega}(Y|X,\mathcal{E})=\mathbb{E}_{e\sim\mathcal{E}}[H_{ \Omega}(Y|X,\mathcal{E}=e)]=\mathbb{E}_{e\sim\mathcal{E}}[H(Y|X,\mathcal{E}=e )]=H(Y|X,\mathcal{E}). \tag{65}\] Similarly, we have \(H_{\Omega}(Y|\emptyset,\mathcal{E})=H(Y|\mathcal{E})\). Thus, \[\mathcal{H}_{\Omega}^{\mathscr{E}}(X\to Y) =\sup_{\mathcal{E}\in\mathscr{E}}\left[H_{\Omega}(Y|\emptyset, \mathcal{E})-H_{\Omega}(Y|X,\mathcal{E})\right]-\left[H_{\Omega}(Y|\emptyset) -H_{\Omega}(Y|X)\right] \tag{66}\] \[=\sup_{\mathcal{E}\in\mathscr{E}}\left[H(Y|\mathcal{E})-H(Y|X, \mathcal{E})\right]-\left[H(Y)-H(Y|X)\right]\] (67) \[=\sup_{\mathcal{E}\in\mathscr{E}}\mathbb{I}(Y;X|\mathcal{E})- \mathbb{I}(Y;X)\] (68) \[=\mathcal{H}^{\mathscr{E}}(X,Y). 
\tag{69}\] ## Appendix B Proof of Theorem 7 [Proof of Theorem 7] 1) \[H_{\mathcal{V}_{\mathcal{G}}}(Y|X) =\inf_{f\in\mathcal{V}_{\mathcal{G}}}\mathbb{E}_{x\sim X}\left[ \mathbb{E}_{y\sim Y|x}[-\log f[x](y)]\right] \tag{70}\] \[\leq\mathbb{E}_{x\sim X}\left[\mathbb{E}_{y\sim Y|x}[-\log\frac{ 1}{\sqrt{2\pi}\cdot\frac{1}{\sqrt{2\pi}}}\exp\left[-\frac{(y-g(x))^{2}}{2\cdot \frac{1}{2\pi}}\right]\right]\] (71) \[=\mathbb{E}_{x\sim X}\left[\mathbb{E}_{y\sim Y|x}[\pi(y-g(x))^{2 }]\right]=\pi\sigma^{2}. \tag{72}\] Equation 71 holds by taking \(f[x]=\mathcal{N}(g(x),\frac{1}{2\pi})\). 2) Given the function family \(\mathcal{V}_{\sigma}=\{f|f[x]=\mathcal{N}(\theta x,\sigma^{2}),\theta\in\mathbb{ R},\sigma\text{ fixed }\}\), by expanding the Gaussian probability density function in the definition of predictive \(\mathcal{V}\)-information, it could be shown that \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y)\propto\min_{k\in\mathbb{R}}- \mathbb{E}[(Y-kX)^{2}]+\text{Var}(Y), \tag{73}\] where the predictive \(\mathcal{V}\)-information is proportional to Mean Square Error subtracted by the variance of target, by a coefficient completely dependent on \(\sigma\). The minimization problem is solved by \[k=\frac{\mathbb{E}[XY]}{\mathbb{E}[X^{2}]}=1. \tag{74}\] Substituting \(k\) into eq.73, \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y)\propto(-\mathbb{E}[\epsilon^{2}]+ \text{Var}(X+\epsilon))=\text{Var}(X)=\mathbb{E}[X^{2}]. \tag{75}\] Denote \(\text{supp}(\mathcal{E})=\{\mathcal{E}_{1},\mathcal{E}_{2}\}\). Let \(Q\) be the joint distribution of \((X,\epsilon,\mathcal{E})\). Let \(Q(\mathcal{E}_{1})=\alpha\) and \(Q(\mathcal{E}_{2})=1-\alpha\) be the marginal of \(\mathcal{E}\). Abbreviate \(Q(X,\epsilon|\mathcal{E}=\mathcal{E}_{1})\) by \(P_{1}(X,\epsilon)\) and \(Q(X,\epsilon|\mathcal{E}=\mathcal{E}_{2})\) by \(P_{2}(X,\epsilon)\). Similar to 73, \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y|\mathcal{E})\propto\min_{k}- \mathbb{E}[(Y-kX)^{2}|\mathcal{E}]+\text{Var}(Y|\mathcal{E}). \tag{76}\] For \(\mathcal{E}=\mathcal{E}_{1}\), the minimization problem is solved by \[k=\frac{\mathbb{E}_{P_{1}}[XY]}{\mathbb{E}_{P_{1}}[X^{2}]}. \tag{77}\] Thus, \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y|\mathcal{E}=\mathcal{E}_{1}) \propto-\mathbb{E}_{P_{1}}\left[\left(Y-\frac{\mathbb{E}_{P_{1}}[XY]}{ \mathbb{E}_{P_{1}}[X^{2}]}X\right)^{2}\right]+\text{Var}_{P_{1}}(Y) \tag{78}\] \[=-\mathbb{E}_{P_{1}}[Y^{2}]+\frac{\mathbb{E}_{P_{1}}^{2}[XY]}{ \mathbb{E}_{P_{1}}[X^{2}]}+(\mathbb{E}_{P_{1}}[Y^{2}]-\mathbb{E}_{P_{1}}^{2}[ Y])=-\mathbb{E}_{P_{1}}^{2}[Y]+\frac{\mathbb{E}_{P_{1}}^{2}[XY]}{\mathbb{E}_{P_{1}}[ X^{2}]}. \tag{79}\] Similarly, we have \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y|\mathcal{E}=\mathcal{E}_{2}) \propto-\mathbb{E}_{P_{2}}^{2}[Y]+\frac{\mathbb{E}_{P_{2}}^{2}[XY]}{\mathbb{E }_{P_{1}}[X^{2}]}. \tag{80}\] Notably, \(\mathbb{E}_{P_{1}}[X^{2}]\) and \(\mathbb{E}_{P_{2}}[X^{2}]\) are constrained by \(\alpha\) and \(\mathbb{E}[X^{2}]\). \[\mathbb{E}[X^{2}]=\mathbb{E}[\mathbb{E}[X^{2}|\mathcal{E}]]=\alpha\mathbb{E}_ {P_{1}}[X^{2}]+(1-\alpha)\mathbb{E}_{P_{2}}[X^{2}]. \tag{81}\] Similarly, \[\mathbb{E}[X^{2}]=\mathbb{E}[XY]=\alpha\mathbb{E}_{P_{1}}[XY]+(1-\alpha) \mathbb{E}_{P_{2}}[XY]. \tag{82}\] \[0=\mathbb{E}[Y]=\alpha\mathbb{E}_{P_{1}}[Y]+(1-\alpha)\mathbb{E}_{P_{2}}[Y]. \tag{83}\] The moments of \(P_{2}\) could thereafter be represented by those of \(P_{1}\). 
\[\mathbb{E}_{P_{2}}[X^{2}]=\frac{\mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[X^{2 }]}{1-\alpha},\mathbb{E}_{P_{2}}[XY]=\frac{\mathbb{E}[X^{2}]-\alpha\mathbb{E}_ {P_{1}}[XY]}{1-\alpha},\mathbb{E}_{P_{2}}[Y]=-\frac{\alpha\mathbb{E}_{P_{1}}[Y ]}{1-\alpha}. \tag{84}\] Substituting to eq.80, \[\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y|\mathcal{E}=\mathcal{E}_{2}) \propto-\frac{\alpha^{2}}{(1-\alpha)^{2}}E_{P_{1}}^{2}[Y]+\frac{1}{1-\alpha} \frac{\left(\mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[XY]\right)^{2}}{ \mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[X^{2}]}. \tag{85}\] Thus, \[\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_{\sigma}}(X\to Y) =\sup_{\mathcal{E}\in\mathcal{E}}-\mathbb{I}_{\mathcal{V}_{\sigma }}(X\to Y)+\alpha\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y|\mathcal{E}= \mathcal{E}_{1})+(1-\alpha)\mathbb{I}_{\mathcal{V}_{\sigma}}(X\to Y| \mathcal{E}=\mathcal{E}_{2}) \tag{86}\] \[\propto\sup_{\mathcal{E}\in\mathcal{E}}-\mathbb{E}[X^{2}]- \alpha\mathbb{E}_{P_{1}}^{2}[Y]+\alpha\frac{\mathbb{E}_{P_{1}}^{2}[XY]}{ \mathbb{E}_{P_{1}}[X^{2}]}-\frac{\alpha^{2}}{1-\alpha}\mathbb{E}_{P_{1}}^{2}[ Y]+\frac{\left(\mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[XY]\right)^{2}}{ \mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[X^{2}]}\] (87) \[=\sup_{\mathcal{E}\in\mathcal{E}}-\frac{\alpha}{1-\alpha}\mathbb{ E}_{P_{1}}^{2}[X+\epsilon]+\alpha\frac{\mathbb{E}_{P_{1}}^{2}[X\epsilon]}{ \mathbb{E}_{P_{1}}[X^{2}]\left(\mathbb{E}[X^{2}]-\alpha\mathbb{E}_{P_{1}}[X^{ 2}]\right)}\mathbb{E}[X^{2}]. \tag{88}\] Assuming \(X\perp\epsilon\mid\mathcal{E}\), \[\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_{\sigma}}(X\to Y)\propto\sup_{ \mathcal{E}\in\mathcal{E}}-\frac{\alpha}{1-\alpha}\mathbb{E}_{P_{1}}^{2}[X+ \epsilon]\leq 0. \tag{89}\] From Proposition 6, we have \(\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_{\sigma}}(X\to Y)\geq 0\). Thus, \(\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_{\sigma}}(X\to Y)=0\). ## Appendix C Proof of Linear Cases (Theorem 8 and 9) **Proof** [Proof of Theorem 8] For the ease of notion, we denote the \(r(\mathcal{E}^{*})\) as \(r_{e}\), \(\sigma(\mathcal{E}^{*})\) as \(\sigma_{e}\), and \(\sigma(\mathcal{E}^{*})\cdot\epsilon_{v}\) as \(\epsilon_{e}\). And we omit the superscript \(\mathcal{C}\) of \(\mathcal{H}^{\mathcal{C}}_{\mathcal{V}}\). Firstly, we calculate the \(H_{\mathcal{V}}[Y|\emptyset]\) as: \[H_{\mathcal{V}}[Y|\emptyset] =\frac{1}{2\sigma^{2}}\text{Var}(Y)+\log\sigma+\frac{1}{2}\log 2\pi, \tag{90}\] \[H_{\mathcal{V}}[Y|\emptyset,\mathcal{E}^{*}] =\frac{1}{2\sigma^{2}}\mathbb{E}_{\mathcal{E}^{*}}[\text{Var}(Y| \mathcal{E}^{*})]+\log\sigma+\frac{1}{2}\log 2\pi. \tag{91}\] Therefore, we have \[H_{\mathcal{V}}[Y|\emptyset,\mathcal{E}^{*}]-H_{\mathcal{V}}[Y|\emptyset]=- \frac{1}{2\sigma^{2}}\text{Var}(\mathbb{E}[Y|\mathcal{E}^{*}])\leq 0. \tag{92}\] As for \(H_{\mathcal{V}}[Y|X]\), we have \[H_{\mathcal{V}}[Y|X]=\inf_{h_{S},h_{V}}\mathbb{E}_{X,Y}\left[\|Y-(h_{S}S+h_{V}V )\|^{2}\right]\frac{1}{2\sigma^{2}} \tag{93}\] \[=\inf_{h_{S},h_{V}}\mathbb{E}_{\mathcal{E}^{*}}\left[\mathbb{E}[\|f(S)+ \epsilon_{Y}-(h_{S}S+h_{V}(r_{e}f(S)+\epsilon_{e}))\|^{2}|\mathcal{E}^{*}]\right] \frac{1}{2\sigma^{2}}, \tag{94}\] where we let \(h_{S}=h_{S}-\beta\) here. 
Then we have \[2\sigma^{2}H_{\mathcal{V}}[Y|X] =\inf_{h_{S},h_{V}}\mathbb{E}_{\mathcal{E}^{*}}\left[\mathbb{E}[ \|(1-h_{V}r_{e})f(S)+\epsilon_{Y}-h_{S}S-h_{V}\epsilon_{e}\|^{2}|\mathcal{E}^{* }]\right] \tag{95}\] \[=\inf_{h_{S},h_{V}}\mathbb{E}_{\mathcal{E}^{*}}\left[\mathbb{E}[ \|(1-h_{V}r_{e})f(S)-h_{S}S\|^{2}|\mathcal{E}^{*}]\right]+\sigma_{Y}^{2}+h_{V} ^{2}\mathbb{E}_{\mathcal{E}^{*}}[\sigma_{e}^{2}], \tag{96}\] notably that here for \(e_{i},e_{j}\in\text{supp}(\mathcal{E}^{*})\), we assume \(P^{e_{i}}(S,Y)=P^{e_{j}}(S,Y)\) (we choose such \(\mathcal{E}^{*}\) as one possible split). And the solution of \(h_{S},h_{V}\) is \[h_{S} =\frac{\text{Var}(r_{e})\mathbb{E}[f^{2}(S)]\mathbb{E}[f(S)S]+ \mathbb{E}[\sigma_{e}^{2}]\mathbb{E}[f(S)S]}{\mathbb{E}[r_{e}^{2}]\mathbb{E}[ f^{2}(S)]\mathbb{E}[S^{2}]+\mathbb{E}[\sigma_{e}^{2}]\mathbb{E}[S^{2}]- \mathbb{E}^{2}[r_{e}]\mathbb{E}^{2}[f(S)S]}, \tag{97}\] \[h_{V} =\frac{\mathbb{E}[r_{e}](\mathbb{E}[f^{2}(S)]\mathbb{E}[S^{2}]- \mathbb{E}^{2}[f(S)S])}{\mathbb{E}[r_{e}^{2}]\mathbb{E}[f^{2}(S)]\mathbb{E}[S^ {2}]+\mathbb{E}[\sigma_{e}^{2}]\mathbb{E}[S^{2}]-\mathbb{E}^{2}[r_{e}]\mathbb{ E}^{2}[f(S)S]}. \tag{98}\] According to the assumption that \(\mathbb{E}[f(S)S]=0\), we have \[h_{S}=0,\quad h_{V}=\frac{\mathbb{E}[r(\mathcal{E}^{*})]\mathbb{E}[f^{2}]}{ \mathbb{E}[r^{2}(\mathcal{E}^{*})]\mathbb{E}[f^{2}]+\mathbb{E}[\sigma^{2}( \mathcal{E}^{*})]}. \tag{99}\] Therefore, we have \[2\sigma^{2}H_{\mathcal{V}}[Y|X] =\mathbb{E}_{\mathcal{E}^{*}}[\mathbb{E}[\|(1-h_{V}r_{e})f(S)\|^ {2}|\mathcal{E}^{*}]]+\sigma_{Y}^{2}+h_{V}^{2}\mathbb{E}_{\mathcal{E}^{*}}[ \sigma_{e}^{2}] \tag{100}\] \[=\frac{\text{Var}(r_{e})\mathbb{E}[f^{2}]+\mathbb{E}[\sigma^{2}( \mathcal{E}^{*})]}{\mathbb{E}[r_{e}^{2}]\mathbb{E}[f^{2}]+\mathbb{E}[\sigma^{ 2}(\mathcal{E}^{*})]}\mathbb{E}[f^{2}(S)]+\sigma_{Y}^{2},\] (101) \[2\sigma^{2}H_{\mathcal{V}}[Y|X,\mathcal{E}^{*}] =\sigma_{Y}^{2}+\mathbb{E}[(\frac{1}{\frac{r_{e}^{2}\mathbb{E}[ f^{2}]}{\sigma_{e}^{2}}}+1)^{2}]\mathbb{E}[f^{2}]+\mathbb{E}_{\mathcal{E}^{*}}[( \frac{1}{\frac{\sigma_{e}}{\sigma_{e}}+\frac{\sigma_{e}}{r_{e}\mathbb{E}[f^{2} ]}})^{2}]. \tag{102}\] Note that here we simply set \(\sigma=1\) in the main body. And we have: \[\mathcal{H}_{\mathcal{V}}(X\to Y)\approx\frac{\text{Var}(r_{e})\mathbb{E}[f^{ 2}]+\mathbb{E}[\sigma^{2}(\mathcal{E}^{*})]}{\mathbb{E}[r_{e}^{2}]\mathbb{E}[ f^{2}]+\mathbb{E}[\sigma^{2}(\mathcal{E}^{*})]}\mathbb{E}[f^{2}(S)] \tag{103}\] The approximation error is bounded by \(\frac{1}{2}\max(\sigma_{Y}^{2},R(r(\mathcal{E}^{*}),\sigma(\mathcal{E}^{*}), \mathbb{E}[f^{2}]))\), and \(R(r(\mathcal{E}^{*}),\sigma(\mathcal{E}^{*}),\mathbb{E}[f^{2}])\) is defined as: \[R(r(\mathcal{E}^{*}),\sigma(\mathcal{E}^{*}),\mathbb{E}[f^{2}])=\mathbb{E}[( \frac{1}{\frac{r_{e}^{2}\mathbb{E}[f^{2}]}{\sigma_{e}^{2}}+1})^{2}]\mathbb{E}[ f^{2}]+\mathbb{E}_{\mathcal{E}^{*}}[(\frac{1}{\frac{\sigma_{e}}{\sigma_{e}}+ \frac{\sigma_{e}}{r_{e}\mathbb{E}[f^{2}]}})^{2}] \tag{104}\] \(\blacksquare\) **Proof** [Proof of Theorem 9] Similar as the above proof. \(\blacksquare\) ## Appendix D Proof of the Error Bound for Finite Sample Estimation (Theorem 11) In this section, we will prove the error bound of estimating the predictive heterogeneity with the empirical predictive heterogeneity. Before the proof of Theorem 11 which is inspired by Xu et al. (2020), we will introduce three lemmas. Assume \(\forall x\in\mathcal{X}\),\(\forall y\in\mathcal{Y}\),\(\forall f\in\mathcal{V}\), \(\log f[x](y)\in[-B,B]\) where \(B>0\). 
Define a function class \(\mathcal{G}_{\mathcal{V}}^{k}=\{g|g(x,y)=\log f[x](y)q(\mathcal{E}=e_{k}|x,y), f\in\mathcal{V},q\in\mathcal{Q}\}\). Denote the Rademacher complexity of \(\mathcal{G}\) with \(N\) samples by \(\mathscr{R}_{N}(\mathcal{G})\). Define \[\hat{f}_{k}=\arg\inf_{f}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D }}-\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i}). \tag{105}\] Then for any \(q\in\mathcal{Q}\), any \(\delta\in(0,1)\), with a probability over \(1-\delta\), we have \[\left|q(\mathcal{E}=e_{k})H_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k})- \frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log\hat{f}_{k}[x_{i} ](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right| \tag{106}\] \[\leq 2\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{k})+B \sqrt{\frac{2\log\frac{1}{\delta}}{|\mathcal{D}|}}. \tag{107}\] Apply McDiarmid's inequality to the function \(\Phi(\mathcal{D})\) which is defined as: \[\Phi(\mathcal{D})=\sup_{f\in\mathcal{V},q\in\mathcal{Q}}\left|q(\mathcal{E}=e _{k})\mathbb{E}_{q}\left[-\log f[x](y)|\mathcal{E}=e_{k}\right]-\frac{1}{| \mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log f[x_{i}](y_{i})q(\mathcal{E }=e_{k}|x_{i},y_{i})\right|. \tag{108}\] Let \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) be two identical datasets except for one data point \(x_{j}\neq x_{j}^{\prime}\). We have: \[\Phi(\mathcal{D})-\Phi(\mathcal{D}^{\prime}) \tag{109}\] \[\leq \sup_{f\in\mathcal{V},q\in\mathcal{Q}}\left[\left|q(\mathcal{E}=e _{k})\mathbb{E}_{q}\left[-\log f[x](y)|\mathcal{E}=e_{k}\right]-\frac{1}{| \mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log f[x_{i}](y_{i})q(\mathcal{E }=e_{k}|x_{i},y_{i})\right|\right.\] (110) \[\qquad\quad-\left|q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log f [x](y)|\mathcal{E}=e_{k}\right]-\frac{1}{|\mathcal{D}^{\prime}|}\sum_{x_{i}^{ \prime},y_{i}^{\prime}\in\mathcal{D}^{\prime}}-\log f[x_{i}^{\prime}](y_{i}^{ \prime})q(\mathcal{E}=e_{k}|x_{i}^{\prime},y_{i}^{\prime})\right|\right]\] (111) \[\leq \sup_{f\in\mathcal{V},q\in\mathcal{Q}}\left|\frac{1}{|\mathcal{D} |}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x _{i},y_{i})-\frac{1}{|\mathcal{D}^{\prime}|}\sum_{x_{i}^{\prime},y_{i}^{ \prime}\in\mathcal{D}^{\prime}}-\log f[x_{i}^{\prime}](y_{i}^{\prime})q( \mathcal{E}=e_{k}|x_{i}^{\prime},y_{i}^{\prime})\right|\] (112) \[= \sup_{f\in\mathcal{V},q\in\mathcal{Q}}\frac{1}{|\mathcal{D}|}\left| \log f[x_{j}](y_{j})q(\mathcal{E}=e_{k}|x_{j},y_{j})-\log f[x_{j}^{\prime}](y _{j}^{\prime})q(\mathcal{E}=e_{k}|x_{j}^{\prime},y_{j}^{\prime})\right|\] (113) \[\leq \frac{2B}{|\mathcal{D}|}. \tag{114}\] According to McDiarmid's inequality, for any \(\delta\in(0,1)\), with a probability over \(1-\delta\), we have: \[\Phi(\mathcal{D})\leq\mathbb{E}_{\mathcal{D}}[\Phi(\mathcal{D})]+B\sqrt{\frac{2 \log\frac{1}{\delta}}{|\mathcal{D}|}}. \tag{115}\] Next we derive a bound for \(\mathbb{E}_{\mathcal{D}}[\Phi(\mathcal{D})]\). Consider a dataset \(\mathcal{D}^{\prime}\) independently and identically drawn from \(q(X,Y)=P(X,Y)\) with the same size as \(\mathcal{D}\). We notice that \[q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log f[x](y)|\mathcal{E}=e_{k}\right] =\mathbb{E}_{\mathcal{D}^{\prime}}\left[-\frac{1}{|\mathcal{D}^{\prime}|}\sum_ {x^{\prime}_{i},y^{\prime}_{i}\in\mathcal{D}^{\prime}}-\log f[x^{\prime}_{i} ](y^{\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{\prime}_{i})\right]. 
\tag{116}\] Thus, \(\mathbb{E}_{\mathcal{D}}[\Phi(\mathcal{D})]\) could be reformulated as: \[\mathbb{E}_{\mathcal{D}}[\Phi(\mathcal{D})] =\mathbb{E}_{\mathcal{D}}\left[\sup_{f\in\mathcal{V},q\in\mathcal{ Q}}\left|\mathbb{E}_{\mathcal{D}^{\prime}}\left[-\frac{1}{|\mathcal{D}^{ \prime}|}\sum_{x^{\prime}_{i},y^{\prime}_{i}\in\mathcal{D}^{\prime}}-\log f[x ^{\prime}_{i}](y^{\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{\prime}_{ i})\right]\right. \tag{117}\] \[\left.-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}- \log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right|\right]\] (118) \[\leq\mathbb{E}_{\mathcal{D}}\left[\sup_{f\in\mathcal{V},q\in \mathcal{Q}}\mathbb{E}_{\mathcal{D}^{\prime}}\left|-\frac{1}{|\mathcal{D}^{ \prime}|}\sum_{x^{\prime}_{i},y^{\prime}_{i}\in\mathcal{D}^{\prime}}-\log f[x^ {\prime}_{i}](y^{\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{\prime}_{i})\right.\] (119) \[\left.-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}- \log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right|\right]\] (120) \[\leq\mathbb{E}_{\mathcal{D},\mathcal{D}^{\prime}}\left[\sup_{f \in\mathcal{V},q\in\mathcal{Q}}\frac{1}{|\mathcal{D}|}\left|\sum_{x_{i},y_{i} \in\mathcal{D}}\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right.\] (121) \[\left.-\sum_{x^{\prime}_{i},y^{\prime}_{i}\in\mathcal{D}^{\prime} }\log f[x^{\prime}_{i}](y^{\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{ \prime}_{i})\right|\right]\] (122) \[\leq\mathbb{E}_{\mathcal{D},\sigma}\left[\sup_{f\in\mathcal{V},q \in\mathcal{Q}}\frac{1}{|\mathcal{D}|}\left|\sum_{x_{i},y_{i}\in\mathcal{D}} \sigma_{i}\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right|\right]\] (124) \[\quad+\mathbb{E}_{\mathcal{D}^{\prime},\sigma}\left[\sup_{f\in \mathcal{V},q\in\mathcal{Q}}\frac{1}{|\mathcal{D}^{\prime}|}\left|\sum_{x^{ \prime}_{i},y^{\prime}_{i}\in\mathcal{D}^{\prime}}\sigma_{i}\log f[x^{\prime} _{i}](y^{\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{\prime}_{i})\right|\right]\] (125) \[=2\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}^{k}_{\mathcal{V}}), \tag{126}\] where \(\sigma_{i}\) are independent Rademacher variables. Equation 121 follows from Jensen's inequality and the convexity of sup. Equation 123 holds due to the symmetry of \(\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})-\log f[x^{\prime}_{i}](y^ {\prime}_{i})q(\mathcal{E}=e_{k}|x^{\prime}_{i},y^{\prime}_{i})\) and the argument that Radamacher variables preserve the expected sum of symmetric random variables with a convex mapping (Ledoux and Talagrand (1991), Lemma 6.3). Substituting Equation 126 to Equation 115, we have for any \(\delta\in(0,1)\), with a probability over \(1-\delta\), \(\forall f\in\mathcal{V}\), \(\forall q\in\mathcal{Q}\), the following holds: \[\left|q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log f[x](y)| \mathcal{E}=e_{k}\right]-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D }}-\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right| \tag{127}\] \[\leq 2\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{k})+B \sqrt{\frac{2\log\frac{1}{\delta}}{|\mathcal{D}|}}. \tag{128}\] Let \(\tilde{f}_{k}=\arg\inf_{f}\{q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log f[x ](y)|\mathcal{E}=e_{k}\right]\}\). Let \(\hat{f}_{k}=\arg\inf_{f}\{\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{ D}}-\log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\}\). 
Now we have \[q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log\tilde{f}_{k}[x](y) |\mathcal{E}=e_{k}\right]-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{ D}}-\log\tilde{f}_{k}[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i}) \tag{129}\] \[\leq q(\mathcal{E}=e_{k})H_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k})- \frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log\hat{f}_{k}[x_{i}] (y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\] (130) \[\leq q(\mathcal{E}=e_{k})\mathbb{E}_{q}\left[-\log\hat{f}_{k}[x] (y)|\mathcal{E}=e_{k}\right]-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in \mathcal{D}}-\log\hat{f}_{k}[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i}). \tag{131}\] Combining Equation 127 and Equation 129-131, the lemma is proved. \(\blacksquare\) **Lemma 15**: _Assume \(\forall x\in\mathcal{X}\),\(\forall y\in\mathcal{Y}\),\(\forall f\in\mathcal{V}\), \(\log f[\emptyset](y)\in[-B,B]\) where \(B>0\). The definition of \(\mathcal{G}_{\mathcal{V}}^{k}\) and \(\mathscr{R}_{N}(\mathcal{G})\) follows from Lemma 14. Define \(\hat{f}_{k}=\arg\inf_{f}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D} }-\log f[\emptyset](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\)._ _Then for any \(q\in\mathcal{Q}\), any \(\delta\in(0,1)\), with a probability over \(1-\delta\), we have_ \[\left|q(\mathcal{E}=e_{k})H_{\mathcal{V}}(Y|\mathcal{E}=e_{k})- \frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log\hat{f}_{k}[ \emptyset](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right| \tag{132}\] \[\leq 2\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{k})+B \sqrt{\frac{2\log\frac{1}{\delta}}{|\mathcal{D}|}}. \tag{133}\] **Proof** Similar to Lemma 14, we could prove that \[\left|q(\mathcal{E}=e_{k})H_{\mathcal{V}}(Y|\mathcal{E}=e_{k})- \frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}-\log\hat{f}_{k}[ \emptyset](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right| \tag{134}\] \[\leq 2\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}^{ \emptyset}}^{k})+B\sqrt{\frac{2\log\frac{1}{\delta}}{|\mathcal{D}|}}, \tag{135}\] where \(\mathcal{G}_{\mathcal{V}^{\emptyset}}^{k}=\{g|g(x,y)=\log f[\emptyset](y)q( \mathcal{E}=e_{k}|x,y),f\in\mathcal{V},q\in\mathcal{Q}\}\). According to the definition for the predictive family \(\mathcal{V}\) (Xu et al. (2020), Definition 1), \(\forall f\in\mathcal{V}\), there exists \(f^{\prime}\in\mathcal{V}\) such that \(\forall x\in\mathcal{X}\), \(f[\emptyset]=f^{\prime}[x]\). Thus, \(\mathcal{G}^{k}_{\mathcal{V}\emptyset}\subset\mathcal{G}^{k}_{\mathcal{V}}\), and therefore \(\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}^{k}_{\mathcal{V}\emptyset})\leq \mathscr{R}_{|\mathcal{D}|}(\mathcal{G}^{k}_{\mathcal{V}})\). Substituting into Equation 134, the lemma is proved. **Lemma 16** ((Xu et al., 2020), Theorem 1): _Assume \(\forall x\in\mathcal{X}\),\(\forall y\in\mathcal{V}\),\(\forall f\in\mathcal{V}\), \(\log f[x](y)\in[-B,B]\) where \(B>0\). Define a function class \(\mathcal{G}^{*}_{\mathcal{V}}=\{g|g(x,y)=\log f[x](y),f\in\mathcal{V}\}\). The definition of \(\mathscr{R}_{N}(\mathcal{G})\) follows from Lemma 14._ _Then for any \(\delta\in(0,0.5)\), with a probability over \(1-2\delta\), we have_ \[\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{\mathcal{V}}(X\to Y )\right|\leq 4\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}^{*}_{\mathcal{V}})+2B \sqrt{\frac{2\log\frac{1}{\delta}}{|\mathcal{D}|}}. \tag{136}\] Finally we are prepared to prove Theorem 11. **Proof** [Proof of Theorem 11] We first bound the error of empirical estimation with the sum of items in Lemma 14,15,16. 
\[|\mathcal{H}^{\mathscr{G}_{\mathcal{K}}}_{\mathcal{V}}(X\to Y)-\hat{H}^{ \mathscr{G}_{\mathcal{K}}}_{\mathcal{V}}(X\to Y;\mathcal{D})| \tag{137}\] \[\leq\left|\sup_{\mathcal{E}\in\mathscr{E}_{K}}\mathbb{I}_{ \mathcal{V}}(X\to Y|\mathcal{E})-\sup_{\mathcal{E}\in\mathscr{E}_{K}}\hat{ \mathbb{I}}_{\mathcal{V}}(X\to Y|\mathcal{E};\mathcal{D})\right|+\left| \mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{\mathcal{V}}(X\to Y; \mathcal{D})\right|\] (138) \[\leq\sup_{\mathcal{E}\in\mathscr{E}_{K}}\left|\mathbb{I}_{ \mathcal{V}}(X\to Y|\mathcal{E})-\hat{\mathbb{I}}_{\mathcal{V}}(X\to Y| \mathcal{E};\mathcal{D})\right|+\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{ \mathbb{I}}_{\mathcal{V}}(X\to Y;\mathcal{D})\right|\] (139) \[=\sup_{q\in\mathcal{Q}}\left|\sum_{k=1}^{K}\left[q(\mathcal{E}=e _{k})H_{\mathcal{V}}(Y|\mathcal{E}=e_{k})-q(\mathcal{E}=e_{k})H_{\mathcal{V}}( Y|X,\mathcal{E}=e_{k})\right]\right.\] (140) \[\qquad\left.-\sum_{k=1}^{K}\left[q(\mathcal{E}=e_{k})\hat{H}_{ \mathcal{V}}(Y|\mathcal{E}=e_{k};\mathcal{D})-q(\mathcal{E}=e_{k})\hat{H}_{ \mathcal{V}}(Y|X,\mathcal{E}=e_{k};\mathcal{D})\right]\right|\] (141) \[\quad+\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{ \mathcal{V}}(X\to Y;\mathcal{D})\right|\] (142) \[\leq\sum_{k=1}^{K}\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{k} )H_{\mathcal{V}}(Y|\mathcal{E}=e_{k})-q(\mathcal{E}=e_{k})\hat{H}_{\mathcal{V }}(Y|\mathcal{E}=e_{k};\mathcal{D})\right|\] (143) \[\quad+\sum_{k=1}^{K}\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{k })H_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k})-q(\mathcal{E}=e_{k})\hat{H}_{\mathcal{ V}}(Y|X,\mathcal{E}=e_{k};\mathcal{D})\right|\] (144) \[\quad+\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{ \mathcal{V}}(X\to Y;\mathcal{D})\right|\] (145) \[=\sum_{k=1}^{K}\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{k})H_ {\mathcal{V}}(Y|\mathcal{E}=e_{k})-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i} \in\mathcal{D}}-\log\hat{f}_{k}[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right|\] (146) \[\quad+\sum_{k=1}^{K}\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{ k})H_{\mathcal{V}}(Y|X,\mathcal{E}=e_{k})-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i} \in\mathcal{D}}-\log\hat{f}^{\prime}_{k}[\emptyset](y_{i})q(\mathcal{E}=e_{ k}|x_{i},y_{i})\right|\] (147) \[\quad+\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{ \mathcal{V}}(X\to Y;\mathcal{D})\right|, \tag{148}\] where \(\hat{f}_{k}=\arg\inf_{f}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}- \log f[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\), and \(\hat{f}^{\prime}_{k}=\arg\inf_{f}\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in \mathcal{D}}-\log f[\emptyset](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\), for any \(q\in\mathcal{Q}\) and \(1\leq k\leq K\). For simplicity, let \[\operatorname{Err}_{k} =\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{k})H_{\mathcal{V}}( Y|X,\mathcal{E}=e_{k})-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}- \log\hat{f}_{k}[x_{i}](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i})\right|. \tag{149}\] \[\operatorname{Err}^{\prime}_{k} =\sup_{q\in\mathcal{Q}}\left|q(\mathcal{E}=e_{k})H_{\mathcal{V}}( Y|X,\mathcal{E}=e_{k})-\frac{1}{|\mathcal{D}|}\sum_{x_{i},y_{i}\in\mathcal{D}}- \log\hat{f}^{\prime}_{k}[\emptyset](y_{i})q(\mathcal{E}=e_{k}|x_{i},y_{i}) \right|.\] (150) \[\operatorname{Err}^{*} =\left|\mathbb{I}_{\mathcal{V}}(X\to Y)-\hat{\mathbb{I}}_{ \mathcal{V}}(X\to Y;\mathcal{D})\right|. 
\tag{151}\] Then, by Lemma 14,15,16, \[\operatorname{Pr}\left[|\mathcal{H}^{\mathcal{V}}_{K}-\hat{ \mathcal{H}}^{\mathcal{V}}_{K}(\mathcal{D})|>4(K+1)\mathscr{R}_{|\mathcal{D}|} (\mathcal{G}_{\mathcal{V}})+2(K+1)B\sqrt{\frac{2\log\frac{1}{\delta}}{| \mathcal{D}|}}\right] \tag{152}\] \[\leq\operatorname{Pr}\left[\sum_{i=1}^{K}\operatorname{Err}_{k} +\sum_{i=1}^{K}\operatorname{Err}^{\prime}_{k}+\operatorname{Err}^{*}>4(K+1) \mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}})+2(K+1)B\sqrt{\frac{2 \log\frac{1}{\delta}}{|\mathcal{D}|}}\right]\] (153) \[\leq\operatorname{Pr}\left[\sum_{i=1}^{K}\operatorname{Err}_{k} +\sum_{i=1}^{K}\operatorname{Err}^{\prime}_{k}+\operatorname{Err}^{*}>\sum_{ k=1}^{K}4\mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{k})+4\mathscr{R}_{| \mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{*})+2(K+1)B\sqrt{\frac{2\log\frac{1}{ \delta}}{|\mathcal{D}|}}\right]\] (154) \[\qquad+\left(\operatorname{Err}^{*}>4\mathscr{R}_{|\mathcal{D}|} (\mathcal{G}_{\mathcal{V}}^{*})+2B\sqrt{\frac{2\log\frac{1}{\delta}}{| \mathcal{D}|}}\right)\] (155) \[\leq 2(K+1)\delta. \tag{156}\] Equation 154 is because of \(\mathcal{G}_{\mathcal{V}}^{k}=\mathcal{G}_{\mathcal{V}}\), \(\mathcal{G}_{\mathcal{V}}^{*}\subset\mathcal{G}_{\mathcal{V}}\) and therefore \(R_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{k})\leq R_{|\mathcal{D}|}( \mathcal{G}_{\mathcal{V}})\), \(R_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}}^{*})\leq R_{|\mathcal{D}|}( \mathcal{G}_{\mathcal{V}})\). Hence, \[\operatorname{Pr}\left[|\mathcal{H}_{\mathcal{V}}^{\mathscr{E}_{ K}}(X\to Y)-\hat{H}_{\mathcal{V}}^{\mathscr{E}_{K}}(X\to Y;\mathcal{D})|\leq 4(K+1) \mathscr{R}_{|\mathcal{D}|}(\mathcal{G}_{\mathcal{V}})+2(K+1)B\sqrt{\frac{2 \log\frac{1}{\delta}}{|\mathcal{D}|}}\right] \tag{158}\] \[\geq 1-2(K+1)\delta. \tag{159}\] ## Appendix E Proof of Theorem 12 **Proof** [Proof of Theorem 12] The objective function of our IM algorithm is directly derived from the definition of empirical predictive heterogeneity in Definition 10. For the regression task, we assume the predictive family as \[\mathcal{V}_{1}=\{g:g[x]=\mathcal{N}(f_{\theta}(x),\sigma^{2}),f\text{ is the regression model and }\theta\text{ is learnable, }\sigma=1.0(\text{fixed})\}, \tag{160}\] where we only care about the output of the model and the noise scale of the Gaussian distribution is often ignored, for which we simply set \(\sigma=1.0\) as a fixed term. Then for each environment \(e\in\operatorname{supp}(\mathcal{E}^{*})\), the \(\mathbb{I}_{\mathcal{V}}(X\to Y|\mathcal{E}^{*}=e)\) becomes \[\mathbb{I}_{\mathcal{V}}(X\to Y|\mathcal{E}^{*}=e)\propto\min_{\theta} \mathbb{E}^{\|Y-f_{\theta}(X)\|^{2}|\mathcal{E}^{*}=e]-\operatorname{Var}(Y| \mathcal{E}^{*}), \tag{161}\] which corresponds with the MSE loss and the proposed regularizer in Equation 35. For the classification task, the derivation is similar, and the regularizer becomes the entropy of \(Y\) in sub-population \(e\) and the loss function becomes the cross-entropy loss.
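To make the regression-case objective of Equation 161 concrete, the following is a minimal sketch (not the authors' implementation) of the per-sub-population quantity it refers to, computed with soft assignment weights \(q(\mathcal{E}=e_{k}|x_{i},y_{i})\); Equation 35 itself is not reproduced in this excerpt, so the function and variable names are illustrative only.

```python
import numpy as np

def per_subpopulation_term(y, y_pred, q_k):
    """Weighted MSE minus weighted variance of Y within sub-population e_k.

    y      : targets, shape (n,)
    y_pred : predictions of the sub-population model f_theta, shape (n,)
    q_k    : soft assignments q(E = e_k | x_i, y_i), shape (n,)
    """
    w = q_k / q_k.sum()                      # normalize weights within the sub-population
    mse = np.sum(w * (y - y_pred) ** 2)      # E[ ||Y - f_theta(X)||^2 | E* = e_k ]
    y_mean = np.sum(w * y)
    var_y = np.sum(w * (y - y_mean) ** 2)    # Var(Y | E* = e_k)
    return mse - var_y                       # the quantity appearing in Equation 161
```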
2308.09792
An Integrated Simulation Tool for Dark Current Radiation Effects Using ACE3P and Geant4
A simulation workflow is under development to interface particle data transfer and matching of geometry between the electromagnetic (EM) cavity simulation code ACE3P and radiation code Geant4. The target is to simulate dark current (DC) radiation effects for the KEK 56-cell S-band accelerating structure using ACE3P and Geant4, and benchmark against KEK experiment data. As a first step, ACE3P DC simulations using a 7-cell structure have been performed by first calculating the operating mode in the structure and then tracking field-emitted electrons under the influence of the EM fields of the mode. The ACE3P simulation results agree well with the EM software CST for an accelerating gradient of 21.8 MV/m. The reader/writer I/O in ACE3P and the transfer of particle data from Track3P to Geant4 for DC radiation effects studies have been implemented. The simulation workflow between the two codes will be demonstrated with the goal of performing large-scale simulations for the KEK 56-cell structure. In addition to modeling DC effects in linacs, the integrated simulation workflow will be applicable to studying positron source and capture structure for future lepton colliders.
Lixin Ge, Zenghai Li, Cho-Kuen Ng, Liling Xiao, Hiroyasu Ego, Yoshinori Enomoto, Hiroshi Iwase, Yu Morikawa, Takashi Yoshimoto
2023-08-18T19:41:23Z
http://arxiv.org/abs/2308.09792v1
# An Integrated Simulation Tool for Dark Current Radiation Effects Using ACE3P and Geant4* ###### Abstract A simulation workflow is under development to interface particle data transfer and matching of geometry between the electromagnetic (EM) cavity simulation code ACE3P and radiation code Geant4. The target is to simulate dark current (DC) radiation effects for the KEK 56-cell S-band accelerating structure using ACE3P and Geant4, and benchmark against KEK experiment data. As a first step, ACE3P DC simulations using a 7-cell structure have been performed by first calculating the operating mode in the structure and then tracking field-emitted electrons under the influence of the EM fields of the mode. The ACE3P simulation results agree well with the EM software CST for an accelerating gradient of 21.8 MV/m. The reader/writer I/O in ACE3P and the transfer of particle data from Track3P to Geant4 for DC radiation effects studies have been implemented. The simulation workflow between the two codes will be demonstrated with the goal of performing large-scale simulations for the KEK 56-cell structure. In addition to modeling DC effects in linacs, the integrated simulation workflow will be applicable to studying positron source and capture structure for future lepton colliders. Lixin Ge, Zenghai Li, Cho-Kuen Ng, Liling Xiao SLAC, Stanford, CA 94025, USA Hiroyasu Ego, Yoshinori Enomoto, Hiroshi Iwase, Yu Morikawa, Takashi Yoshimoto KEK, 1-1 Oho, Tsukuba, Ibaraki, Japan * Japan Science and Technology Cooperation Program (2022-2025) The present approach of radiation calculation involves separate simulations such as using ACE3P [1-5] and Geant4 [6-8], or ACE3P and FLUKA [9,10]. In this paper, we aim to streamline the code integration process using Geant4 instead of FLUKA because the latter is proprietary and code developers generally have no access to the source codes. Geant4, on the other hand, is open source and a widely accepted radiation modeling package. The current integrated simulation process for large accelerator systems presents several challenges, such as long simulation times, the need for multiple experts from different physics domains, and difficulties in sharing and replicating simulations within the community. It is of utmost importance to provide an easy-to-use and streamlined tool for the accelerator community to better utilize the resources. Supported by HEP US - Japan Science and Technology Cooperation Program, a standalone software package for modeling particle-matter interactions radiation effects in accelerator cavity is under development by integrating ACE3P for modeling of accelerator structures and Geant4 for calculation of interactions of particles with matter. The work is to integrate the separate calculations into a single simulation workflow from start to end without the need for individual scientists performing different tasks and communicating with each other. This paper will describe the methods used in the integration package in Section 2: 2.1 is a schematic of the simulation workflow for the integrated software; 2.2 is the basic information of two integrated software ACE3P and Geant4; 2.3 shows how to deal with geometry CAD model in both ACE3P and Geant4; 2.4 describes a particle data transfer capability between ACE3P and Geant4 based on the standardized openPMD format. Preliminary dark current application using the developed simulation tool is presented in Section 3, followed by conclusion and future work in Section 4. 
## 2 Methods for ACE3P-Geant4 Integration ### Simulation Workflow A schematic of the simulation workflow for the integrated software is illustrated in Fig. 1. It starts with the construction of the geometrical models as two separate computation domains for ACE3P and Geant4 simulations. An integrated code driver first assigns the problem type to determine which code initiates a simulation: ACE3P for dark current and Geant4 for positron source simulations, respectively. Particles hitting the geometrical interface between the two computational domains will be collected and transferred to the other code for its respective physics simulation. Finally, the particle data and radiation distribution will be output to files for visualization and postprocessing. ### ACE3P and Geant4 ACE3P is a comprehensive set of parallel multi-physics codes with electromagnetics, thermal and mechanical simulation capabilities developed at SLAC for almost two decades through the support of DOE Computational Grand Challenge and SciDAC [11] initiatives. It is based on high order curved finite elements for high-fidelity modeling and has been implemented on massively parallel computers for increased problem size and speed. All modules are highly parallelized running on high performance computing (HPC) platforms with thousands or more cores such as those at NERSC [12]. The use of high-order finite elements on tetrahedral conformal meshes with quadratic surfaces enables accurate and fast solution. ACE3P has been well accepted by the accelerator community as a benchmark and guidance of structure optimizations from large-scale simulations. Its eigensolver module Omega3P and S-parameters calculation module S3P are used widely in RF structure simulations, and its particle tracking module Track3P models multipacting and dark current effects in the structure. In this integrated tool, Omega3P/S3P will be used for EM field calculation, and Track3P for DC simulation. Figure 1: Workflow for integration of ACE3P and Geant4 simulation Geant4 is a software toolkit for simulation of elementary particles passing through and interacting with matter. Geant4 is developed and maintained by the Geant4 Collaboration. Nowadays all LHC experiments at CERN, many current and near future experiments of DOE laboratories all rely on Geant4. Recently, a Geant4 based positron beam source package (GPos) has been developed at LBNL [13]. It is an easy-to-use publicly available C\(++\) code, with support for hybrid MPI and openPMD [14] as well as parallel I/O tool created to model relativistic particle beam and solid target interactions. Our goal is to develop a standalone software package for modeling particle-matter interactions radiation effects in accelerator cavity by integrating ACE3P for modeling of accelerator structures and Geant4 for calculation of interactions of particles with matter. ### Data transfer between ACE3P and Geant4 A C\(++\) API for openPMD [15] has been developed in ACE3P to convert ACE3P unstructured EM field data in NetCDF format from finite element discretization to structured data for Cartesian grid simulation used in other code, such as beam dynamics code IMPACT [16-18] and Geant4. The openPMD-API can execute in parallel using multiple compute cores with MPI, allowing faster data output. The electric field of a resonant mode of a simple pillbox cavity using the openPMD format is shown in Fig. 2. 
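To illustrate the kind of structured field record described above, the following is a minimal sketch using the openPMD-api Python bindings. It is not the ACE3P converter itself (which is implemented in C++); the grid shape, spacing, and field values are placeholders standing in for a resampled ACE3P solution.

```python
import numpy as np
import openpmd_api as io

# Placeholder: one Cartesian-resampled component of the electric field,
# standing in for the ACE3P finite-element solution of a pillbox mode.
nx, ny, nz = 64, 64, 128
e_x = np.zeros((nx, ny, nz), dtype=np.float64)

series = io.Series("pillbox_fields_%T.h5", io.Access.create)
it = series.iterations[0]

E = it.meshes["E"]
E.axis_labels = ["x", "y", "z"]
E.grid_spacing = [1.0e-3, 1.0e-3, 1.0e-3]   # placeholder cell size
E.grid_global_offset = [0.0, 0.0, 0.0]

E["x"].reset_dataset(io.Dataset(e_x.dtype, e_x.shape))
E["x"].store_chunk(e_x)                      # write the whole array at offset zero

series.flush()                               # push the data to disk
```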
For particle data transfer from ACE3P to Geant4, an intermediary particle data reader/writer for particles 6D phase space data (**x**, **p**) and their timestamps (t) in ASCII format has been developed in both ACE3P and Geant4. We will expand the particle data transfer capability to include openPMD-API reader/writer to enhance I/O efficiency. Figure 2: Visualization of electric field in openPMD format. ### CAD Model ACE3P and Geant4 can both start from a CAD model of a geometric domain. The ACE3P modules, Omega3P, S3P and Track3P, discretize the CAD model into a finite element mesh consisting of curved finite elements and provide mesh representation knowledge of which finite element entities are on which boundaries of the CAD model. The way ACE3P handles CAD geometry is through the third-party mesh generator Cubit [19], which can read in a CAD model to generate finite element meshes for ACE3P simulation. Geant4 employs a faceted representation of the CAD model boundary. The build-in GDML writer and reader will be used for CAD model import and export in Geant4. A community-standard format that most finite element software accepts is the Standard for the Exchange of Product Data (STEP), which is recognizable by ACE3P, but not directly by Geant4. For model import to Geant4, a converter tool is needed to convert a CAD model to Geant4 GDML recognized format. There are several developed convert tools for Geant4 CAD model import [20-23]. In this integration, Cubit is used to convert CAD model to STL format, which can be recognizable by Geant4 through its built-in GDML writer and reader. ## 3 Application ### Dark current and radiation simulation on KEK 7 cell structure KEK has performed DC simulations for a smaller model with 7 cells instead of 56 cells using the commercial EM code CST due to lack of computing power and memories. We benchmarked ACE3P DC simulations with CST using the same 7-cell structure. #### 3.1.1 EM modelling using ACE3P S3P, the S-parameter module in ACE3P, was used to solve for the operating mode in the 7-cell structure. Track3P, the particle tracking code module in ACE3P, was then used to track the particles under the EM fields from S3P. The ACE3P results are compared with CST for a field gradient of 21.8 MV/m as shown in Fig. 3. #### 3.1.2 Radiation modelling using Geant4 The simulations from Track3P and CST show that particles mainly emit from the cavity disks, where higher electric fields are. Some particles hit the cavity wall and interact with it, which can be studied using Geant4. The primary particles collected from Track3P, the 7-cell structure solid model with the two couplers and the 7-cell vacuum cavity are loaded into Geant4 for radiation study. Fig. 4 shows the preliminary radiation simulation setup, and further radiation study will be carried out for the 56-cell real structure. ### Dark current and radiation simulation on KEK 56 cell structure KEK performed dark current and radiated light intensity study on a 56-cell S-band accelerating structure [24]. Fig. 5 is the bench test of the S-band structure at KEK. Based on the S-band 56-cell model provided by KEK (Fig. 6), a preliminary dark current simulation is performed on NERSC Perlmutter supercomputer. The simulation procedure is as follows. 1) A mesh with 4M curved tetrahedral elements is generated for the full 56-cell vacuum region using Cubit, with denser mesh resolution around each cell iris region (Figure 7). Figure 4: Particles trajectory on 7-cell structure by using Geant4. 
Figure 3: Energy spectrum from Track3P (left) and CST (right) in the 7-cell structure.

2) S3P, the S-parameter module in ACE3P, is used to solve for the operating mode with frequency at 2.856 GHz. It took several minutes to obtain the second-order EM field using 8 CPU nodes on NERSC Perlmutter. Fig. 7 shows the electric field magnitude.

3) Track3P, a particle tracking code in ACE3P, is used for dark current simulation. Electrons are emitted from the cavity surface according to the Fowler-Nordheim formula. The high-fidelity geometry representation built into the finite-element method allows for realistic modeling of particle emission on the cavity wall. These electrons contribute to the dark current, and their movements in the cavity are governed by the rf fields. When the electrons hit the cavity wall, they are terminated in the ACE3P simulation and their phase space information is written to a file, which is used for postprocessing and further study. For a typical dark current simulation, a total of 80k primary particles are emitted from the surface and it takes 25 RF cycles for particles to transit the whole 56 cells. The computational time was less than 30 minutes to complete the end-to-end simulation using 8 CPU Perlmutter nodes. Fig. 8 shows a snapshot of particle trajectories. This calculation will serve as a validation of the integrated simulation tool.

4) Dark current profiles observed on a phosphor screen will be used for validation of the integrated simulation code. Occasional high current spots have been detected on the screen and seem to cause extensive direct hits and radiation damage to the structure; exact simulation could trace the emission origins of the high dark current in the structure and play a crucial role in radiation protection. The benchmark of dark current between the ACE3P simulation and measured data from KEK is in progress.

Figure 5: Bench test of an S-band structure at KEK.

## 4 Conclusion and Future Work

An integrated simulation tool for dark current radiation effects using ACE3P and Geant4 is under development. The reader/writer I/O for transferring particle data from Track3P to Geant4 for DC radiation effects studies has been implemented. The development of the geometry interface between ACE3P and Geant4 has been completed, enabling the use of CAD models in the integrated tool. A 7-cell structure dark current simulation has been benchmarked against the EM software CST with good agreement. Dark current validation between the ACE3P simulation and KEK measurements for the 56-cell S-band accelerating structure is in progress. In addition, further capability development and applications of the tool are planned, for example:

* Implement a Python script to facilitate the simulation workflow on NERSC supercomputers.
* Perform thermal analysis using ACE3P based on positron target simulations from Geant4.
* Complete the positron generation simulation with diagnostics and visualization capabilities.

## Acknowledgments

This research used resources of the National Energy Research Scientific Computing (NERSC) Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2302.13791
Repeated Purification versus Concatenated Error Correction in Fault Tolerant Quantum Networks
Entanglement distribution is a core mechanism for the future quantum Internet. The quantum world is, however, a faulty environment. Hence, successful entanglement swapping is error-prone. The occurrence of quantum state errors can be mitigated using purification and error correction, which can be repeated in the former case and concatenated in the latter case. Repeated purification merges low-fidelity qubits into higher-quality ones, while concatenated error correction builds upon the redundancy of quantum information. In this article, we study in-depth and compare the two options: repeated purification and concatenated error correction. We consider using repeated purification and concatenated error correction to mitigate the presence of faults that occur during the establishment of Bell pairs between remote network nodes. We compare their performance versus the number of repetitions or concatenations, to reach a certain level of fidelity in quantum networks. We study their resource requirements, namely, their work memory complexity (e.g., number of stored qubits) and operational complexity (e.g., number of operations). Our analysis demonstrates that concatenated error correction, versus repeated purification, requires fewer iterations and has lower operational complexity than repeated purification to reach high fidelity at the expense of increased memory requirements.
Michel Barbeau, Joaquin Garcia-Alfaro, Evangelos Kranakis
2023-02-27T14:10:35Z
http://arxiv.org/abs/2302.13791v1
# Repeated Purification versus Concatenated Error Correction in Fault Tolerant Quantum Networks ###### Abstract. Entanglement distribution is a core mechanism for the future quantum Internet. The quantum world is, however, a faulty environment. Hence, successful entanglement swapping is error-prone. The occurrence of quantum state errors can be mitigated using purification and error correction, which can be repeated in the former case and concatenated in the latter case. Repeated purification merges low-fidelity qubits into higher-quality ones, while concatenated error correction builds upon the redundancy of quantum information. In this article, we study in-depth and compare the two options: repeated purification and concatenated error correction. We consider using repeated purification and concatenated error correction to mitigate the presence of faults that occur during the establishment of Bell pairs between remote network nodes. We compare their performance versus the number of repetitions or concatenations, to reach a certain level of fidelity in quantum networks. We study their resource requirements, namely, their work memory complexity (e.g., number of stored qubits) and operational complexity (e.g., number of operations). Our analysis demonstrates that concatenated error correction, versus repeated purification, requires fewer iterations and has lower operational complexity than repeated purification to reach high fidelity at the expense of increased memory requirements. Key words and phrases:Quantum Network, Quantum Communication, Quantum Repeater, Entanglement Swapping, Entanglement Distribution, Repeated Purification, Concatenated Error Correction 2020 Mathematics Subject Classification: 14C35, 14C the former multiple times or concatenating several instances of the latter. Both purification and error correction deserve to be considered to handle the occurrence of quantum state faults. Repeated purification enhances the quality of a quantum state by merging low-fidelity qubits into higher-quality ones. For concatenated error correction, the quality enhancement procedure leverages redundant quantum information. Recent research shows the relevance of quantum error correction to make advances in the quantum Internet (Han et al., 2017; Chen et al., 2018). Software tools that benefit from such techniques include security applications (e.g., enforcement of quantum key establishment (Barbaeau et al., 2019)) and distributed applications (e.g., augmenting the parallelism of quantum machine learning (Barbaeau et al., 2019)). An interesting question is which, among repeated purification and concatenated error correction, is best to use in the quantum Internet. In this article, we analyze this question in depth. For each case, namely, repeated purification and concatenated error correction, we develop analytic models of fidelity, quantum memory complexity (number of required qubits), and time complexity (number of operations). We quantify the requirements (memory and operations) to improve the quality of a quantum state using repeated purification and concatenated error correction. The former includes work memory (e.g., number of qubits stored), while the latter refers to the process's fidelity improvement operational complexity (e.g., number of steps required). Our analysis demonstrates that reaching high fidelity via concatenated error correction requires fewer iterations than repeated purification at the expense of increased memory requirements. 
At the same time, repeated purification increases the operational complexity in contrast with concatenated error correction. Numeric simulations complement the results obtained with the analytic models. The article is structured as follows. Section 2 surveys related work. Section 3 presents our model for quantum networks and repeaters. Section 4 introduces entanglement swapping and discusses our error model and repeated purification in conjunction with concatenated error correction for fidelity enhancement. Section 5 provides our analysis of the efficiency of each approach (i.e., repeated purification vs. concatenated error correction). Section 6 presents the results of numeric simulations. Finally, Section 7 summarizes our main work and discusses prospects for future research. ## 2. Related Work Metropolitan-scale applications using quantum networks have been recently analyzed via simulation (Han et al., 2017; Chen et al., 2018). Results show that practical quantum-enhanced network functionalities may soon be ready for security applications (e.g., enforcement of quantum key establishment (Barbaeau et al., 2019)) and distributed applications (e.g., augmenting the parallelism of quantum machine learning (Barbaeau et al., 2019)). Nevertheless, quantum-enhanced networks must face the problem of entanglement distribution under fidelity constraints (Barbaeau et al., 2019). Given a source and a destination, the length of the channel imposes a significant decay in the quality of communication. Several protocols have been proposed to address this problem (Han et al., 2017; Chen et al., 2018; Chen et al. with a focus on linear (Han and Yang, 2007; Yang and Yang, 2008; Yang and Yang, 2010), grid (Han and Yang, 2008), ring (Han and Yang, 2008; Yang and Yang, 2010) or sphere (Han and Yang, 2008) topologies, in which the size of the required quantum memory is related to the number of neighbors that a repeater has. Under this classical error model, repeated purification (Zhou and Yang, 2008) vs. concatenated repetition codes (Zhou and Yang, 2008) can be compared (Zhou and Yang, 2008) to decide which one achieves better performance with minimal increase in resources. Our analysis demonstrates that concatenated error correction requires fewer iterations and operations than repeated purification at the expense of increased memory requirements. ## 3. Quantum Networking We begin with a description of our quantum network model and architecture. As network model (Kozil and Riedler, 2007), let us consider a connected graph denoted as the pair \(G=(V,E)\), where \(V\) is the vertex set and \(E\) the edge set. Assume that the set \(V\) of vertices is partitioned into two distinct subsets \(R\) and \(T\), such that \(V=R\cup T\), where \(R\) is the set of _repeaters_ and \(T\) is the set of _terminals_. A network example is depicted in Fig. 1. An edge represents a bi-directional quantum channel that can be used to establish entanglement between its two endpoints. All repeaters and terminals can perform entanglement swapping, but have limited quantum memory. Let us consider a path \(p=v_{0},v_{1},\ldots,v_{n-1},v_{n}\) of \(n+1\) vertices in \(V\). The start and final vertices \(v_{0}\) and \(v_{n}\) are terminals in \(T\). All intermediate vertices \(v_{i}\in R\), \(0<i<n\), are repeaters in \(R\). The number \(n\) of edges of \(p\) is its length and is denoted by \(|p|\). 
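As a concrete illustration of this graph model, the short sketch below builds such a graph with the NetworkX library (the same library used for the simulations of Section 6) and checks that a terminal-to-terminal path has only repeaters as interior vertices. The node labels and topology are made up for illustration and are not taken from the paper.

```python
# Illustrative sketch of the network model G = (V, E) with V partitioned into
# repeaters R and terminals T. Node labels and topology are hypothetical.
import networkx as nx

G = nx.Graph()
terminals = ["t1", "t2"]            # terminal set T
repeaters = ["r1", "r2", "r3"]      # repeater set R
G.add_nodes_from(terminals, kind="terminal")
G.add_nodes_from(repeaters, kind="repeater")
# Terminals attach only to repeaters; repeaters form the backbone.
G.add_edges_from([("t1", "r1"), ("r1", "r2"), ("r2", "r3"), ("r3", "t2")])

# A path p = v_0, ..., v_n between two terminals.
p = nx.shortest_path(G, "t1", "t2")
length = len(p) - 1                                       # |p| = number of edges
interior_ok = all(G.nodes[v]["kind"] == "repeater" for v in p[1:-1])
print(p, length, interior_ok)       # ['t1', 'r1', 'r2', 'r3', 't2'] 4 True
```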
Definition 1 (Complete Set of Paths).: _A set \(P\) of paths in the given graph \(G\) is called complete when for any pair \(t,t^{\prime}\) of terminals in \(T\) there is a path \(p=v_{0},v_{1},\ldots,v_{n-1},v_{n}\) in \(P\) such that \(t=v_{0},t^{\prime}=v_{n}\) and all intermediate vertices \(v_{i}\), for \(0<i<n\), are repeaters in \(R\)._ We require that: i) terminal nodes are not adjacent to each other, ii) every terminal node is adjacent to at least one repeater, and iii) adjacent repeaters can communicate directly with each other. Definition 2 (Capacity of a Repeater).: _For a given complete set \(P\) of paths in a graph \(G\) and a repeater \(r\in R\), let the capacity of \(r\), denoted by \(C_{P}(r)\), be defined as the number of paths \(p\in P\) that pass through \(r\), see Fig. 2._ According to this definition, the capacity of a repeater is proportional to the number of qubits it must be able to store, paired with entanglement, to enable communication between terminals. Figure 1. Vertices depicted by disks are terminals in \(T\) while those depicted by squares are repeaters in \(R\). Note that terminals are directly connected to repeaters. A dashed line represents a path consisting of repeaters; the endpoints of the path connect terminals. Definition 3 (Capacity Induced by a Complete Set of Paths): _For a given complete set \(P\) of paths in a graph \(G\), the capacity of \(G\) induced by the complete set \(P\) of paths is defined as the maximum capacity caused by repeaters in \(R\). It is defined by the formula_ \[C_{G}(P):=\max_{r\in R}C_{P}(r)\text{ qubits}. \tag{1}\] Let \(\mathcal{P}_{G}\) denote the set of complete sets of paths for the graph \(G\). When this is understood from the context, we omit the subscript \(G\) in \(\mathcal{P}_{G}\). Among the collections of a complete set of paths for the graph \(G\), we are interested in minimizing the quantity \(C_{G}(P)\), namely \[\min_{P\in\mathcal{P}_{G}}C_{G}(P)=\min_{P\in\mathcal{P}_{G}}\max_{r\in R}C_{ P}(r)\text{ qubits}, \tag{2}\] where the minimum is taken over the set \(\mathcal{P}_{G}\) of all possible complete sets of paths \(P\) for the graph \(G\). The main problem concerns the capacity induced by connecting all pairs of terminals by paths consisting of repeaters. We are aiming for an algorithm that defines the set \(P\) of paths while minimizing the resulting capacity induced on the graph. The main problem is formally described as follows. Problem 4: _Given a graph \(G\), find a complete set \(P\) of paths that attains or approximates the quantity \(\min_{P\in\mathcal{P}_{G}}C_{G}(P)\)._ This capacity is somewhat related to the number of qubits the repeaters would need to store to allow communication between terminals using qubits. In the sequel, we explore these various aspects of quantum network architecture. Quantum networks can be visualized by employing layering architectures [32]. Figure 3 depicts the layers assumed in this work. Each network node (repeaters and terminals) comprises a physical, a link, and a network layer. The physical Figure 3: Quantum network layered architecture. Figure 2: The capacity of a repeater \(r\) is the number of paths that go through \(r\); in the picture, this is equal to three. Note that the hollow squares depict repeaters (from the set \(R\)) and hollow disks terminals (from the set \(T\)). layer assumes error-prone point-to-point transfer of quantum bits, using single photons or laser pulses. 
It also assumes the generation of low-fidelity entangled pairs using, e.g., entangled photon pair source devices (Shi et al., 2017). Such devices can be configured to attempt the creation of Bell pairs continuously. When Bell pair creation is successful, the resulting two qubits are physically separated. For instance, the first qubit can go to the left node and the second to the right node. The physical transmission of entangled pairs is also prone to errors. Non-perfect entanglement (fidelity below one) assumes Bell pair creation, but potentially contains errors. Fidelity improvement methods can be conducted at the upper layers (at the link and network layers). These two upper layers implement purification, error correction, and entanglement swapping. There is a transport layer and an application layer in all the terminals. The transport layer uses the network layer to provide end-to-end transfer of quantum states to participating processes. The transport layer may also implement end-to-end error correction. The application layer comprises processes running quantum algorithms. ## 4. Fault tolerant quantum networking The quantum Internet will rely on establishing quality Bell pairs between network nodes. Bell pairs can transfer quantum states using teleportation, or classical data, using super-dense coding. For two neighbor network nodes connected by a direct link, Bell pairs can be established by leveraging parametric down-conversion. Bell pairs can be established with entanglement swapping for remote network nodes, not directly linked. Nevertheless, both parametric down-conversion and entanglement swapping may be faulty and cause errors. In the sequel, we assume that adjacent nodes, repeaters, or terminals, use direct communications to establish Bell pairs. Quantum repeaters and entanglement swapping establish Bell pairs between non-adjacent repeaters and terminals. Bell pair establishment procedures can be faulty and introduce errors. Fault tolerance can be achieved using purification and error correction. The usage of entanglement swapping, repeated purification, and concatenated error correction is discussed further. Quantum errors resulting from faulty procedures are modeled in several ways. For the sake of simplicity and without loss of generality, let us consider a bit-flip model. Randomly, qubit \(\ket{0}\) is converted to \(\ket{1}\), or vice versa. An error changes the qubit \(\alpha\ket{0}+\beta\ket{1}\) to the qubit \(\beta\ket{0}+\alpha\ket{1}\). For a Bell pair such as \[\ket{\Phi^{+}}=\frac{\ket{00}+\ket{11}}{\sqrt{2}} \tag{3}\] when both qubits are inverted, the errors are canceled, the term \(\ket{00}\) becomes \(\ket{11}\) and vice versa. However, the presence of a single qubit error results in the quantum state \[\ket{\Psi^{+}}=\frac{\ket{01}+\ket{10}}{\sqrt{2}}. \tag{4}\] In this example, the first or second qubit flips, but not both. However, in both cases, the outcome is the same. When the qubit in the first position is flipped, the term \(\ket{00}\) is transformed to \(\ket{10}\) and the term \(\ket{11}\) is transformed \(\ket{01}\). When the qubit in the second position is flipped, the term \(\ket{00}\) is transformed to \(\ket{01}\), and the term \(\ket{11}\) is transformed to \(\ket{10}\). The resulting quantum state is the same for both events, Eq. (4). Let \(p\in[0,1]\) be the probability of a single qubit inversion error in a Bell pair. The error model is represented as the quantum state \[\sqrt{1-p}\ket{\Phi^{+}}+\sqrt{p}\ket{\Psi^{+}}. 
\tag{5}\] Entanglement swapping is a core quantum networking procedure. In an instance of the procedure, three network nodes are involved: a source \(s\), a repeater \(r\), and a destination \(d\). The source and destination can be repeaters or terminals. In the sequel, possession subscripts are used. The subscript \(s\) in the ket-notation \(\ket{\phi}_{s}\) means that the qubit \(\ket{\phi}\) is possessed by nodes \(s\). The subscript \(sd\) in the ket-notation \(|\Phi^{+}\rangle_{sd}\) means that the Bell pair \(|\Phi^{+}\rangle\) is shared between nodes \(s\) and \(d\). Node \(s\) possesses the first qubit, while node \(d\) possesses the second qubit. Entanglement swapping assumes that \(s,r\) and \(r,d\) share the Bell pairs \(|\Phi^{+}\rangle_{sr}\) and \(|\Phi^{+}\rangle_{rd}\), respectively. They may have been created using parametric down-conversion or previous instances of entanglement swapping. Next, the repeater \(r\) does Bell measurement of the second qubit of \(|\Phi^{+}\rangle_{sr}\) and the first qubit of \(|\Phi^{+}\rangle_{rd}\) into the classical bits \(c_{1}\) and \(c_{2}\), respectively. The repeater sends the two classical bits \(c_{1}\) and \(c_{2}\) resulting from the measurement to the destination \(d\). If \(c_{2}\) is equal to one, then \(d\) applies the Pauli gate \(X\) to the second qubit of \(|\Phi^{+}\rangle_{rd}\). If \(c_{1}\) is equal to one, then \(d\) also applies to gate \(Z\). The final result is an end-to-end Bell pair \(|\Phi^{+}\rangle_{sd}\) shared between the source and destination, the first qubit of \(|\Phi^{+}\rangle_{sr}\) and second qubit of \(|\Phi^{+}\rangle_{rd}\), possibly transformed by the \(X\) and \(Z\) gates. Entanglement swapping can be multi-hop. In such a scenario, several swapping operations are used to establish entanglement between two distant terminals, \(s\) and \(d\), connected by a multi-hop path. Using a routing algorithm, a path \(p=r_{0}=s,r_{1},\ldots,r_{n-1},r_{n}=d\) is chosen, where \(r_{1},\ldots,r_{n-1}\) are repeaters, with \(r_{0}\) equal to \(s\) and \(r_{n}\) equal to \(d\). Entanglement is established stage by stage. There are two available scheduled swapping protocols [19; 20], namely, sequential and nested, see Figure 4. For the sequential protocol, for \(i=1,2,3,\ldots,n-1\), using repeater \(r_{i}\) as intermediate, an entanglement swapping operation creates a Bell pair \(|\Phi^{+}\rangle_{sr_{i+1}}\) between nodes \(s\) and \(r_{i+1}\). In \(n-1\) iterations, a Bell pair \(|\Phi^{+}\rangle_{sd}\) is created between terminals \(s\) and \(d\). For the sake of simplicity, let us assume that the path length \(n\) is a power of two. With the nested protocol, firstly, for \(i=0,2,4,\ldots,n-2\), using repeater \(r_{i+1}\) as intermediate, an entanglement swapping operation creates a Bell pair \(|\Phi^{+}\rangle_{r_{i}r_{i+2}}\) between nodes \(r_{i}\) and \(r_{i+2}\). Next, for \(i=0,4,8,\ldots,n-4\), using node repeater \(r_{i+2}\) as intermediate, entanglement swapping creates a Bell pair \(|\Phi^{+}\rangle_{r_{i}r_{i+4}}\) between nodes \(r_{i}\) and \(r_{i+4}\). Each iteration doubles the length of the segment bridged by a pair. In \(\log_{2}n\) iterations, a Bell pair \(|\Phi^{+}\rangle_{sd}\) is created between terminals \(s\) and \(d\). Note that in both protocols, sequential and nested, every repeater needs to be able to store two qubits. Errors may result from faulty entanglement swapping. Figure 5 (a) pictures the initial state of an instance of the entanglement swapping procedure. 
There is a chain of two point-to-point links connecting terminal \(s\) to repeater \(r\) and repeater \(r\) to terminal \(d\). Terminal \(s\) shares a Bell pair \(|\Phi^{+}\rangle_{sr}\) with repeater \(r\). Repeater \(r\) shares a Bell pair \(|\Phi^{+}\rangle_{rd}\) with terminal \(d\). Part (b) shows the target state resulting from a successful entanglement swapping. Terminal \(s\) shares a Bell pair \(|\Phi^{+}\rangle_{sd}\) with terminal \(d\). Assuming the bit-flip error model, Part (c) shows an entanglement swapping that failed to produce a correct result. Terminal \(s\) shares a Bell pair \(|\Psi^{+}\rangle\) with terminal \(d\). Qubit errors are an important quantum networking problem. However, the quality of a Bell pair can be improved with repeated purification and concatenated error correction, discussed in the sequel. Purification is a procedure executed between two nodes, a source terminal \(s\) and a destination terminal \(d\), to augment the fidelity of Bell pairs [34]. The outcome of purification can be characterized by the probability to correct errors successfully, referring to the concept of fidelity. Figure 4: Scheduled swapping, (a) sequential and (b) nested. Fidelity indicates the degree of resemblance of a quantum state to its original value. Fidelity is affected by errors but can be improved using purification. The goal is to establish high-fidelity Bell pairs between quantum network nodes. Figure 6 depicts purification. The goal is to establish the high-fidelity Bell pair \(\ket{\Phi^{+}}_{sd}\). The leftmost dotted-line rectangle represents the attempt by a repeater \(r\) to create two Bell pairs shared with nodes \(s\) and \(d\). During Bell pair establishment, qubit errors can be introduced. Rectangle \(E\) models the introduction of errors. For the sake of simplicity, let us consider solely bit-flip errors. The action of \(E\) on every one of the four qubits is defined as the following weighted sum: \[\sigma=\sqrt{p_{I}}\cdot I+\sqrt{p_{X}}\cdot X,\text{ with identity and Pauli matrices }I\text{ and }X,\text{ and probabilities }p_{I}+p_{X}=1 \tag{6}\] The term \(p_{X}\) is a bit-flip error probability. Gate \(E\) is the tensor product of four such gates, that is, \(E=\sigma^{\otimes 4}\). In the output of \(E\), let us denote the first pair's first qubit as \(\ket{\phi}_{s}\) while the first qubit of the second pair is \(\ket{\psi}_{s}\), both possessed by terminal \(s\). Figure 5. Entanglement swapping in the presence of a bit-flip error. (a) Initial state. (b) Target outcome. (c) Faulty outcome. Figure 6. Purification procedure. According to the bit-flip error model, gate \(E\) represents the arbitrary corruption of qubits. As depicted, Bell pair creation may happen at the physical layer, followed by the link layer; it may also happen at the network layer. Using \(\ket{\phi}_{s}\) as the control qubit and \(\ket{\psi}_{s}\) as the target qubit, node \(s\) applies a \(CNOT\) gate yielding the pair \(\ket{\phi}_{s}\ket{\psi^{\prime}}_{s}=CNOT\left(\ket{\phi}_{s}\ket{\psi}_{s}\right)\). Let us denote the first pair's second qubit as \(\ket{\phi}_{d}\) while the second qubit of the second pair is \(\ket{\psi}_{d}\), both possessed by terminal \(d\). Using \(\ket{\phi}_{d}\) as the control qubit and \(\ket{\psi}_{d}\) as the target qubit, node \(d\) applies a \(CNOT\) gate yielding the pair \(\ket{\phi}_{d}\ket{\psi}_{d}^{\prime}=CNOT\left(\ket{\phi}_{d}\ket{\psi}_{d}\right)\). Terminal \(s\) measures \(\ket{\psi}_{s}^{\prime}\) into classical bit \(x_{1}\). 
Terminal \(d\) measures \(\ket{\psi}_{d}^{\prime}\) into classical bit \(x_{2}\). Using classical communications, \(s\) and \(d\) compare the values of \(x_{1}\) and \(x_{2}\). When they are equal, it is concluded that the pair \(\ket{\phi}_{s}\ket{\phi}_{d}\) has been purified and corresponds to the Bell pair \(\ket{\Phi^{+}}_{sd}\). When \(x_{1}\) and \(x_{2}\) are different, it is concluded that the pair \(\ket{\phi}_{s}\ket{\phi}_{d}\) is not equal to \(\ket{\Phi^{+}}_{sd}\). Purification failed. The pair \(\ket{\phi}_{s}\ket{\phi}_{d}\) is rejected. There are four possible purification outcomes. When \(x_{1}\) is equal to \(x_{2}\), there are two cases. Either there are no errors, and both the pair \(\ket{\phi}_{s}\ket{\phi}_{d}\) and the pair \(\ket{\psi}_{s}\ket{\psi}_{d}\) at the output of gate \(E\) are in state \(\ket{\Phi^{+}}_{sd}\); or both pairs are in error, in state \(\ket{\Psi^{+}}_{sd}\). The probability of the first case is \(q^{2}\), with \(q=1-p\). Purification is successful. The probability of the second case is \(p^{2}\). The errors are undetected, and purification wrongly concludes with success. When \(x_{1}\) and \(x_{2}\) are different, there is a qubit error in one of the pairs. Either the pair \(\ket{\phi}_{s}\ket{\phi}_{d}\) or the pair \(\ket{\psi}_{s}\ket{\psi}_{d}\) is in state \(\ket{\Psi^{+}}\), but not both. The probability for the first or second pair to be in error is \(qp\). In both cases, the error is detected. Purification fails. In this setting, as a function of the single qubit inversion error probability \(p\) in a Bell pair, fidelity becomes equivalent to the likelihood of the absence of errors when purification concludes with a positive result, that is: \[f(p)=\frac{q^{2}}{q^{2}+p^{2}}\text{ with }q=1-p. \tag{7}\] An important observation is that purification does not improve the fidelity (i.e., the condition \(f(p)>q\) fails) when the input fidelity is \(0.5\) or below (see Fig. 9.2 in Ref. [34]). A desired degree of fidelity, approaching one, can be obtained with several purification rounds, see Section 5.1. An alternative to purification is error correction. Table 1 lists error correction codes by name. A distinction is made between physical qubits and logical qubits. All error correction codes use several physical qubits to represent every abstract logical qubit. A pair \((n,k)\) is associated with every correction code. Parameter \(n\) represents the number of physical qubits used to encode \(k\) logical qubits. Desurvire's analysis (Ref. [37] Sec. 24.1) for the bit-flip model shows that a \((3,1)\) error correction code improves fidelity when the bit-flip probability is smaller than 0.5. Longer codes (\(n>3\)) do not yield better results because they increase the risks of getting more errors. Error correction can be recursively applied, or concatenated, several times [30, 38]. This is further studied in Section 5.2. Figure 7 depicts the circuit associated with a single qubit error correction procedure, as used in the sequel. It represents repeater \(r\) and two terminals \(s\) and \(d\). A \((3,1)\) repetition code is used. The leftmost rectangle represents the behavior \begin{table} \begin{tabular}{l l} \hline \hline Code & Example \((n,k)\) \\ \hline Calderbank-Shor-Steane (CSS) [35, 36] & \((5|7|9,1)\) \\ Hadamard-Steane & \((7,3|4)\) \\ Repetition & \((3,1)\) \\ Shor & \((9,1)\) \\ Steane [36] & \((5|7,1)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Error correction codes. 
In the \((n,k)\) notation, the variables \(n\) and \(k\) indicate the number of physical qubits and a corresponding number of logical qubits used to encode them. Available choices for \(n\) or \(k\) are listed, separated by vertical bars. of the repeater. The input qubits \(\ket{0}_{1}\) and \(\ket{0}_{2}\) are the Bell pair selectors. Since they are both equal to \(\ket{0}\), the \(H\) gate and first \(CNOT\) gate of \(r\) create the Bell pair \(\ket{\Phi^{+}}\). The second and third \(CNOT\) gates of \(r\) map the two members of the pair to a specific codeword, for instance, \(\ket{0}\) is mapped \(\ket{000}\) and \(\ket{1}\) is mapped \(\ket{111}\). The quantum channel \(E\) can arbitrarily corrupt qubits and introduce bit-flip errors. The gate \(E\) is the tensor product of six copies of the gate defined in Equation (6), that is, \(E=\sigma^{86}\). The behavior of every terminal is identical. Terminal \(s\) (\(d\)) receives a three-qubit code word. The first and second \(CNOT\) gates map a code word to a single qubit (first line), that is, \(\ket{000}\) is mapped \(\ket{000}\) and \(\ket{111}\) is mapped \(\ket{100}\). The Toffoli gate performs single qubit error correction. Both qubits on the second and third lines need to be \(\ket{1}\) to flip the qubit's value on the target on the first line. For example, if the input to the terminal is the three qubits \(\ket{011}\), it decodes into \(\ket{011}\). The first qubit is in error. The Toffoli gate flips the first qubit into \(\ket{1}\). Entanglement swapping can be used in conjunction with error correction. Figure 8 depicts a circuit describing a qubit-flip error correction procedure combined with entanglement swapping. It represents repeater \(r\) and two terminals \(s\) and \(d\). A \((3,1)\) repetition code is used for error correction. The entanglement-swapping procedure is under the control of repeater \(r\). The repeater \(r\) entangles remote three-qubit code words, contrasting with base entanglement swapping that entangles remote individual qubits. The subscripts \(s\), \(r\), and \(d\) are used to denote qubit possession by the nodes \(s\), \(r\), and \(d\), respectively. The qubits \(\ket{0}_{s}\) and \(\ket{0}_{d}\) are ancillary. The first and second qubits of the Bell pair \(\ket{\Phi^{+}}_{sr}\) shared by terminal \(s\) and repeater \(r\) are denoted as \(\ket{\phi_{1}}_{s}\) and \(\ket{\phi_{1}}_{r}\), respectively. The first and second qubits of the Bell pair \(\ket{\Phi^{+}}_{rd}\) shared by repeater \(r\) and terminal \(d\) are denoted as \(\ket{\phi_{2}}_{r}\) and \(\ket{\phi_{2}}_{d}\), respectively. Before entanglement swapping starts, the physical qubits \(\ket{\phi_{1}}_{s}\) and \(\ket{\phi_{2}}_{d}\) are encoded into logical qubits. The second and third \(CNOT\) gates of \(s\) map qubits \(\ket{\phi_{1}}_{s}\) to three qubit code words, that is, \(\ket{0}\) is mapped to \(\ket{000}\) and \(\ket{1}\) is mapped to \(\ket{111}\). Similarly, the second and third \(CNOT\) gates of \(d\) map qubits \(\ket{\phi_{2}}_{d}\) to three qubit code words, that is, \(\ket{0}\) is mapped to \(\ket{000}\) and \(\ket{1}\) is mapped to \(\ket{111}\). The entanglement-swapping procedure introduces bit-flip errors, modeled by the \(E\) gates. Following entanglement swapping with errors, the behavior of every terminal is similar. Every terminal, \(s\) and \(d\), possesses a three-qubit block. They apply consecutively two \(CNOT\) gates and a Toffoli gate. They correct single bit-flip errors. 
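On computational-basis inputs, the two \(CNOT\) gates followed by the Toffoli gate amount to a majority vote over the three received qubits. The following minimal sketch (an illustrative classical model under that assumption, not the authors' code) samples independent bit flips with probability \(p_{X}\) on a \((3,1)\) codeword and applies the majority correction; it recovers the success probability \((1-p)^{3}+3p(1-p)^{2}\) that enters the fidelity analysis below.

```python
# Classical sketch of the (3,1) repetition code under independent bit flips.
# Assumption: computational-basis inputs, for which the CNOT/CNOT/Toffoli
# decoder acts as a majority vote on the first qubit.
import random

def encode(bit):
    return [bit, bit, bit]                       # |0> -> |000>, |1> -> |111>

def flip_channel(word, p_x):
    return [b ^ (random.random() < p_x) for b in word]

def decode_and_correct(word):
    return int(sum(word) >= 2)                   # majority vote

p_x, trials = 0.1, 200_000
ok = sum(decode_and_correct(flip_channel(encode(0), p_x)) == 0
         for _ in range(trials))
print(ok / trials)   # close to (1-p)^3 + 3*p*(1-p)^2 = 0.972 for p = 0.1
```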
Figure 7: Encoding and decoding procedure. Gate \(E\) arbitrarily corrupts qubits according to the bit-flip error model. The physical layer creates the Bell pair, while the link layer maps qubits into code words. In addition, the link layer receives the qubit triples and decodes them back to single qubits. In the absence of errors, the final state is \[\frac{\left|000\right\rangle_{s}\left|000\right\rangle_{d}+\left|111\right\rangle _{s}\left|111\right\rangle_{d}}{\sqrt{2}}.\] Projecting on the third qubit (possessed by \(s\)) and fourth qubit (possessed by \(d\)), it corresponds to the shared Bell pair \(\left|\Phi^{+}\right\rangle_{sd}\), whose first qubit is denoted as \(\left|\phi_{3}\right\rangle_{s}\) and second qubit as \(\left|\phi_{3}\right\rangle_{d}\). In case of a single-bit flip error, at \(s\), for example, on the third line, the final state is \[\frac{\left|110\right\rangle_{s}\left|000\right\rangle_{d}+\left|111\right\rangle _{s}\left|111\right\rangle_{d}}{\sqrt{2}}.\] Projecting on the third qubit (possessed by \(s\)) and fourth qubit (possessed by \(d\)), it also yields the shared Bell pair \(\left|\Phi^{+}\right\rangle_{sd}\). Double or triple bit-flip errors at node \(s\) or node \(d\) are not corrected. In the next lemma, the fidelity of a Bell pair corrected with a \((3,1)\) repetition code is characterized. Lemma 4.1: _The fidelity of a Bell pair error-corrected with the \((3,1)\) repetition code is \(\mathcal{F}(p)=(1-p)^{3}+3p(1-p)^{2}\)._ Proof.: Considering that a Bell pair involves two qubits, the proof follows from an analysis in Ref. [37]. It states that given the bit-flip error probability \(p\), where \(p\) is smaller than \(1/2\), the fidelity of a single qubit corrected with a \((3,1)\) repetition code is \(F(p)=\sqrt{(1-p)^{3}+3p(1-p)^{2}}\). Hence, the fidelity of the Bell pair is \(\mathcal{F}(p)=F(p)^{2}=(1-p)^{3}+3p(1-p)^{2}\). Error correction can also be iterated. Figure 9 shows a two-concatenation scenario, focusing on the qubit \(\left|\phi_{1}\right\rangle_{s}\) of terminal \(s\). The \((3,1)\) repetition error correction is used. The qubit \(\left|\phi_{1}\right\rangle_{s}\) is mapped to a three-qubit codeword, the first, fourth, and seventh lines from the bottom. Each qubit of this codeword is recursively mapped to a three-qubit low-level codeword. Following the swapping procedure, gate \(E\) models the occurence of by flip-errors. Then, every low-level three-qubit block is mapped to a three-qubit block, which is in turn mapped to the single qubit \(\left|\phi_{3}\right\rangle_{s}\). Using this model, error correction can be applied several times recursively to improve the quality of a qubit involved in an entanglement swapping procedure on every side. This is analyzed in detail in the following section. Figure 8: Entanglement swapping procedure with bit-flip errors and \((3,1)\) repetition error correction. Gate \(E\) arbitrarily flips qubits, \(E=\sigma^{\otimes 3}\). ## 5. Analysis of repeated purification and concatenated error correction The fidelity of a Bell pair may be increased with repeated purification and concatenated error correction. This capability is analyzed in detail in this section. The precise requirements for repeated purification or concatenated error correction are identified and compared against each other. ### Purification Analysis Let \(F\) be the initial fidelity of a Bell pair, with \(F>1/2\). 
Using repeated purification, let us consider the fidelity sequence \(F_{n}\) defined recursively as follows: \[F_{0}=F\text{ and }F_{n}=\frac{F_{n-1}^{2}}{F_{n-1}^{2}+(1-F_{n-1})^{2}}\text{ for }n=1,2,3,\ldots \tag{8}\] Observe that the sequence \(F_{n}\) is increasing. Therefore the limit \(\lambda:=\lim_{n\rightarrow\infty}F_{n}\) exists. Passing to the limit in the righthand side of Equation (8), we conclude that \(\lambda=1\). This follows from the fact that \(\lambda=\frac{\lambda^{2}}{\lambda^{2}+(1-\lambda)^{2}}\), which gives the solution \(\lambda=1\). Therefore taking into account that \(F_{n}\to 1\) as \(n\rightarrow\infty\), we can make repeated purifications to the resulting state (9) To get an idea of the costs required to obtain a certain fidelity, the speed of convergence of repeated purification is investigated. The main question of interest is the following: Given a starting value \(F_{0}>1/2\), how many purification repetitions are needed to obtain a certain level of fidelity? Namely, given an arbitrarily small \(\epsilon>0\) for what value of \(n\) can we claim \(F_{n}>1-\epsilon\)? The following analysis of our main question requires some claims we state and prove below as lemmas. Consider the operation \(F\to F^{\prime}:=\frac{F^{2}}{F^{2}+(1-F)^{2}}\). We look at the ratio of the new value \(F^{\prime}\) versus the old \(F\), namely \(\frac{F}{F}=\frac{\frac{F^{2}}{F^{2}+(1-F)^{2}}}{F}=\frac{F}{F^{2}+(1-F)^{2}}\) and prove the following lemma. Figure 9. Entanglement swapping procedure with concatenated \((3,1)\) repetition error correction. Only the circuit at the location of terminal \(s\) is shown. A similar circuit for terminal \(d\) can be drawn. Gate \(E\) arbitrarily flips qubits, \(E=\sigma^{\otimes 9}\). **Lemma 5.1**: _If \(1/2<F<1/\sqrt{2}\) then_ \[\frac{F^{\prime}}{F}>1+\left(F-\frac{1}{2}\right)=\frac{1}{2}+F. \tag{10}\] Multiply Inequality (10) by two to obtain \(\frac{2F}{F^{2}+(1-F)^{2}}>1+2F.\) Then multiply out to derive the equivalent form \(2F>(F^{2}+(1-F)^{2})(1+2F).\) When we expand and simplify the last inequality, it turns out to be equivalent to \((2F-1)(2F^{2}-1)<0.\) Since by assumption \(1/2<F,\) we conclude that the last inequality is equivalent to \(F<1/\sqrt{2},\) which is valid from the hypothesis of the lemma. The next observation is that the purification operation is monotone increasing. **Lemma 5.2**: _The purification function is monotone increasing, namely if \(F<G\) then \(\frac{F}{F^{2}+(1-F)^{2}}<\frac{G}{G^{2}+(1-G)^{2}}.\)_ The proof is straightforward. **Lemma 5.3**: _If \(F_{0}<1/\sqrt{2}\) then in at most \(n+1\) iterations of the purification operation, where_ \[n=\left\lceil\frac{-\log_{2}(F_{0}\sqrt{2})}{\log_{2}(F_{0}+1/2)}\right\rceil \tag{11}\] _we have that \(F_{n}\geq 1/\sqrt{2}\)_ Repeat purification as in \(F_{0}=F\) and \(F_{n}=\frac{F_{n-1}^{2}}{F_{n-1}^{2}+(1-F_{n-1})^{2}}\) and use Lemma 5.1 to conclude that if \(F_{n}<1/\sqrt{2}\) then \[\frac{F_{n}}{F_{0}}=\frac{F_{n}}{F_{n-1}}\cdot\frac{F_{n-1}}{F_{n-2}}\cdots \frac{F_{1}}{F_{0}}>\prod_{i=0}^{n-1}\left(F_{i}+\frac{1}{2}\right)\geq\left(F _{0}+\frac{1}{2}\right)^{n}.\] We conclude that \[F_{n}\geq\left(F_{0}+\frac{1}{2}\right)^{n}F_{0}.\] It follows that the right-hand side above can be made greater than \(1/\sqrt{2}\) for \(n\geq\frac{-\log_{2}(F_{0}\sqrt{2})}{\log_{2}(F_{0}+1/2)},\) which implies that \(F_{n}\geq 1/\sqrt{2}.\) We conclude by proving the following theorem. 
**Theorem 5.4**: _For any initial purification value \(F_{0}\) such that \(1/2<F_{0}<1/\sqrt{2}\) and any \(0<\epsilon<1\) arbitrarily small, in at most \(m\leq\left\lceil\frac{-\log_{2}(F_{0}\sqrt{2})}{\log_{2}(F_{0}+1/2)}\right\rceil +\log_{2}\log_{2}(1/\epsilon)\) repetitions of the purification operation we will have that \(F_{m}\geq 1-\epsilon.\) Moreover, if already \(F_{0}\geq 2/3\) then \(m\leq\log_{2}\log_{2}(1/\epsilon).\)_ From the discussion above, we see that starting from any initial value \(F_{0}\) such that \(1/2<F_{0}<1/\sqrt{2}\) we can repeat the purification operation at \(n\) times, where \(n\) is given in Equation (11) so that \(F_{n+1}\geq 1/\sqrt{2}.\) Next, we indicate how many additional steps are needed to obtain accuracy. First of all, observe that \(1/\sqrt{2}>2/3.\) A simple calculation using the definition of the purification operation shows that \[\text{if }F=\frac{k}{k+1}\text{ then }F^{\prime}=\frac{k^{2}}{k^{2}+1}. \tag{12}\] Because of Lemma 5.2, we may assume without loss of generality that \(F_{n+1}=2/3.\) Observe that \(F_{n+1}=2/3=\frac{2^{2^{0}}}{2^{2^{0}}+1}.\) By induction, if we assume that \(F_{n+k}=\frac{2^{2^{k-1}}}{2^{2^{k-1}}+1}\) then using the previously proved Assertion (12) we have that \(F_{n+k+1}=\frac{2^{2^{k}}}{2^{2^{k}}+1}\). It follows that for any required accuracy \(\epsilon>0\), we have that \(F_{n+k+1}>1-\epsilon\), provided that \(k>\log_{2}\log_{2}(1/\epsilon)\). Next, Theorem 5 characterizes the memory cost of a one-time execution of purification and it provides an upper bound on the number of qubits required inside each repeater to support repeated execution of purification. Theorem 5 (Repeated Purification).: _For a path of length \(\ell\) (a positive integer), a repeater requires at most \(2^{n+\ell-1}\) qubits of memory to complete \(n\) purification repetitions._ Proof.: The proof is by induction on the path's length \(\ell\). Firstly, let us assume that \(n\) is equal to one, i.e., one-time execution of purification. _Base Case:_ (\(\ell=1\)). The path consists of a source \(s\) and a destination \(d\), connected by a link. The nodes \(s\) and \(d\) are terminals or repeaters. Using two Bell pairs shared by \(s\) and \(d\), established with direct communications and purification, their fidelity is improved into a single Bell pair, as shown in Figure 6. Every endpoint, \(s\) and \(d\), uses two qubits, which is equal to \(2^{\ell}\) qubits, for \(\ell=1\). _Inductive Step:_ (\(\ell>1\)). Let \(\ell_{1}\) and \(\ell_{2}\) be the number of links to the left and right of repeater \(r\) on a path of length \(\ell\) between nodes \(s\) and \(d\), with \(\ell=\ell_{1}+\ell_{2}\). Clearly, both \(\ell_{1}\) and \(\ell_{2}\) are equal to or greater than one. Let us assume that the nodes \(s\) and \(r\) (respectively, \(r\) and \(d\)) share two Bell pairs. Using these four Bell pairs and two entanglement swapping operations, the repeater \(r\) establishes two Bell pairs between \(s\) and \(d\). Purification improves their fidelity into a single Bell pair shared between \(s\) and \(d\). To create the two Bell pairs between \(s\) and \(r\) (respectively, \(r\), \(d\)), repeater \(r\) requires \(2\cdot 2^{\ell_{1}}\) (respectively, \(2\cdot 2^{\ell_{2}}\)) qubits, for a total of \(2\cdot\left(2^{\ell_{1}}+2^{\ell_{2}}\right)\). Because \(\ell_{1}\), \(\ell_{2}\) are greater than or equal to one but lower than \(\ell\), it is equal to or lower than \(2\cdot\left(2^{\ell-1}+2^{\ell-1}\right)=2^{\ell}\) qubits. 
This proves the theorem when \(n\) is equal to one. Observe that the general statement of the theorem regarding the number \(n\) of repetitions follows immediately by applying \(n\) iterations of the above argument. Since every additional repetition of the procedure by repeater \(r\) multiplies the number of qubits by two, we have that \(n\) repetitions need \(2^{n-1}\cdot 2^{\ell}=2^{n+\ell-1}\) qubits. Definition 6 (Purification procedure).: Involving three nodes, where the middle one is a repeater \(r\) while the two others \(s\) and \(d\) are terminals of repeaters, the purification procedure consists of seven operations, namely, the generation of four Bell pairs (two instances between \(s\) to \(r\) and two instances between \(r\) to \(d\)), two entanglement swapping operations and one purification cycle under the control of \(r\). Corollary 7 ().: \(n\) _repetitions of purification by a repeater \(r\) for a path of length \(\ell\) requires at most \(7n(\ell-1)\) operations._ Proof.: It follows from the definition of nested entanglement, the proof of Theorem 5, as well as Definition 5. ### Error Correction Analysis As demonstrated in Lemma 4, the efficacy of error correction can also be captured using the concept of fidelity. Fidelity is used to quantifying the quality of Bell pairs established by a quantum system. The fidelity of a Bell pair error-corrected with the \((3,1)\) repetition code is \(\mathcal{F}(p)=(1-p)^{3}+3p(1-p)^{2}\), where \(p\) is the probability of a qubit transformation from \(\left|0\right\rangle\) to \(\left|1\right\rangle\), or vice versa Definition 8 (\((3,1)\) repetition error correction concatenation for a single qubit).: Building on Lemma 4, let us define the fidelity of concatenated error corrections as \[F_{0}=F(p)\text{ and }F_{n}=\sqrt{F_{n-1}^{3}+3(1-F_{n-1})F_{n-1}^{2}}\text{ for the number of concatenations }n=1,2,3\ldots. \tag{13}\] Moreover, let us define the sequence \(\{F_{n}:n\geq 1\}\) with \(F_{0}=1/2\). **Definition 5.9** (\((3,1)\) repetition error correction for a Bell pair): _Concatenated \(n\) times, for the \((3,1)\) repetition error correction, let us define the fidelity of a Bell pair as_ \[\mathcal{F}_{n}=F_{n}^{2}=F_{n-1}^{3}+3(1-F_{n-1})F_{n-1}^{2}. \tag{14}\] **Lemma 5.10**: _The sequence \(\{F_{n}:n\geq 1\}\) is monotone non-decreasing, and its limit as \(n\) goes to infinity equals one._ Consider the function \(f(x)=\sqrt{x^{3}+3(1-x)x^{2}}\). Clearly, \(f(x)=x\sqrt{3-2x}\). It is straightforward to verify that \(f(x)\leq 1\), for all \(0\leq x\leq 1\). By definition of \(F_{n}\), we have that \[F_{n}=F_{n-1}\sqrt{F_{n-1}+3(1-F_{n-1})}=F_{n-1}\sqrt{3-2F_{n-1}},\] for \(n\geq 1\). However, \(3-2x\geq 1\), for \(0\leq x\leq 1\). Using this and the fact that \(F(p)=p\sqrt{3-2p}\) we see that \(F(p)\geq p\), for all \(0\leq p\leq 1\). Therefore \(F_{n}\geq F_{n-1}\), for all \(n\geq 2\). It follows that the sequence \(\{F_{n}:n\geq 0\}\) is monotone non-decreasing. Consequently, its limit \(f:=\lim_{n\to+\infty}F_{n}\) exists. It is now shown that the limit \(f\) must equal one. Indeed, consider the defining equation of \(F_{n}\), namely \(F_{n}=F_{n-1}\sqrt{3-2F_{n-1}}\). In this equation, passing to the limit as \(n\to+\infty\) we see that \(f=f\sqrt{3-2f}\). The only solution to this equation is \(f=1\). Now, let us estimate the convergence speed of the sequence \(F_{n}\). First, consider the formula \(F_{n}=F_{n-1}\sqrt{3-2F_{n-1}}\). Let \(F_{n-1}=1-\epsilon_{n-1}\), for some \(\epsilon_{n-1}>0\). 
Observe that \[F_{n}=(1-\epsilon_{n-1})\sqrt{3-2(1-\epsilon_{n-1})}=(1-\epsilon_{n-1})\sqrt{1 +2\epsilon_{n-1}}.\] It can be verified that the inequality \[(1-\epsilon_{n-1})\sqrt{1+2\epsilon_{n-1}}>1-\epsilon_{n-1}/2\] is valid as long as \(\epsilon_{n-1}<\frac{6-\sqrt{20}}{8}\approx 0.191\ldots\) (to see this claim, multiply out and use the resulting quadratic in the variable \(\epsilon_{n-1}\)). Therefore, we can conclude that \(F_{n}=1-\epsilon_{n}\), where \(\epsilon_{n}\leq\epsilon_{n-1}/2\). As depicted in Figure 10, when starting with fidelity \(F_{0}>1/2\), after four iterations we can attain \(\epsilon_{n_{0}}\leq 0.1<\frac{6-\sqrt{20}}{8}\). Therefore, convergence of the fidelity \(F_{n}\) to one is exponentially fast, namely \(\epsilon_{n}\leq\epsilon_{n_{0}}/2^{n-n_{0}}\), where \(n>n_{0}\). To conclude, we have the following theorem. **Theorem 5.11**: _If \(1/2<F_{0}\leq 1\) then \((3,1)\) repetition error concatenation fidelity \(F_{n}\geq 1-\epsilon\) can be reached in at most \(\lceil\log_{2}(c/\epsilon)\rceil\) steps, where \(c>0\) is constant._ **Theorem 5.12** (Concatenated error correction): _Assume that a one-time execution of error correction requires \(m\) ancillary qubits at every subpath's endpoint. For a path of length \(\ell\), a repeater requires at most \(m^{n+\ell-1}\) qubits of memory to complete \(n\) error correction concatenations, \(\ell,m\) and \(n\) are positive integers, \(\ell,n\geq 1\) and \(m\geq 3\)._ The proof is by induction on the path's length \(\ell\). Firstly, let us assume that \(n\) is equal to one, i.e., one-time execution of error correction. _Base Case:_ (\(\ell=1\)). The path consists of a source \(s\) and a destination \(d\), connected by a link. The nodes \(s\) and \(d\) are terminals or repeaters. Using direct communications, a \(m\) qubit block possessed by \(s\) entangled with a \(m\) qubit block possessed by \(d\) are established, as depicted in Figure 7. Using error correction, they are decoded into a single qubit pair shared between \(s\) and \(d\). The endpoints \(s\) and \(d\) require \(m\) qubits, which is equal to \(m^{\ell}\), for \(\ell=1\) and \(m\geq 3\). _Inductive Step:_ (\(\ell>1\)). Let \(\ell_{1}\) and \(\ell_{2}\) be the number of links to the left and right of repeater \(r\) on a path of length \(\ell\) between nodes \(s\) and \(d\), with \(\ell=\ell_{1}+\ell_{2}\). Both \(\ell_{1}\) and \(\ell_{2}\) are greater than or equal to one. The nodes \(s\) and \(d\) can be either terminals or repeaters. Let us assume the nodes \(s\) and \(r\) (respectively, \(r\) and \(d\)) share a Bell pair. As depicted in Figure 8, using entanglement swapping the repeater \(r\) establishes entanglement between a \(m\) qubit block possessed by \(s\) and a \(m\) qubit block possessed by \(d\). Using error correction, they are decoded into a single qubit pair shared between \(s\) and \(d\). The repeater \(r\) uses one qubit in each Bell pair, i.e., two qubits. Every node \(s\) and \(d\) used \(m\) ancillary qubits. To create the Bell pair between \(s\) and \(r\) (respectively, \(r\) and \(d\)) it requires at most \(m^{\ell_{1}}\) (respectively, \(m^{\ell_{2}}\)) qubits, for a total of \(m^{\ell_{1}}+m^{\ell_{2}}\) qubits. Because \(\ell_{1},\ell_{2}\) are greater than or equal to one but lower than \(\ell\), it is lower than or equal to \(m^{\ell-1}+m^{\ell-1}\). Which is lower than or equal to \(m^{\ell}\) qubits. This proves the theorem when \(n\) is equal to one. 
Observe that the general statement of the theorem regarding the number \(n\) of concatenations (as depicted in Figure 9) follows immediately by applying \(n\) iterations of the above argument. Since every additional concatenation of the procedure by repeater \(r\) multiplies the number of qubits by \(m\), we have that \(n\) concatenations would need \(m^{n-1}\cdot m^{\ell}\), or \(m^{n+\ell-1}\) qubits. Definition 5.13 (Error correction procedure).: For a \((3,1)\) error correction code, involving three nodes, where the middle one is a repeater \(r\) while the two others \(s\) and \(d\) are terminals or repeaters, the error correction procedure consists of four operations, namely, the generation of two Bell pairs (between \(s\) and \(r\) and between \(r\) and \(d\)), one entanglement swapping and one error correction operation performed by \(s\) and \(d\), at their location. Corollary 5.14.: _Concatenation of error correction by a repeater for a path of length \(\ell\) requires at most \(4(\ell-1)\) operations._ Notice, from Corollary 5.14, that concatenated error correction, compared to repeated purification, only introduces a constant number of ancillary qubits per node, independently from the number of iterations, i.e., the number of operations is proportional to the path length but independent of the number of concatenations. ## 6. Analytical and simulation results Figure 10 (a) shows the evolution of the fidelity sequence \(F_{n}\) versus the number of repeated purifications \(n\), for initial fidelity \(F_{0}\) values \(0.51,0.53,0.55,0.57\) and \(0.59\) (Equation 8). The plot shows that \(F_{n}\) rapidly tends to value one, with initial fidelity greater than \(0.5\). Figure 10. Fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)) vs. number of repetitions or concatenations (\(n\)), for initial fidelity values \(0.51,0.53,0.55,0.57\), and \(0.59\). We can observe that \(\mathcal{F}_{n}\) tends to value one faster than \(F_{n}\), for all initial fidelity values \(F_{0}\). In Figure 10 (b), we have the same type of graph but for concatenated error correction, with the \((3,1)\) repetition code (Equation 14). \(\mathcal{F}_{n}\) tends to value one faster than \(F_{n}\). Figures 11 (a,b) plot the value of \(n\), i.e., the number of repetitions and concatenations required to achieve a given fidelity, \(F_{n}\) or \(\mathcal{F}_{n}\). The initial fidelity (\(F_{0}\)) is \(0.51,0.75,\) and \(0.9\). There are data points for repeated purification (a) and concatenated error correction with the \((3,1)\) repetition code (b). In the \(0.51\) case, with low initial fidelity, error correction requires fewer concatenations than purification repetitions to achieve a given fidelity. They are almost the same in the \(0.75\) case. They are the same in the \(0.9\) case. Figures 11 (c,d) plot the numbers of qubits needed to reach a given fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)). The initial fidelity (\(F_{0}\)) is \(0.51\). The path length (\(\ell\)) is 4, 6 or 8 hops. Repeated purification achieves near 100% fidelity using fewer physical qubits than concatenated error correction. Figures 11 (e,f) plot the numbers of operations needed to reach a given fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)). The initial fidelity (\(F_{0}\)) is \(0.51\). The path length (\(\ell\)) is 4, 6 or 8 hops. Error correction achieves near 100% fidelity using fewer operations than repeated purification. 
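The iteration counts behind these plots can be reproduced directly from the two recursions. The sketch below (illustrative starting values and target, not the paper's exact data) iterates Equation (8) for repeated purification and Equations (13)-(14) for the concatenated \((3,1)\) code, and counts the rounds needed to exceed a target fidelity of \(0.999\).

```python
# Sketch of the two fidelity iterations compared in Figs. 10-11.
# Starting values and the 0.999 target are illustrative assumptions.

def purify(F):            # Eq. (8): one purification round on a Bell pair
    return F**2 / (F**2 + (1 - F)**2)

def ec_level(F):          # Eq. (13): one (3,1) concatenation level, single qubit
    return F * (3 - 2 * F) ** 0.5

def rounds(F0, step, done):
    F, n = F0, 0
    while not done(F):
        F, n = step(F), n + 1
    return n

target = 0.999
for F0 in (0.51, 0.75, 0.90):
    n_pur = rounds(F0, purify, lambda F: F >= target)        # repetitions, F_n
    n_ecc = rounds(F0, ec_level, lambda F: F**2 >= target)   # concatenations, F_n^2 as in Eq. (14)
    print(F0, n_pur, n_ecc)
```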
Let us now consider a \(k\) by \(k\) sparse grid topology with \(k\) terminals at the top and \(k\) terminals at the bottom rows (see Figure 12). The goal is to determine the memory requirements and operational complexity of the most congested repeaters in the network. We do not assume full connectivity among terminals. Instead, with probability \(p\), a terminal \(u\) at the top of the grid becomes active and intends to communicate to some terminal at the bottom row of the grid. Therefore in expectation, \(pk\) such terminals become active. Each terminal \(u\) at the top selects a terminal \(b(u)\) among the terminals in the bottom row. For each terminal \(u\) at the top, the terminal \(b(u)\) is chosen randomly among the \(k\) terminals at the bottom. Moreover, the choice of any two terminals \(u,v\) are independent of each other. For each terminal \(u\), consider a path \(P_{u}\) of repeaters connecting terminals \(u\) and \(b(u)\). The path \(P_{u}\) may be chosen by any standard procedure considering the grid topology of active repeaters using Dijkstra's algorithm. It does not have to have optimal length. We are interested in the expected number of crossings (see Figure 12). The actual number of crossings may well depend on the paths \(\{P_{u}:1\leq u\leq k\}\) selected. The following argument gives a lower bound on the expected number of crossings. By the previous discussion, the expected number of active terminals at the top row is \(pk\). We say that the order of a pair of terminal \(\{u,v\}\) at the top is reversed if \(u<v\) and \(b(u)\geq b(v)\). It is clear that, regardless of the choice of paths \(P_{u},P_{v}\), there is a crossing between them provided there is a reversal, i.e., \(u<v\) and \(b(u)\geq b(v)\). Note that at every crossing, between \(P_{u},P_{v}\), at least one repeater (possibly more) must serve both paths. Observe that if \(u<v\) then we have that \[\Pr[P_{u}\text{ crosses }P_{v}] \geq\sum_{i\geq j}\Pr[b(u)=i\ \&\ b(v)=j]\] \[=\sum_{i\geq j}\Pr[b(u)=i]\cdot\Pr[b(v)=i]\] \[=\sum_{i\geq j}\frac{1}{k^{2}}=\frac{k^{2}+k}{2k^{2}}=\frac{1}{2 }+\frac{1}{2k}.\] If we assume that the random variable \(I_{uv}\) indicates that the path \(P_{u}\) crosses the path \(P_{v}\), then it follows that the expectation of the random variable \(\mathcal{C}\) which counts the total number of crossings must satisfy \[E[\mathcal{C}]\geq(pk)^{2}\left(\frac{1}{2}+\frac{1}{2k}\right). \tag{15}\] As observed above, every order reversal between terminals at the top row creates a path crossing. However, it is also clear that the total number of path crossings depends on the grid topology of the repeaters and on how the paths are chosen and may well exceed the quantity in the right-hand side of Inequality (15). Figure 11: (a,b) Number of repetitions and concatenations (\(n\)) versus fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)). (c,d) Number of required qubits versus achieved fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)). (e,f) Number of required operations versus achieved fidelity (\(F_{n}\) or \(\mathcal{F}_{n}\)). Based on the aforementioned setting, we conduct Monte Carlo simulations using the NetworkX python library. The simulation code is available online. In each simulation, congestion is computed as the number of paths crossing through the most visited repeater. 
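A minimal version of one such simulation run is sketched below (an illustration under simplifying assumptions, not the authors' published code): the repeaters form a plain \(k\times k\) grid, one terminal is attached above each top-row repeater and below each bottom-row repeater, each top terminal activates with probability \(p\), its partner \(b(u)\) is drawn uniformly at random, and paths are routed with a shortest-path (Dijkstra-style) procedure.

```python
# Illustrative Monte Carlo sketch of the congestion experiment (not the
# authors' published simulation code). Grid wiring and routing are assumptions.
import random
import networkx as nx

def congestion(k, p=0.5):
    G = nx.grid_2d_graph(k, k)                                    # repeater backbone
    G.add_edges_from([(("T", j), (0, j)) for j in range(k)])      # top terminals
    G.add_edges_from([(("B", j), (k - 1, j)) for j in range(k)])  # bottom terminals
    visits = {v: 0 for v in G if not isinstance(v[0], str)}       # repeaters only
    for j in range(k):
        if random.random() < p:                  # top terminal u becomes active
            dst = ("B", random.randrange(k))     # b(u) chosen uniformly at random
            for v in nx.shortest_path(G, ("T", j), dst)[1:-1]:
                visits[v] += 1                   # interior vertices are repeaters
    return max(visits.values())                  # congestion of the busiest repeater

print([congestion(10) for _ in range(5)])
```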
The random activation of terminals follows the strategy presented in Figure 12, using \(\frac{1}{2}\) as probability \(p\) for a terminal at the top of the grid to become active and communicate to some terminal at the bottom row of the grid. The random arrangement of repeaters and terminals follows the strategy and constraints defined in Section 3 (i.e., terminal nodes are not adjacent to each other in the grid and every terminal node is adjacent to at least one repeater). Figures 13 and 14 show the simulation Results. Every Boxplot corresponds to fifty independent run executions per scenario, increasing the size of the sparse \(k\) by \(k\) grid, from \(k=10\) to \(k=20\). We plot the number of required physical qubits and the number of required operations of the most congested repeater (i.e., the one crossed by the higher number of paths in each simulation run). Consistently, we can observe that concatenated error correction, versus repeated purification, presents lower operational complexity than repeated purification to reach high fidelity, at the expense of increasing the number of required physical qubits. ## 7. Conclusion In a quantum networking environment, we have explored the memory resource requirements analytically and numerically to attain a certain level of fidelity. We have also investigated repeated purification and concatenated error correction in this setting. We have observed that concatenated error correction can achieve a given degree of fidelity with fewer iterations than repeated purification, at the cost of considerably increasing the number of required physical qubits. At the same time, Figure 12. A sparse grid topology (not depicted) with terminals at the top and bottom and repeaters in between. Repeaters are depicted with \(\square\) and terminals with \(\Circle\). Terminals at the top and bottom are numbered \(1,2,\ldots,n\), respectively. A possible arrangement of three paths \(P_{u},P_{u},P_{w}\) of active terminals \(u<v<w\) in a grid topology of repeaters such that \(b(v)\geq b(w)\geq b(u)\). we have also observed that the cost in number of operations is higher in the case of repeated purification, compared to concatenated error correction. This results in a comparable amount of resources for both approaches. As perspectives for future work, one may want to analyze the requirements when combining both techniques simultaneously (concatenated error correction and repeated purification), to estimate the best work memory trade-off while obtaining the highest possible degree of fidelity. Acknowledgements -- We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
2310.03793
Mobile Impurity in a Two-Leg Bosonic Ladder
We study the dynamics of a mobile impurity in a two-leg bosonic ladder. The impurity moves both along and across the legs and interacts with a bath of interacting bosonic particles present in the ladder. We use both analytical (Tomonaga-Luttinger liquid - TLL) and numerical (Density Matrix Renormalization Group - DMRG) methods to compute the Green's function of the impurity. We find that for a small impurity-bath interaction, the bonding mode of the impurity effectively couples only to the gapless mode of the bath, while the anti-bonding mode of the impurity couples to both the gapped and gapless modes of the bath. We compute the time dependence of the Green's function of the impurity, for an impurity created either in the anti-bonding or bonding mode with a given momentum. The latter case leads to a decay as a power-law below a critical momentum and exponential above, while the former case always decays exponentially. We compare the DMRG results with analytical results using the linked cluster expansion and find a good agreement. In addition we use DMRG to extract the lifetime of the quasi-particle, when the Green's function decays exponentially. We also treat the case of an infinite bath-impurity coupling for which both the bonding and antibonding modes are systematically affected. For this case the impurity Green's function in the bonding mode decays as a power-law at zero momentum. The corresponding exponent increases with increasing transverse tunneling of the impurity. We compare our results with other impurity problems for which the motion of either the impurity or the bath is limited to a single chain. Finally we comment on the consequences of our findings for experiments with ultracold gases.
Naushad Ahmad Kamar, Adrian Kantian, Thierry Giamarchi
2023-10-05T18:00:02Z
http://arxiv.org/abs/2310.03793v1
# Mobile Impurity in a Two-Leg Bosonic Ladder ###### Abstract We study the dynamics of a mobile impurity in a two-leg bosonic ladder. The impurity moves both along and across the legs and interacts with a bath of interacting bosonic particles present in the ladder. We use both analytical (Tomonaga-Luttinger liquid - TLL) and numerical (Density Matrix Renormalization Group - DMRG) methods to compute the Green's function of the impurity. We find that for a small impurity-bath interaction, the bonding mode of the impurity effectively couples only to the gapless mode of the bath while the anti-bonding mode of the impurity couples to both gapped and gapless mode of the bath. We compute the time dependence of the Green's function of the impurity, for impurity created either in the anti-bonding or bonding mode with a given momentum. The later case leads to a decay as a power-law below a critical momentum and exponential above, while the former case always decays exponentially. We compare the DMRG results with analytical results using the linked cluster expansion and find a good agreement. In addition we use DMRG to extract the lifetime of the quasi-particle, when the Green's function decays exponentially. We also treat the case of an infinite bath-impurity coupling for which both the bonding and antibonding modes are systematically affected. For this case the impurity Green's function in the bonding mode decays as a power-law at zero momentum.The corresponding exponent increases with increasing transverse-tunneling of the impurity. We compare our results with the other impurity problems for which the motion of either the impurity or the bath is limited to a single chain. Finally we comments on the consequences of our findings for experiments with the ultracold gasses. ## I Introduction In a high dimensional bath, a mobile impurity behaves as a free particle, with a renormalized mass and lifetime, this description of the impurity is known as quasi-particle (QP) [1; 2; 3; 4]. The QP description is successfully applied to many problems from condensed matter to ultracold gases [5; 6; 7]. One classic example is the motion of an electron in the bath of phonons where the mass of the electron renormalizes and the electron behaves like a QP, known as polaron. However it is known that several mechanisms can lead to a very different physics than simple quasiparticles. This is the case in the celebrated X-ray edge problem where the appearance of a static impurity induces an infinite number of excitations in the bath leading to the famous Anderson orthogonality catastrophe [8; 9]. Similar physics occurs also in the Caldeira-Leggett problem where coupling to a bath can impede the tunnelling of a macroscopic quantum variable [3; 4]. Recently similar phenomena were shown to drastically affect the motion of impurities moving in a one dimensional bath of quantum interacting particles, leading to a motion quite different from a QP with a renormalized mass, namely to subdiffusion and a Green's function of the impurity exhibiting a powerlaw decay for a wide range of momenta [10; 11; 12; 13; 14; 15; 16]. A part of this physics is due to the fact that in one dimension (1D) the recoil due to the motion of the impurity does not totally suppress Anderson orthogonality catastrophe, contrarily to what happens in higher dimensions. Thus one of the questions of interest is how a mobile impurity will behave in a bath which has both transverse and horizontal extensions. 
This is a first step towards studying the dimensional crossover in the impurity dynamics. To answer these questions, the dynamics of a mobile impurity in a ladder bath has recently been investigated for a system in which the impurity moves only along the legs of the ladder [17] and in two decoupled chains where an impurity tunnels in both longitudinal and transverse directions [18; 19]. For such systems, the impurity exhibits a similar class of dynamics as that of the 1D bath, but the power-law exponent becomes smaller in comparison to the one-dimensional bath. The study of a mobile impurity in a quantum bath is not limited to theory: experiments on ultracold gases [20; 21; 22; 23] provide in particular a platform to investigate such problems with large flexibility and control over both the impurity and the bath. In this work, we address the dynamics of a mobile impurity in a two-leg bosonic ladder with the impurity being able to tunnel between the two legs. Compared to the single chain case, or to the case for which the impurity was restricted to a 1D motion, we can expect that in the present case the recoil could have more drastic effects than for the pure 1D motion of the impurity [24]. Another way to study such a problem is to consider the leg index as some "spin" index both for the bath and the impurity. In such a description the present problem would be a generalization of the Kondo problem (as opposed to the X-ray edge one with a featureless impurity) but with the possibility of motion of the impurity. This poses the question of the subtle coupling of the internal and center of mass degrees of freedom. We study this problem using the numerical method time-dependent density matrix renormalization group (t-DMRG) [25; 26] and analytical methods such as Tomonaga-Luttinger liquid (TLL) [9; 27] and linked cluster expansion (LCE) [28]. The t-DMRG allows us to access the impurity dynamics from weak to strong interactions with the bath, while LCE describes the impurity dynamics in the weak coupling limit. We compare our results with previous studies on the impurity dynamics in the one-dimensional bath [29] and the ladder bath [17]. The plan of the paper is as follows: In Sec. II, we describe the model on the lattice and in the continuum limit, its bosonization representation, and various observables. In Sec. III, we describe the analytical expression of the observables by using bosonization and the LCE. In Sec. IV, we present the numerical t-DMRG [25; 26] analysis of this problem, and the results for the Green's function of the impurity. Sec. V discusses these results, both in connection with the results for the one-dimensional motion of an impurity in a ladder and in view of possible extensions. Finally, Sec. VI concludes the paper and presents some perspectives in connection with experiments. The analytical expression of the Green's function is given in Appendix A. ## II Mobile impurity in a two leg bosonic ladder ### Model We consider a mobile impurity moving in a two-leg bosonic ladder in both horizontal and transverse directions. The model we consider is depicted in Fig. 1. 
The full Hamiltonian is given by \[H=H_{K}+H_{\text{lad}}+U_{\text{1imp}}\sum_{j=1}^{L}\rho_{1,j} \rho_{\text{imp},1,j} \tag{1}\] \[+U_{\text{2imp}}\sum_{j=1}^{L}\rho_{2,j}\rho_{\text{imp},2,j},\] where \(U_{\text{1imp}}\), \(U_{\text{2imp}}\), and \(L\) are the interactions strength between the particles in the leg 1, in the leg 2 with the impurity, and the ladder size along longitudinal direction, respectively. We consider \(U_{\text{1imp}}=U_{\text{2imp}}=U_{b}\). The impurity kinetic energy is given by the tight-binding Hamiltonian \[H_{K}=-t_{\text{imp}}\sum_{j=1}^{L-1}(d_{1,j+1}^{\dagger}d_{1,j} +d_{2,j+1}^{\dagger}d_{2,j}+\text{h.c.})\\ -t_{\perp\text{imp}}\sum_{j=1}^{L}(d_{1,j}^{\dagger}d_{2,j}+\text {h.c.}). \tag{2}\] We diagonalize \(H_{K}\) by using symmetric and anti-symmetric combinations of \(d_{1,j}\) and \(d_{2,j}\), and \(H_{K}\) can be re-expressed as \[H_{\text{imp}}=\sum_{q}\epsilon_{s}(q)d_{s,q}^{\dagger}d_{s,q}+\epsilon_{a}(q )d_{a,q}^{\dagger}d_{a,q} \tag{3}\] where \(d_{\gamma,j}\) (\(d_{\gamma,j}^{\dagger}\)) are the destruction (creation) operators of the impurity on site \(j\) in leg \(\gamma=1,2\), \(d_{s,q}=\frac{d_{1,q}+d_{2,q}}{\sqrt{2}},d_{a,q}=\frac{d_{1,q}-d_{2,q}}{\sqrt{ 2}}\), \(\epsilon_{a}(q)=-2t_{\text{imp}}\cos(q)+t_{\perp\text{imp}}\), and \(\epsilon_{s}(q)=-2t_{\text{imp}}\cos(q)-t_{\perp\text{imp}}\). The density of the impurity on site \(j\) is \[\rho_{\text{imp},1,j}=d_{1,j}^{\dagger}d_{1,j}. \tag{4}\] The ladder Hamiltonian \(H_{\text{lad}}\) is given by \[H_{\text{lad}}=H_{1}^{0}+H_{2}^{0}-t_{\perp}\sum_{j=1}^{L}(b_{1,j}^{\dagger}b _{2,j}+\text{h.c.}) \tag{5}\] where \(b_{a,j}\) (\(b_{a,j}^{\dagger}\)) are the destruction (creation) operators of a boson of the bath on chain \(a\) and site \(j\). The operators \(b\) obey the usual commutation relation rules. The single chain Hamiltonian is the Bose-Hubbard one \[H_{i}^{0}=-t_{b}\sum_{j=1}^{L-1}(b_{i,j+1}^{\dagger}b_{i,j}+\text{h.c.})+\frac {U_{i}}{2}\sum_{j=1}^{L}\rho_{i,j}(\rho_{i,j}-1)-\mu_{i}\sum_{j}\rho_{i,j}. \tag{6}\] The eq. (1) is convenient for the numerical study. In order to make connection with the field theory analysis we can also consider the same problem in a continuum. In that case the Hamiltonian becomes \[H=\frac{P^{2}}{2M}-t_{\perp\text{imp}}(|1\rangle\langle 2|+|2 \rangle\langle 1|)\\ +H_{\text{lad}}+U(\rho_{1}(X)|1\rangle\langle 1|+\rho_{2}(X)|2 \rangle\langle 2|), \tag{7}\] Figure 1: (color online) Impurity in a two-leg bosonic ladder. The blue solid circles represent the bath particles and the red circle represents the impurity. The bath particles move along the legs (resp. between the legs) with hopping \(t_{b}\) (resp. \(t_{\perp}\))(see text). The impurity moves in both longitudinal and transverse directions with amplitudes \(t_{\text{imp}}\) and \(t_{\perp\text{imp}}\) (see text) respectively. The impurity and the bath particles interact by the contact interactions \(U_{\text{1imp}}\) and \(U_{\text{2imp}}\) in leg 1 and leg 2 respectively. where \(X\) and \(P\) are the position and momentum operators of the impurity. The ladder Hamiltonian (5) in the continuum becomes as \[H_{\rm lad}=H_{1}^{0}+H_{2}^{0}-t_{\perp}\int dx(\psi_{1}^{\dagger}(x)\psi_{2}(x) +{\rm h.c.}), \tag{8}\] and the single chain Hamiltonian is \[H_{i}^{0}=\frac{1}{2m}\int dx|\nabla\psi_{i}(x)|^{2}+\frac{U_{i}}{2}\int dx\rho _{i}(x)^{2}-\mu_{i}\int dx\rho_{i}(x), \tag{9}\] where \(m\) is the mass of the bosons, \(\mu_{i}\) is the chemical potential and \(U_{i}\) is the intrachain interaction. 
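To make the diagonalization of the impurity kinetic energy in Eqs. (2)-(3) concrete, the following Python sketch builds the single-impurity hopping matrix on a small periodic ladder and checks that its spectrum reproduces the bonding and anti-bonding bands \(\epsilon_{s}(q)=-2t_{\rm imp}\cos(q)-t_{\perp\rm imp}\) and \(\epsilon_{a}(q)=-2t_{\rm imp}\cos(q)+t_{\perp\rm imp}\). The ladder size and hopping amplitudes below are illustrative choices only, not the parameters used in the simulations, and periodic boundary conditions are used so that the momenta are sharply defined.

```python
import numpy as np

# Illustrative parameters (not the simulation parameters of the paper)
L = 20            # sites per leg, periodic boundary conditions
t_imp = 1.0       # longitudinal hopping of the impurity
t_perp_imp = 3.0  # transverse hopping of the impurity

# Single-particle Hamiltonian H_K on the two-leg ladder, basis index = leg * L + site
H = np.zeros((2 * L, 2 * L))
for leg in range(2):
    for j in range(L):
        a, b = leg * L + j, leg * L + (j + 1) % L
        H[a, b] = H[b, a] = -t_imp            # hopping along the leg
for j in range(L):
    H[j, L + j] = H[L + j, j] = -t_perp_imp   # hopping between the legs

# Analytic bonding / anti-bonding dispersions, Eq. (3)
q = 2 * np.pi * np.arange(L) / L
eps_s = -2 * t_imp * np.cos(q) - t_perp_imp
eps_a = -2 * t_imp * np.cos(q) + t_perp_imp

assert np.allclose(np.sort(np.linalg.eigvalsh(H)),
                   np.sort(np.concatenate([eps_s, eps_a])))
print("band splitting at q=0:", eps_a.min() - eps_s.min())  # = 2 * t_perp_imp
```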
### Observables To characterize the dynamics of the impurity in the ladder we mostly focus on the Green's function of the impurity. We study it at zero temperature both analytically and numerically via DMRG. Compared to the case [9; 17] where the impurity was confined to a single chain, it is now necessary to introduce two independent Green's functions for the impurity. The Green's function are in the chain basis \[G_{\alpha\beta}(p,t)=\langle\hat{d}_{\alpha,p}(t)\hat{d}_{\beta,p}^{\dagger}( t=0)\rangle, \tag{10}\] where \(\alpha\) and \(\beta\) can take the values \(1,2\) corresponding to the chain index and \(\langle\cdots\rangle\) denotes the average in the ground state of the bath, and with zero impurities present. \(O(t)\) denotes the usual Heisenberg time evolution of the operator \[O(t)=e^{iHt}Oe^{-iHt}, \tag{11}\] and the operator \(\hat{d}_{1p}\) is the operator destroying an impurity with momentum \(p\) given by \[\hat{d}_{1,p}=\sum_{j}e^{ipr_{j}}d_{1,j}, \tag{12}\] with \(r_{j}=aj\) on the lattice and the corresponding integral \[\hat{d}_{1,p}=\int dxe^{ipx}d_{1}(x), \tag{13}\] in the continuum. By symmetry we can restrict ourselves to \(G_{11}(p,t)\) and \(G_{12}(p,t)\). The two other Green's function are simply related to (10) by \(G_{22}(p,t)=G_{11}(p,t)\) and \(G_{12}(p,t)=G_{21}(p,t)\), Instead of using the chain basis it can be more convenient to use the symmetric \(s\) and antisymmetric \(a\) operators leading to the two Green's functions \[\begin{split}& G_{s}(p,t)=\langle\hat{d}_{s,p}(t)\hat{d}_{s,p}^{ \dagger}(t=0)\rangle,\\ & G_{a}(p,t)=\langle\hat{d}_{a,p}(t)\hat{d}_{a,p}^{\dagger}(t=0) \rangle,\end{split} \tag{14}\] all other combinations being zero by symmetry. One has \(G_{s}(p,t)=G_{11}(p,t)+G_{12}(p,t)\), \(G_{a}(p,t)=G_{11}(p,t)-G_{12}(p,t)\). ### Bosonization representation To deal with the Hamiltonian defined in the previous section, we use the fields \(\theta_{\alpha}(x)\) and \(\phi_{\alpha}(x)\)[9] for chain \(\alpha=1,2\) which are related to the field operators of the system via \[\rho_{\alpha}(x)=\rho_{0,\alpha}-\frac{\nabla\phi_{\alpha}(x)}{\pi}+\rho_{0} \sum_{p\neq 0}e^{2ip(\pi\rho_{0,\alpha}x-\phi_{\alpha}(x))}, \tag{15}\] where \(\rho_{0,\alpha}\) is the average density on the chain \(\alpha=1,2\). The creation operator of a particle in the bath in term of \(\theta\) and \(\phi\) is given to lowest order by \[\psi_{\alpha}^{\dagger}(x)=\rho_{0,\alpha}^{1/2}e^{-i\theta_{\alpha}(x)}. \tag{16}\] The conjugate field operators \(\phi_{1,2}\) and \(\theta_{1,2}\) obeys \[[\phi(x_{1}),\frac{\nabla\theta(x_{2})}{\pi}]=i\delta(x_{1}-x_{2}). \tag{17}\] Using the bosonization framework, the Hamiltonian of the ladder is given by \[H_{\rm lad}=H_{s}+H_{a}, \tag{18}\] with \[\begin{split} H_{s}=&\frac{1}{2\pi}\int dx[u_{s}K_{ s}(\partial_{x}\theta_{s})^{2}+\frac{u_{s}}{K_{s}}(\partial_{x}\phi_{s})^{2}],\\ H_{a}=&\frac{1}{2\pi}\int dx[u_{a}K_{a}(\partial _{x}\theta_{a})^{2}+\frac{u_{a}}{K_{a}}(\partial_{x}\phi_{a})^{2}]\\ &-2\rho_{0}t_{\perp}\int dx\cos(\sqrt{2}\theta_{a}(x)),\end{split} \tag{19}\] and \[\begin{split}\theta_{s,a}=&\frac{\theta_{1}\pm \theta_{2}}{\sqrt{2}},\\ \phi_{s,a}=&\frac{\phi_{1}\pm\phi_{2}}{\sqrt{2}}. \end{split} \tag{20}\] The cosine term [9; 17] opens a gap in the antisymmetric sector when \(K_{a}>1/4\). This massive phase for the antisymmetric phase excitations signals the existence of phase coherence across the two legs of the ladder, with exponentially decreasing correlations for the antisymmetric density-density correlations. 
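The change of basis between the chain Green's functions of Eq. (10) and the symmetric/anti-symmetric ones of Eq. (14), together with the Fourier transform of Eq. (12), can be sketched as follows; the arrays below are random placeholders standing in for actual \(G_{11}(x,t)\) and \(G_{12}(x,t)\) data.

```python
import numpy as np

# Placeholder arrays standing in for G_11(x, t) and G_12(x, t);
# axis 0 = relative position x (lattice spacing a = 1), axis 1 = time.
L, nt = 101, 50
x = np.arange(L) - L // 2
G11 = np.random.randn(L, nt) + 1j * np.random.randn(L, nt)
G12 = np.random.randn(L, nt) + 1j * np.random.randn(L, nt)

# Symmetric (bonding) and anti-symmetric (anti-bonding) combinations, Eq. (14)
Gs_x = G11 + G12
Ga_x = G11 - G12

# Momentum-space Green's functions, d_p = sum_j exp(i p r_j) d_j, Eq. (12)
p = np.linspace(0, np.pi, 11)          # momenta from 0 to pi
phase = np.exp(1j * np.outer(p, x))    # shape (n_p, L)
Gs_p = phase @ Gs_x                    # shape (n_p, nt)
Ga_p = phase @ Ga_x
print(Gs_p.shape, Ga_p.shape)
```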
The symmetric sector is described by the usual TLL Hamiltonian, and has powerlaw correlations. A numerical calculation of the TLL parameters for the massless phase can be found in [30]. ## III Analytical solution for weak coupling Let us now investigate the full Hamiltonian (7) (or (1)) to compute the Green's function of the impurity (10). Using (15) the interaction term \(H_{\rm coup}\) with the impurity in terms of the \(d_{s}\) and \(d_{a}\) becomes \[H_{\rm coup}=\frac{U}{2}\int dx(\rho_{1}(x)+\rho_{2}(x))(d_{s}(x)^ {\dagger}d_{s}(x)+d_{a}(x)^{\dagger}d_{a}(x))\\ +\frac{U}{2}\int dx(\rho_{1}(x)-\rho_{2}(x))(d_{s}(x)^{\dagger}d_{ a}(x)+h.c), \tag{21}\] which leads to the expression, in terms of the symmetric and anti-symmetric modes of the bath, \[H_{\rm coup}=\frac{-U}{\sqrt{2}\pi}\int dx\nabla\phi_{s}(x)(d_{s} (x)^{\dagger}d_{s}(x)+d_{a}(x)^{\dagger}d_{a}(x))\\ -\frac{U}{\sqrt{2}\pi}\int dx\nabla\phi_{a}(x)(d_{s}(x)^{\dagger }d_{a}(x)+h.c). \tag{22}\] Note that we have kept in (22) only the most relevant term, which for bosons is the forward scattering on the symmetric and anti-symmetric modes of the bath. To compute the Green's functions We use same approach as in [17], namely the linked cluster expansion (LCE) [10; 24; 31]. The calculation is detailed in Appendix A and gives the asymptotic behavior of the impurity Green's function (10) for \(2t_{\perp{\rm imp}}>\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}\) as \[|G_{s}(p,t)|=e^{-\frac{K_{a}U_{a}^{2}}{4\pi a_{a}^{2}}(1+\frac{1 2t_{\rm imp}^{2}p^{2}}{u_{a}^{2}})\log(t)}, \tag{23}\] \[|G_{a}(0,t)|=e^{-aU_{a}^{2}t}\] where \(K_{s}=.835\), \(u_{s}=1.86\) for \(t_{b}=t_{\perp}=1,U_{1}=U_{2}=\infty,\rho_{0}=1/3\)[30] and \[a\simeq\frac{K_{a}}{4u_{a}\pi^{2}}\frac{(u_{a}^{2}q_{-}^{2}+ \tilde{\Delta}^{2})}{q_{-}(2t_{\rm imp}\sqrt{u_{a}^{2}q_{-}^{2}+\tilde{\Delta} ^{2}+u_{a}^{2}})}, \tag{24}\] where \(q_{-}\) and \(\tilde{\Delta}\) are expressed in A. In our LCE calculation we also finds that for \(2t_{\perp{\rm imp}}<\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}\) the Green's function in both symmetric and anti-symmetric sectors decay as a power-law at \(p=0\). However, the Green's functions of the impurity at \(p=0\) in two-decoupled chains [18] always decay as power-law and exponentially in the symmetric and anti-symmetric sector at any finite transverse tunneling of the impurity, respectively. For this case, one needs to choose a sufficiently small \(t_{\perp{\rm imp}}\). However, to compare the LCE with the t-DMRG results, we need to wait for a longer time for which the impurity will realize the effect of the \(t_{\perp{\rm imp}}\). Such time is inaccessible in t-DMRG because of the entanglement growth in the system with time, which is often linear [32]. So, in order to compare t-DMRG results with LCE, we chose a sufficiently large \(t_{\perp{\rm imp}}\) in our work. For a weak repulsion between the impurity and the bath we thus find that the Green's function in the symmetric mode decays as a power-law with time as was the case with an impurity confined to a single chain [17]. In the anti-symmetric mode on the other hand it decays exponentially. ## IV Numerical solution Analyzing the regime \(U\gg\Delta_{a}\) is much more involved since now excitations across the antisymmetric gap can be created. We thus turn to a numerical analysis of this problem. ### Method We use time-dependent DMRG (t-DMRG) [32] to compute the Green's function of the impurity, and we follow the method described in Ref. [17; 31; 33]. 
For completeness let us recall the method, which is described below. We map the ladder-impurity problem to a one-dimensional problem by a supercell approach. We denote the bath particles in leg 1 and leg 2 by \(B\) and \(C\), the impurity in leg 1 and leg 2 by \(A\) and \(D\); the total number of bath particles and the number of impurities are conserved separately. The local Hilbert-space dimension for A, B, C, D is two for hardcore bosons, hence the dimension of the local Hilbert space of the supercell (A, B, C, D) is \(2\times 2\times 2\times 2=16\). We compute the Green's function of the impurity in the ground state of the ladder. The ground state (\(|GS_{b}\rangle\)) is computed using DMRG. The Green's functions \(G_{11}(x,t)\) (\(G_{12}(x,t)\)) of the impurity in the Heisenberg picture are given by \[G_{11}(x,t)=e^{iE_{GS_{b}}t}\langle GS_{b}|d_{1,\frac{L+1}{2}-x} e^{-iHt}d_{1,\frac{L+1}{2}}^{\dagger}|GS_{b}\rangle,\] \[G_{12}(x,t)=e^{iE_{GS_{b}}t}\langle GS_{b}|d_{2,\frac{L+1}{2}-x} e^{-iHt}d_{1,\frac{L+1}{2}}^{\dagger}|GS_{b}\rangle, \tag{25}\] where \(E_{GS_{b}}\) is the ground state energy of the bath. We compute \(e^{-iHt}d_{1,\frac{L+1}{2}}^{\dagger}|GS_{b}\rangle\) using t-DMRG, while \(\langle GS_{b}|d_{2,\frac{L+1}{2}-x}\) (\(\langle GS_{b}|d_{1,\frac{L+1}{2}-x}\)) are computed using DMRG. From \(G_{11}(x,t)\) and \(G_{12}(x,t)\), we compute \(G_{s}(x,t)=G_{11}(x,t)+G_{12}(x,t)\) and \(G_{a}(x,t)=G_{11}(x,t)-G_{12}(x,t)\). For the numerical calculation we have used a bath of hardcore bosons at a density of \(\rho_{0}=1/3\). This choice avoids the Mott-insulating phase that the ladder's symmetric mode might enter at commensurate density [30]. We have used a bond dimension \(\chi=400\) to compute the Green's function in a reasonable time. We chose the Hamiltonian parameters \(t_{\perp}=t_{b}=t_{\rm imp}=1,U_{1}=U_{2}=\infty\) and various values of \(t_{\perp{\rm imp}}\) and \(U\). We fix the size of the system to \(L=101\) sites per leg. ### Zero momentum regime We show the Green's function of the impurity in the anti-symmetric and symmetric modes, \(|G_{a}(p,t)|,|G_{s}(p,t)|\), at momentum \(p=0\) in Fig. 2 on semi-log and log-log scales. We find that \(|G_{s}(0,t)|\) decays as a power-law, which is similar to the one observed for the one-dimensional motion of the impurity in a two-leg bosonic ladder [17]. However, the Green's function of the impurity in the anti-symmetric mode shows an exponential decay. For the parameters used in these two figures the gap in the antisymmetric sector is \(\Delta_{a}=0.33t_{b}\). This value of the impurity-bath interaction corresponds to the regime of weak coupling for which a comparison with the analytical results of Sec. III is meaningful. The comparison of the numerical results with the analytical results (23) is shown in Fig. 3. The numerical analysis thus fully confirms that in this regime \(G_{s}(0,t)\) and \(G_{a}(0,t)\) decay as a power-law and exponentially, respectively. To further analyze the data we fit the numerical results to the form \[\begin{split}|G_{a}(p=0,t)|\propto\exp(-t/\tau(0))\\ |G_{s}(p=0,t)|\propto\left(\frac{1}{t}\right)^{\alpha}\end{split} \tag{26}\] #### iii.2.1 Small U To analyze the data, we use the analytic estimates of Sec. III, which suggest a power-law and an exponential decay of the Green's function in the symmetric and the anti-symmetric mode, respectively. We fit the numerical data with the linked cluster expansion (LCE) result at \(p=0\), and it agrees very well with the numerical results.
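A minimal sketch of the fits of Eq. (26), here applied to synthetic data with an assumed exponent and lifetime purely for illustration: the power-law exponent \(\alpha\) follows from a linear fit of \(\log|G_{s}(0,t)|\) versus \(\log t\), and the inverse lifetime \(1/\tau(0)\) from a linear fit of \(\log|G_{a}(0,t)|\) versus \(t\).

```python
import numpy as np

# Synthetic stand-ins for the measured moduli of the Green's functions at p = 0
t = np.linspace(1.0, 20.0, 200)
alpha_true, tau_true = 0.15, 8.0     # assumed values, for illustration only
Gs = t ** (-alpha_true)              # power-law decay, symmetric mode
Ga = np.exp(-t / tau_true)           # exponential decay, anti-symmetric mode

# Power-law exponent: slope of log|G_s| versus log t
alpha = -np.polyfit(np.log(t), np.log(np.abs(Gs)), 1)[0]

# Inverse lifetime: slope of log|G_a| versus t
inv_tau = -np.polyfit(t, np.log(np.abs(Ga)), 1)[0]

print(f"alpha = {alpha:.3f}, 1/tau = {inv_tau:.3f}")
```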
The numerical and analytical results for \(G_{a}\) and \(G_{s}\) are shown in Fig. 3. Figure 2: (color online) The modulus of the Green's function of the impurity in the anti-symmetric (left panel) and symmetric (right panel) sectors as a function of time for a hardcore bosonic bath at \(t_{\mathrm{imp}}=t_{b}=1\), \(t_{\perp}=1\), \(U=0.5-1\), \(t_{\perp\mathrm{imp}}=3\) and \(p=0\). The left panel, depicted on a semi-log scale, shows a linear behavior, while the right panel shows a linear behavior on a log-log scale. Figure 3: (color online) Lifetime of the impurity in the anti-symmetric sector (upper panel) and power-law exponent in the symmetric sector (lower panel) at \(t_{\mathrm{imp}}=t_{b}=1\), \(t_{\perp}=1\), \(t_{\perp\mathrm{imp}}=3\) as a function of \(U\) at zero momentum. The black lines are numerical results and the red lines are LCE results; the numerical and analytical results show a nice agreement for small \(U\). #### iii.2.2 Hardcore bath-impurity repulsion In the case of \(U\to\infty\), \(|G_{s}(p,t)|\) is plotted on a log-log scale in Fig. 4 for different values of \(t_{\perp\mathrm{imp}}\) at \(p=0\). As shown, \(|G_{s}(p,t)|\) decays as a power-law, and the power-law exponent as a function of \(t_{\perp\mathrm{imp}}\) is depicted in Fig. 5. The power-law exponent increases as a function of \(t_{\perp\mathrm{imp}}\). For a small \(t_{\perp\mathrm{imp}}\), the exponent is similar to the one observed for the purely one-dimensional motion of an impurity in a two-leg ladder bath [17], while for a large \(t_{\perp\mathrm{imp}}\), it is similar to the one for the motion of an impurity in a one-dimensional bath [29; 14]. Figure 5: (color online) Power-law exponent of the symmetric Green's function of the impurity with a hardcore repulsion with the bath of hardcore bosons at filling \(1/3\) as a function of \(t_{\perp\mathrm{imp}}\) at \(t_{\mathrm{imp}}=1\), \(U\rightarrow\infty\) and \(p=0\). Circles are the numerical data for \(\chi=400\) and the line is a guide to the eyes. ### Green's function at finite momentum As we have seen previously, the Green's function decays as a power-law in the symmetric mode and exponentially in the anti-symmetric mode at \(p=0\); we now turn to finite momentum. The Green's functions \(|G_{s}(p,t)|\) and \(|G_{a}(p,t)|\) for finite momentum are shown in Figs. 6 and 7, respectively, for various momenta \(p\) of the impurity. We find that \(|G_{s}(p,t)|\) decays as a power-law below \(p=0.3\pi\); beyond this point, it decays exponentially and the impurity in the symmetric mode enters into a QP regime. An analogous crossover has been established for the one-dimensional motion of an impurity in a one-dimensional bath [31] and in a two-leg bosonic ladder bath [17; 33]. The crossover depends on the TLL characteristics of the bath in the symmetric sector, namely the velocity of sound \(u_{s}\) in the ladder and the TLL parameter \(K_{s}\). Using the values extracted from [30] we get \[p^{*}=\frac{u_{s}}{2t_{b}}=0.93 \tag{27}\] which is in reasonably good agreement with the observed change of behavior in Fig. 6. Figure 6: (color online) Modulus of the Green's function of the impurity in the symmetric sector (see text), \(|G_{s}(p,t)|\), at different momenta \(p\) (c.f. legend) of the impurity. The Hamiltonian parameters are \(t_{b}=1\), \(t_{\perp}=1\), \(t_{\mathrm{imp}}=1\), \(t_{\perp\mathrm{imp}}=3\) and \(U=1.0\), on a log-log scale (right panel) and semi-log scale (left panel). We observe a linear behavior on the log-log scale for small momenta (p = \(0-0.2\pi\)), and a linear behavior on the semi-log scale for large momenta p = \(0.3\pi-\pi\). Figure 7: Modulus of the Green's function of the impurity in the anti-symmetric sector (see text), \(|G_{a}(p,t)|\), at different momenta \(p\) (shown in inset) of the impurity. The Hamiltonian parameters are \(t_{b}=1\), \(t_{\perp}=1\), \(t_{\mathrm{imp}}=1\), \(t_{\perp\mathrm{imp}}=3\) and \(U=1.0\), on a log-log scale (right panel) and semi-log scale (left panel). We observe a linear behavior on the semi-log scale for all momenta (p = \(0-0.9\pi\)). Beyond p = p\({}^{*}\) and for small \(U\), the Green's function decays exponentially, the impurity behaves like a QP, and the Green's function of the impurity in terms of the lifetime \(\tau(p)\) is given by \[|G_{s}(p,t)|=\exp(-t/\tau(p)) \tag{28}\] In the top panel of Fig. 8, we plot the inverse lifetime \(1/\tau(p)\) of the QP in the symmetric mode of the impurity, defined in eq. (28), as a function of p for interaction \(U=1,t_{\perp\mathrm{imp}}=3\). As can be expected, \(1/\tau(p)\) increases with increasing p. In Fig. 7, we have shown \(|G_{a}(p,t)|\) on semi-log and log-log scales; we find that \(|G_{a}(p,t)|\) always decays exponentially for all momenta, and the impurity in the anti-symmetric band always behaves like a QP. In the bottom panel of Fig. 8, we have shown the inverse lifetime as a function of \(p\); it shows a non-monotonic behavior but overall increases with increasing \(p\). ## V Discussion Our findings suggest that the impurity, because of its motion in both the horizontal and transverse directions, exhibits a dynamics in the ladder very different from the one observed for the one-dimensional (1D) motion of an impurity in a ladder or in a 1D bath. Let us first discuss the weak interaction limit. Initially, both the symmetric and anti-symmetric modes of the impurity couple to both the gapless and gapped modes of the bath, but our numerical and analytical findings suggest that in the long-time limit the impurity in the anti-symmetric sector effectively couples to the gapped mode of the bath, while the impurity in the symmetric mode couples to the gapless mode of the bath with an effective interaction \(U/\sqrt{2}\). The numerical and analytical results show an excellent agreement, as depicted in Figs. 2 and 3. The transverse tunneling of the impurity is the main ingredient in the exponential decay of the Green's function of the impurity in the anti-symmetric mode, while for the Green's function in the symmetric mode the power-law exponent does not depend on the transverse tunneling of the impurity. Now let us turn to the limit of infinite interaction between the impurity and the bath. We find that the Green's function in the symmetric sector decays as a power law at zero momentum, which is similar to the one observed in the impurity dynamics in a one-dimensional (1D) bath and in a two-leg ladder bath. However, the power-law exponent increases with the transverse tunneling amplitude of the impurity, in contrast with the one observed for small \(U\), where the power-law exponent does not depend on \(t_{\perp\mathrm{imp}}\). We find that for small \(t_{\perp\mathrm{imp}}\), the power-law exponent is the same as the one observed for 1D motion in the ladder bath, while for larger values the power-law exponent is equal to the one observed in the 1D bath. As a function of \(t_{\perp\mathrm{imp}}\), we observe that the impurity dynamics exhibits a dimensional crossover from the ladder to the 1D bath.
For large transverse tunneling, one can understand that the impurity will energetically favor its symmetric mode, and it would be hard to excite the impurity from the symmetric to the anti-symmetric mode and vice versa. Hence the impurity effectively moves in a gapless 1D bath formed by the ladder's symmetric mode. This description contradicts our common understanding that in a ladder the power-law exponent should be smaller than that of 1D motion in the ladder [17]. It will be interesting to investigate how the dynamics of the impurity behaves with an increasing number of legs, and this needs a further study. Our findings also suggest that the impurity in the symmetric sector at zero momentum in the ladder can be viewed as an X-ray edge problem [9]. The Green's function in the X-ray edge problem has a similar behaviour as the Green's function of the symmetric mode at zero momentum. Of course in this case, contrarily to the historical X-ray edge problem, the impurity can move. At zero longitudinal tunneling of the impurity the impurity-ladder problem can be mapped onto a spin-boson problem [4], and by using a unitary transformation it can also be mapped onto a Kondo problem [34]. The impurity-ladder problem can thus be viewed as a quantum simulator for the spin-boson model and the Kondo problem. Figure 8: Inverse life-time of the impurity in both symmetric (upper panel) and anti-symmetric (lower panel) sectors as function of momentum at \(t_{b}=1\), \(t_{\perp}=1\), \(t_{\mathrm{imp}}=1\), \(t_{\perp\mathrm{imp}}=3\) and \(U=1.0\). Now we finally turn to the case of finite momentum. For small interaction and small momentum, the Green's functions in the symmetric mode of the impurity decay as a power-law, as shown in the upper panel of Fig. 6. Beyond a critical momentum \(\mathrm{p}^{*}\), the Green's function, depicted in the lower panel of Fig. 6, decays exponentially, and the impurity enters into a QP regime which is very similar to the ones observed for the 1D motion of an impurity in a one-dimensional bath and in a two-leg bosonic ladder when \(t_{\perp\mathrm{imp}}\) is zero. The critical momentum is precisely equal to that of the 1D motion of an impurity in a ladder and in a 1D bath. The Green's functions of the impurity in the anti-symmetric mode are depicted in Fig. 7; they always decay exponentially for all momenta, and the impurity in the anti-symmetric mode always behaves like a QP. ## VI Conclusion and perspectives We have studied the dynamics of an impurity in a reservoir of hard-core bosons moving in a two-leg ladder where the impurity may tunnel in both transverse and horizontal directions. We have computed the Green's function of the impurity for different momenta in order to understand the dynamics of the impurity. We use both analytical and numerical approaches, the latter consisting of the time-dependent DMRG. When impurity-bath interactions are weak, the Green's function of the impurity in the symmetric sector decays as a power-law below a critical momentum and exponentially above the critical momentum, like the 1D dynamics of an impurity in a two-leg ladder bath where transverse tunneling of the impurity is suppressed. However, in the anti-symmetric sector the Green's function of the impurity always decays exponentially and the impurity behaves like a quasi-particle.
One can expect that when the bath is made of several 1D chains then in the lowest energy band of the impurity, the impurity would exhibit a crossover (beyond a critical momentum) from a power-law to an exponential decay. However, in the other energy sectors of the impurity, the Green's function would decay exponentially. The above observations suggest that the crossover of the dynamics of the impurity from a one-dimensional bath to a two-dimensional bath made up of finite number of 1D baths is not smoothly connected. The system we have studied can be tested experimentally in the context of circuit QED [35; 36] and cold atoms. When the impurity moves only in transverse direction, the impurity acts like a two-level system which is analogous to a supercondcting qubit, and the two-leg ladder bath acts like a one-dimensinal wave guide, and the impurity-reservoir interaction is the equivalent of the standard qubit-waveguide coupling. The bosonic ladder has been realized experimentally in ultracold gases [37; 38] and atom chips [39].The impurity dynamics in one-dimensional bath has been investigated experimentally using ultracold gasses [20; 21; 22; 23]. Combination of these aspects and ongoing experimental advancement in the ultracold gasses could provide the ideal testbed for our findings in near future. ###### Acknowledgements. Calculations were performed using the Matrix Product Toolkit [40]. We thank N. Laflorencie and G. Roux for providing us with the precise numerical value for the TLL parameters of the ladder of publication [30]. This work was supported in part by the Swiss National Science Foundation under grant 200020-188687. ## Appendix A Green's function of the mobile impurity in the two-leg bosonic ladder As we have shown in the main text (see section III) that for small interaction \(U\) between the impurity and the ladder bath the impurity effectively coupled to the forward scattering terms of the gapped and gapless modes of the bath. In this section, we give a linked-cluster expansion (LCE) expression of the Green's function of the impurity. We express \(\cos(\sqrt{2}\theta_{a})=1-\theta_{a}^{2}\) and use the continuity equation \[\nabla\phi_{a}(q,t)=\frac{\partial\theta_{a}(q,t)}{\partial t}. \tag{10}\] The impurity-bath Hamiltonian is expressed as \[H=H_{s}+H_{a}+H_{\mathrm{imp}}+H_{\mathrm{coup}}, \tag{11}\] where \(H_{\mathrm{s}}\), \(H_{\mathrm{a}}\), \(H_{\mathrm{imp}}\), and \(H_{\mathrm{coup}}\) are the symmetric mode, anti-symmetric mode of the bath, the impurity Hamiltonian, and the coupling between the impurity and the bath, respectively, and these terms in bosonized language are expressed as \[H_{s} =\sum_{q}u_{s}|q|b^{\dagger}_{s,q}b_{s,q},\] \[H_{a} =\frac{1}{2\pi}\int dx\Big{[}u_{a}K_{a}(\partial_{x}\theta_{a})^{2 }+\frac{u_{a}}{K_{a}}(\partial_{x}\phi_{a})^{2}\Big{]}\] \[-\Delta_{a}^{2}\int dx\theta(x)^{2},\] \[H_{\rm coup} =\sum_{q,k}\Big{[}V(q)(d^{\dagger}_{s,k+q}d_{s,k}+d^{\dagger}_{a,k+q}d_{a,k})(b_{s,q}+b^{\dagger}_{s,-q})\] \[+\tilde{U}(d^{\dagger}_{a,k+q}d_{s,k}+d^{\dagger}_{s,k+q}d_{a,k}) \frac{\partial\theta_{a}(q,t)}{\partial t}\Big{]},\] \[H_{\rm imp} =\sum_{q}\epsilon_{s}(q)d^{\dagger}_{s,q}d_{sq}+\epsilon_{a}(q)d ^{\dagger}_{u,q}d_{a,q}, \tag{24}\] where \(\tilde{U}=-\frac{U}{\sqrt{2\pi}}\), \(b^{\dagger}\) and \(d^{\dagger}\) are the creation operators for the bath in the bosonized language and the impurity respectively, \(\epsilon_{a}(p)=-2t_{\rm imp}\cos(p)+t_{\perp{\rm imp}}\), \(\epsilon_{s}(p)=-2t_{\rm imp}\cos(p)-t_{\perp{\rm imp}}\). 
\(\Delta_{a}\) is the gap in the anti-symmetric mode of the bath. The coupling term \(V(q)\) can be expressed as \[V(q) = \frac{U}{\sqrt{2}}\sqrt{\frac{K_{s}|q|}{2\pi L}}\exp\Big{(}-\frac{ |q|}{2q_{c}}\Big{)}. \tag{25}\] The Green's function of the impurity in symmetric and ant-symmetric sectors are defined by \[G_{s}(p,t) =-i\langle d_{s,p}(t)d^{\dagger}_{s,p}(0)\rangle, \tag{26}\] \[G_{a}(p,t) =-i\langle d_{a,p}(t)d^{\dagger}_{a,p}(0)\rangle.\] By using LCE, (26) can be written as \[G_{s}(p,t) = -ie^{-i\epsilon_{s}(p)t}e^{F_{2s}(p,t)},\] \[G_{a}(p,t) = -ie^{-i\epsilon_{a}(p)t}e^{F_{2s}(p,t)}, \tag{27}\] where \(F_{2,s/a}(p,t)\) is defined as \[F_{2s/a}(p,t) = e^{ie_{s/a}(p)t}W_{2s/a}(p,t). \tag{28}\] \(W_{2s/a}(p,t)\) is given by \[W_{2s/a}(p,t) = -\frac{1}{2}\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}\] \[\times\langle T_{t}d_{s/a,p}(t)H_{\rm coup}(t_{1})H_{\rm coup}(t_{2 })d^{\dagger}_{s/a,p}(0)\rangle.\] By employing the Wick's theorem \(W_{2a}(p,t)\) can be expressed as \[W_{2a}(p,t) =-\sum_{q}\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}Y(t-t_{1})Y(t_{1}-t_ {2}) \tag{29}\] \[\times Y(t_{2})\Big{[}V(q)^{2}e^{-i\epsilon_{a}(p)(t-t_{1})}e^{-i \epsilon_{a}(p+q)(t_{1}-t_{2})}\] \[\times e^{-i\epsilon_{a}(p)t_{2}}e^{-i(u_{a}|q|(t_{1}-t_{2}))}\] \[+\frac{U_{a}^{2}}{4\pi}\sqrt{K_{a}^{2}q^{2}+\frac{2\pi\Delta_{a} ^{2}K_{a}}{u_{a}}}e^{-i\epsilon_{a}(p)(t-t_{1})}\] \[\times e^{-i\epsilon_{a}(p+q)(t_{1}-t_{2})}e^{-i\epsilon_{a}(p)t _{2}}\] \[\times e^{-i\left(\sqrt{u_{a}^{2}q^{2}+\frac{2\pi\Delta_{a}^{2} u_{a}}{K_{a}}}(t_{1}-t_{2})\right)}\Big{]},\] where \(Y(t)\) is a step function, which is zero for \(t<0\) and one for \(t>0\). \(U_{a}\) is coupling between the impurity and the anti-symmetric mode of the bath, at this moment \(U_{a}=U\). \(Y(t)\) changes the limit of integration of \(t_{2}\) and \(t_{1}\), and \(F_{2a}(p,t)\) is modified as \[F_{2a}(p,t) =-\sum_{q}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\Big{[}V(q)^{2}e ^{i\epsilon_{a}(p)t_{1}}e^{-i\epsilon_{a}(p+q)(t_{1}-t_{2})}\] \[\times e^{-i\epsilon_{a}(p)t_{2}}e^{-i(u_{a}|q|(t_{1}-t_{2}))}+ \frac{U_{a}^{2}}{4\pi}\] \[\times\sqrt{K_{a}^{2}q^{2}+\frac{2\pi\Delta_{a}^{2}K_{a}}{u_{a}}} e^{i\epsilon_{a}(p)t_{1}}e^{-i\epsilon_{a}(p+q)(t_{1}-t_{2})}\] \[\times e^{-i\epsilon_{a}(p)t_{2}}e^{-i\left(\sqrt{u_{a}^{2}q^{2}+ \frac{2\pi\Delta_{a}^{2}u_{a}}{K_{a}}}(t_{1}-t_{2})\right)}\Big{]}. \tag{30}\] We can simplify eq. (A) as \[F_{2a}(p,t) =-\sum_{q}\int du\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\] \[\times\Big{[}V(q)^{2}e^{it_{1}u}e^{-it_{2}u}\delta(u-\epsilon(p )+\epsilon(p+q)+u_{a}|q|)\] \[+\frac{U_{a}^{2}}{4\pi}\sqrt{K_{a}^{2}q^{2}+\frac{2\pi\Delta_{a} ^{2}K_{a}}{u_{a}}}e^{-it_{1}(u-2t_{\perp{\rm imp}})}e^{it_{2}(u-2t_{\perp{\rm imp }})}\] \[\times\delta\Big{(}u+\epsilon(p)-\epsilon(p+q)-\sqrt{u_{a}^{2}q^{2 }+\frac{2\pi\Delta_{a}^{2}K_{a}}{u_{a}}}\Big{)}\Big{]}. \tag{31}\] Finally, we integrate over \(t_{1}\) and \(t_{2}\) and the real part of \(F_{2a}\) can be expressed as \[{\rm Re}[F_{2a}(p,t)] =-\sum_{q}\int du\Big{[}V(q)^{2}\frac{(1-\cos(ut))}{u^{2}}\] \[\times\delta(u-\epsilon(p)+\epsilon(p+q)+u_{a}|q|)\] \[+\frac{U_{a}^{2}}{4\pi}\sqrt{K_{a}^{2}q^{2}+\frac{2\pi\Delta_{a} ^{2}K_{a}}{u_{a}}}\frac{(1-\cos(t(u-2t_{\perp{\rm imp}})))}{(u-2t_{\perp{\rm imp }})^{2}}\] \[\times\delta(u+\epsilon(p)-\epsilon(p+q)-\sqrt{u_{a}^{2}q^{2}+ \frac{2\pi\Delta_{a}^{2}K_{a}}{u_{a}}})\Big{]}. 
\tag{32}\] Similarly one can show that \[\begin{split}\text{Re}[F_{2s}(p,t)]&=-\sum_{q}\int du \\ &\times\Big{[}V(q)^{2}\delta(u-\epsilon(p)+\epsilon(p+q)+u_{s}|q|)) \\ &\times\frac{(1-\cos(ut))}{u^{2}}+\frac{U_{a}^{2}}{4\pi}\sqrt{K_{a }^{2}q^{2}+\frac{2\pi\Delta_{a}^{2}K_{a}}{u_{a}}}\\ &\times\frac{(1-\cos(t(u+2t_{\perp\text{imp}})))}{(u+2t_{\perp \text{imp}})^{2}}\\ &\times\delta\Big{(}u+\epsilon(p)-\epsilon(p+q)-\sqrt{u_{a}^{2}q ^{2}+\frac{2\pi\Delta_{a}^{2}u_{a}}{K_{a}}}\Big{)}\Big{]},\end{split} \tag{13}\] \(R_{1s/a}(u)\) and \(R_{2}\) are expressed as \[\begin{split} R_{1s/a}(u,p)&=\frac{1}{2\pi}\int dqV (q)^{2}\\ &\times\delta(u-(\epsilon(p)-\epsilon(p+q)-u_{s/a}|q|)),\end{split} \tag{14}\] \[\begin{split} R_{2}(u,p)&=\frac{U_{a}^{2}}{8\pi^{2 }}\int dq\sqrt{K_{a}^{2}q^{2}+\frac{2\Delta_{a}^{2}K_{a}\pi}{u_{a}}}\\ &\times\delta\Big{(}u+\epsilon(p)-\epsilon(p+q)-\sqrt{u_{a}^{2}q ^{2}+\frac{\Delta_{a}^{2}u_{a}2\pi}{K_{a}}}\Big{)}.\end{split} \tag{15}\] For small \(p\), \(\epsilon(p)\simeq 2t_{\text{imp}}p^{2}\). \(R_{1}(u,p)\) is computed in Ref. [17; 31] for \((p-\frac{u_{p}}{2t_{\text{imp}}})<0,\wedge u<0\). However, the computation of \(R_{2}(u,p)\) for an arbitrary \(p\) is difficult analytically so we restrict at \(p=0\). At \(p=0\), \(R_{1}\) is non-zero for \(u<0\) can be expressed as \[R_{1s/a}(u,0)\ \propto\ u, \tag{16}\] and for \(u>0\) \[R_{2}(u,0)\ \propto\ A_{2}. \tag{17}\] Let us define \(A_{s}(u,t)\) and \(A_{a}(u,t)\) as \[\begin{split} A_{a}(u,t)&=\int_{0}^{\infty}du\Big{[} \frac{1-\cos(ut)}{u}\Big{]}\\ &+\int_{\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}-2t_{ \perp\text{imp}}}^{\infty}du\Big{[}\frac{1-\cos(ut)}{u^{2}}A_{2}\Big{]},\\ A_{s}(u,t)&=\int_{0}^{\infty}du\Big{[}\frac{1-\cos (ut)}{u}\Big{]}\\ &+\int_{\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}+2t_{ \perp\text{imp}}}^{\infty}du\Big{[}\frac{1-\cos(ut)}{u^{2}}A_{2}\Big{]}.\end{split} \tag{18}\] In the long time limit \(A_{s}\) and \(A_{a}\) can be expressed as \[\begin{split} A_{a}(u,t)&\simeq A_{2}\pi t,\\ A_{s}(u,t)&\simeq-\log(t),\end{split} \tag{19}\] where \(A_{2}\) is expressed as \[\begin{split} A_{2}&\simeq\frac{U^{2}K_{a}}{4u_{a} \pi^{2}}\frac{(u_{a}^{2}q_{-}^{2}+\tilde{\Delta}^{2})}{q_{-}(2t_{\text{imp}} \sqrt{u_{a}^{2}q_{-}^{2}+\tilde{\Delta}^{2}+u_{a}^{2}})},\end{split} \tag{20}\] \(R_{1s/a}(u)\) and \(R_{2}\) are expressed as \[\begin{split}\tilde{\Delta}&=\frac{\Delta_{a}\sqrt{u _{a}2\pi}}{\sqrt{K_{a}}},\\ q_{-}&=\sqrt{\frac{2t_{\text{imp}}}{t_{\text{imp }}}+\frac{u_{a}^{2}}{2t_{\text{imp}}^{2}}-\sqrt{\Big{(}\frac{2t_{\text{\perp imp }}}{t_{\text{imp}}}+\frac{u_{a}^{2}}{2t_{\text{imp}}}^{2}\Big{)}^{2}-\frac{(4t _{\text{\perp imp}}^{2}-\tilde{\Delta}^{2})}{t_{\text{imp}}^{2}}}}.\end{split} \tag{21}\] By using equations (12, 13, 18, 19), the final expression of \(F_{2s},F_{2a}\) for \(2t_{\perp\text{imp}}>\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}\) in long-time limit is given by \[\text{Re}[F_{2a}(0,t)]\ \simeq\ -A_{2}\pi t \tag{22}\] \[\text{Re}[F_{2s}(0,t)]\ \simeq\ -\log(t) \tag{23}\] For \(2t_{\perp\text{imp}}<\frac{\Delta_{a}\sqrt{2u_{a}\pi}}{\sqrt{K_{a}}}\) \[\text{Re}[F_{2a}(0,t)]\ \simeq\ -\log(t) \tag{24}\] \[\text{Re}[F_{2s}(0,t)]\ \simeq\ -\log(t) \tag{25}\] leading to the Green's functions decay as \[|G_{a}(0,t)|=e^{-A_{2}\pi t}, \tag{26}\] \[|G_{s}(p,t)|=e^{-\frac{K_{a}U^{2}}{4\pi^{2}t^{2}}(1+\frac{12t_{\text{imp}}^{2} p^{2}}{v^{2}})\log(t)}. 
\tag{27}\] In the anti-symmetric mode of the impurity the Green's function decays exponentially, while in the symmetric mode the Green's function decays as a power-law.
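The contrast between the linear-in-\(t\) and logarithmic-in-\(t\) growth used in Eq. (19), and hence the exponential versus power-law decays of Eqs. (26)-(27), rests on two standard integrals: \(\int_{0}^{\infty}dv\,(1-\cos v)/v^{2}=\pi/2\), so that after the substitution \(v=ut\) the corresponding contribution grows like \(\pi t/2\), while \(\int_{0}^{\Lambda}du\,(1-\cos ut)/u\) grows like \(\log t\) at large \(t\) for a fixed cutoff \(\Lambda\). The short numerical check below uses arbitrary cutoffs chosen only for illustration.

```python
import numpy as np
from scipy.integrate import quad

# (i) integral of (1 - cos v)/v^2 over v > 0 equals pi/2; substituting v = u*t shows
#     the corresponding contribution grows linearly in t -> exponential decay of |G_a|.
val, _ = quad(lambda v: (1.0 - np.cos(v)) / v**2, 0.0, 400.0, limit=800)
print(val, np.pi / 2)   # close to pi/2; the small deficit sits in the 1/v^2 tail

# (ii) with a fixed UV cutoff, the gapless-mode integral grows like log(t)
#      -> power-law decay of |G_s|.
def I_log(t, ucut=10.0):
    v, _ = quad(lambda u: (1.0 - np.cos(u * t)) / u, 0.0, ucut, limit=800)
    return v

for t in (5.0, 10.0, 20.0, 40.0):
    # doubling t increases the integral by ~log(2) once u*t >> 1
    print(t, I_log(t) - I_log(t / 2.0), np.log(2.0))
```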
2305.07743
Observed Dust Surface Density Across Cosmic Times
Our ability to interpret observations of galaxies and trace their stellar, gas, and dust content over cosmic time critically relies on our understanding of how the dust abundance and properties vary with environment. Here, we compute the dust surface density across cosmic times to put novel constraints on simulations of the build-up of dust. We provide observational estimates of the dust surface density consistently measured through depletion methods across a wide range of environments, going from the Milky Way up to $z=5.5$ galaxies. These conservative measurements provide complementary estimates to extinction-based observations. In addition, we introduce the dust surface density distribution function -- in analogy with the cold gas column density distribution functions. We fit a power law of the form: $\log f( \Sigma_{\rm Dust})=-1.92 \times \log \Sigma_{\rm Dust} - 3.65$ which proves slightly steeper than for neutral gas and metal absorbers. This observed relation, which can be computed by simulations predicting resolved dust mass functions through 2D projection, provides new constraints on modern dust models.
Céline Péroux, Annalisa De Cia, J. Christopher Howk
2023-05-12T20:00:01Z
http://arxiv.org/abs/2305.07743v1
# Observed Dust Surface Density Across Cosmic Times ###### Abstract Our ability to interpret observations of galaxies and trace their stellar, gas, and dust content over cosmic time critically relies on our understanding of how the dust abundance and properties vary with environment. Here, we compute the dust surface density across cosmic times to put novel constraints on simulations of the build-up of dust. We provide observational estimates of the dust surface density consistently measured through depletion methods across a wide range of environments, going from the Milky Way up to z=5.5 galaxies. These conservative measurements provide complementary estimates to extinction-based observations. In addition, we introduce the dust surface density distribution function - in analogy with the cold gas column density distribution functions. We fit a power law of the form: \(\log f(\Sigma_{\rm Dust})=-1.92\times\log\Sigma_{\rm Dust}-3.65\) which proves slightly steeper than for neutral gas and metal absorbers. This observed relation, which can be computed by simulations predicting resolved dust mass functions through 2D projection, provides new constraints on modern dust models. keywords: galaxies: abundances - galaxies: evolution - galaxies: high-redshift - Galaxies Magellanic Clouds - quasars: absorption lines - Interstellar Medium (ISM), Nebulae - ISM: dust, extinction ## 1 Introduction Dust grains absorb stellar light in the ultraviolet (UV)-optical wavelengths and re-emit it in the far-infrared, which represents 30%-50% of the radiative output of a galaxy (Roman-Duval et al., 2017). Therefore, our ability to interpret observations of galaxies and trace their stellar, gas, and dust content with redshift across the entire spectral range critically relies on measurements of the dust abundance and properties vary with environment and cosmic times. This in turn requires us to understand the processes responsible for dust formation, destruction, and transport, as well as their associated timescales. A fraction of metals in the interstellar medium of galaxies in both the local and high-redshift Universe resides in microscopic solid particles or dust grains (Field, 1974; Savage and Sembach, 1996; Jenkins, 2009; De Cia et al., 2016). Interstellar dust has manifold impact on the physics and chemistry of the interstellar medium (Zhukovska et al., 2016, 2018; Galliano, 2022) as well as the intra-cluster medium (Shchekinov et al., 2022). Because dust locks some elements away from the gas phase, it affects our measurements of the metallicity of galaxies. One of the most important roles of interstellar grains is that they facilitate the formation of molecular hydrogen, H\({}_{2}\), on their surfaces (Hollenbach and Salpeter, 1971). The H\({}_{2}\) molecule is the main component of molecular clouds, which are the cradle of star formation in most of the Universe (Klessen and Glover, 2016). Because dust absorbs UV emission from young massive stars and re-emits it in the infrared, the spectral energy distribution from dust is one of the primary indicators of star formation (Calzetti et al., 2000). 
Yet, in both local and high-redshift galaxies, the dust production rates in evolved stars (Bland and Hofner, 2012; Riebel et al., 2012) and supernova remnants (Matsuura et al., 2011; Lesniewska and Michalowski, 2019; Slavin et al., 2020) are largely insufficient compared to the dust destruction rates in interstellar shocks (Jones et al., 1996) to explain the dust masses of galaxies over cosmic times (Morgan and Edmunds, 2003; Boyer et al., 2012; Rowlands et al., 2014; Zhukovska and Henning, 2013). This so-called dust budget crisis poses an important challenge to our modelling of dust into a cosmological context (Mattsson, 2021). Observationally, the amount of dust in astrophysical objects has been quantified with a number of different methods probing various dust signature. _Extinction_ refers to the amount of light dimming due to all the material lying along the line of sight between the astrophysical object and the observer. Extinction is therefore an integrated quantity which encapsulates absorption and scattering away from the line-of-sight. The observational determination of extinction requires backlights such as stars, Gamma-Ray Bursts, quasars, or other objects with much smaller angular extent than a galaxy. The extinction at a given wavelength results from a combination of the grain size distribution (Mattsson, 2020), metallicity (Shivaei et al., 2020) and the optical properties of the grains (which is itself dependent on the chemical composition of the grains). Therefore, the extinction scales with dust column density or surface density. Similarly, infra-red emission has been used to probe the dust content of galaxies (Chiang et al., 2021). In particular, significant work has been put into characterizing these quantities in galaxies beyond the Milky Way, assessing observational constraints both on the integrated values (e.g., Remy-Ruyer et al., 2014; De Vis et al., 2019) and spatially-resolved values within galaxies (e.g., Vilchez et al., 2019). These works have placed particular emphasis on the variation of the dust properties with metallicity, stellar mass, star formation rate, and gas content of the galactic environments, as these help shape our understanding of the factors that drive the formation/destruction balance of dust. _Reddening_, expressed through the colour excess, E(B-V), quantifies the differential extinction. _Attenuation_ represents the effect of dust on the light continuum from the geometric mix of stars and dust in galaxies. It thus reflects the net effect on light due to a combination of multiple effects, including extinction, scattering back into the line-of-sight as well as contribution from unobscured stars (Salim and Narayanan, 2020). Reddening is often expressed in terms of UV continuum slope, \(\beta\)(Shivaei et al., 2020). The UV continuum slope depends on the column density of dust along the line-of-sight to the observer that is dimming the UV light of background objects. A fully consistent comparison between depletion-estimated extinction and colour-based estimates is found in Konstantopoulou et al. (2023). The total far-infrared emission is a proxy for the total dust mass. At z\(>\)5, that emission is successfully probed at mm wavelengths with facilities such as ALMA. 
The dust temperature however is less well-constrained in these early times (see Figure 1 of Bouwens et al., 2020), leading to a degeneracy in the current estimates (Faisst et al., 2020; Sommowigo et al., 2020; Bakx et al., 2021; Sommovigo et al., 2022; Chen et al., 2022; Fudamoto et al., 2022; Viero et al., 2022; Drew and Casey, 2022; Ferrara et al., 2022). Alternatively, the dust mass is being derived from the mm-continuum of high-redshift galaxies and the gas mass is then estimated assuming a dust-to-gas ratio - often taken to be the value of the Milky Way (Scoville et al., 2017; Dunne et al., 2022, but see Popping and Peroux, 2022). The ratio of dust emission in infrared to the observed UV emission, known as the infrared excess (IRX), is a measure of the UV dust attenuation. The attenuation/extinction curve/law is characterised by its slope and normalisation (e.g. Calzetti et al., 2000). Two galaxies having the same attenuation curves (i.e. same shape) might still differ in their normalisation (i.e. column density of dust). The two quantities, \(\beta\) and IRX, are often related into one diagram (Meurer et al., 1999; Faisst et al., 2017; Shivaei et al., 2020). The relation is sensitive to a range of interstellar properties including dust geometries, dust-to-gas ratios, dust grain properties, and the spatial distribution of dust. The relation provides a powerful empirical constraint on dust physical properties because it laid the foundation for a straightforward correction of UV emission in galaxies where infrared observations are lacking (especially at higher redshifts), based on the easily observable UV slope (or colour). Dust is made of some of the available metals produce by stars. In the interstellar medium, a fraction of these elements is in the gas, and the rest is locked up in dust. Indeed, most metals are underabundant in the interstellar gas of the Milky Way (Jenkins, 2009), reflecting the amount of dust in our galaxy. _Dust depletion_ can be used to give hints on the dust composition (Savage and Sembach, 1996; Jenkins, 2014; Dwek, 2016; Mattsson et al., 2019; Roman-Duval et al., 2022) and this often indicates that significant amounts of Fe-rich dust should be present in the interstellar medium. Dust measurements based on depletion estimated at UV and optical wavelengths might suffer from a bias where the cluster systems would be obscured at these wavelengths. In that sense, dust depletion provides conservative measurements which can be seen as a lower limit on the total amount of dust in a population. Depletion measurements provide a direct estimate of the dust content (Dwek, 1998; Draine and Li, 2007; Galliano et al., 2018). The depletion can be used to calculate the extinction through the gas which is proportional to the column density of metals (see equation 1 of Savaglio et al., 2003). In their Figure 9, Wiseman et al. (2017) offer a comparison of the two measurements between depletion-estimated extinction and colour-based estimates in a sample of Gamma-Ray Bursts, indicating significant discrepancies highlighting the possible limitations described above (see also De Cia et al., 2013; Zafar et al., 2014). These multiple observational results have triggered a number of simulation efforts. These works come into two main families: i) semi-analytical models which use empirical relation to approximate some of the physical processes at play; and ii) hydrodynamical cosmological simulations which include full treatments of dust production and destruction. 
Contemporary models have reached a new level of realism by including a large number of physical processes (Draine, 2003). Dust _shattering_ refers to the breaking of large grains into small dust grains, due to high-velocity collisions. Conversely, _coagulation_ describes the processes of large grains being made out of smaller entities because of low-velocity collisions. Therefore, both these processes do not change the mass but the size distribution of dust grains. _Sputtering_ refers to dust destruction by shocks, including those produced by Supernovae blasts. _Astration_ reflects the process of dust being absorbed by stars. We note that the same processes can both produce and destroy small/big grains, so that the production sources and destruction sinks are complex processes to simulate. A number of efforts based on semi-analytical models have made predictions on the dust mass of galaxies (Bekki, 2015; Pantoni et al., 2019; Lapi et al., 2020; Gjergo et al., 2020; Dayal et al., 2022) and associated dust mass function (Popping et al., 2017; Triani et al., 2020; Vijayan et al., 2019). As subset of these studies has provided information on the dust surface density specifically (Bekki, 2013, 2015; Osman et al., 2020; Gjergo et al., 2020). In parallel, there have been a number of works introducing dust physical processes within hydrodynamical cosmological models (Moseley et al., 2023). Many report estimates of the dust mass density (Gioannini et al., 2017; Aoyama et al., 2018; Lewis et al., 2023), while others report the dust mass function (McKinnon et al., 2017; Graziani et al., 2020; Li et al., 2019; Hou et al., 2019; Baes et al., 2020). A number of these efforts have made predictions on dust surface densities in particular, as the ones presented here (McKinnon et al., 2016; Trayford and Schaye, 2019). The goal of this work is two-fold. On one hand, we provide observational estimates of the dust surface density consistently measured through depletion methods across a wide range of environments, going from the Milky Way up to z=5.5 galaxies. While previous works have estimated the dust mass in local galaxies (De Vis et al., 2019; Millard et al., 2020; De Looze et al., 2020; Morselli et al., 2020; Casasola et al., 2020; Nanni et al., 2020; Galliano et al., 2021), this study focuses on the dust column density. On the other hand, we introduce the dust surface density distribution function - in analogy with the gas (\(N\)(H i) or \(N\)(H\({}_{2}\))) column density distribution functions (Peroux et al., 2003; Zwaan et al., 2005; Zwaan and Prochaska, 2006; Klitsch et al., 2019; Peroux and Howk, 2020; Szakacs et al., 2022). Spatially resolved simulations predicting the dust-mass function (Pozzi et al., 2020; Millard et al., 2020) can predict the dust surface density distribution function through 2D projection. Thus, the observed dust surface density distribution function potentially offers new constraints on modern dust models. The manuscript is organized as follows: Section 2 presents the methods used in this study. Section 3 details how dust surface density relates to the global physical properties of galaxies. We summarize and conclude in Section 4. Here, we adopt an H\({}_{0}\) = 67.74 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}\) = 0.3089, and \(\Omega_{\Lambda}\) = 0.6911 cosmology. We use the latest solar abundance values from Asplund et al. (2021). 
## 2 Differential Dust Depletion ### Dust Depletion in our Galaxy #### 2.1.1 Methodology The depletion of metals is differential, with some elements showing a higher affinity for incorporation into solid-phase grains than others, based on their chemical properties (Pettini et al., 1997; Vladilo, 2002; Jenkins, 2009a). The differential nature of elemental depletion has traditionally been used to correct the observed abundance for unseen metals. Early works used lightly-depleted elements to derive abundances (e.g., focusing on Zn or S or to a lesser degree Si). More recent works have taken advantage of the patterns of differential depletion to estimate the extent of the dust-depletion correction. These efforts follow the spirit of Vladilo (1998) and Jenkins (2009a), using the observed abundances to study the dust depletion beyond the Milky Way (Jenkins and Wallerstein, 2017; Roman-Duval et al., 2022). Specifically, De Cia et al. (2016) developed a method to characterize dust depletion, \(\delta_{X}\), without assumption on the total gas-dust metallicity. This is achieved through the study of relative abundances of several metals with different nucleosynthetic and refractory properties, as follows. The relative gas-phase abundance of the metals X and Y is written: \([X/Y]=\log(\mathrm{N(X)/N(Y)})-\log(\mathrm{N(X)_{\odot}/N(Y)_{\odot}})\). This approach enables to calculate the depletion without assumptions on the total metallicity of the gas, including metals locked onto dust grains (see also De Cia et al., 2021; De Cia, 2018). We derive the depletions of different elements as follows: \[\delta_{X}=A2_{X}+(B2_{X}\times[Zn/Fe]_{\mathrm{fit}}), \tag{1}\] where \([\mathrm{Zn/Fe}]_{\mathrm{fit}}\) traces the overall amount of dust depletion and is taken from De Cia et al. (2021). This quantity is equivalent to the observed \([\mathrm{Zn/Fe}]\), although it is based on the observations of all available metals. The coefficients A\(2_{X}\) and B\(2_{X}\) are taken from Konstantopoulou et al. (2022). The total dust-corrected metallicity is then computed as: \[[X/H]_{\mathrm{total}}=[X/H]_{\mathrm{observed}}-\delta_{X} \tag{2}\] where \([\mathrm{X/H}]_{\mathrm{total}}\) is the total metallicity including metals locked into dust grains, \([\mathrm{X/H}]_{\mathrm{observed}}\) is the observed abundance of X in the gas phase, and \(\delta_{X}\), i.e. the logarithm of the fraction of X in the gas phase. Given that each element has a different propensity to deplete onto dust grains, the observations of multiple element ratios provide a measure of the interstellar depletions for various elements X. Given estimates of \(\delta_{X}\) and observations of \([\mathrm{X/H}]_{\mathrm{observed}}\), one can derive the quantity: \([\mathrm{X/H}]_{\mathrm{total}}\). #### 2.1.2 Observational Results in the Milky Way In the Milky Way, depletions have been studied for many heavy elements in several hundreds of sightlines through the diffuse neutral medium (Field, 1974; Phillips et al., 1982; Jenkins, 2009b; De Cia et al., 2021). Here, we make use of results from two works: while Jenkins (2009a) assumes solar metallicity (and solar abundance pattern) for the Milky Way, De Cia et al. (2021) report both depletion and metallicity measurements along different line-of-sight. For consistency with other measurements and to avoid complications related to ionisation corrections, we focus here on \(\mathrm{log}\,\mathrm{N(H)}\geq 20.3\) measurements from Jenkins (2009a). 
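As a schematic illustration of Eqs. (1)-(2), the snippet below applies the depletion correction for a single sightline. The \(A2_{X}\) and \(B2_{X}\) coefficients and the \([Zn/Fe]_{\rm fit}\) value are placeholders standing in for the published values of Konstantopoulou et al. (2022), not the actual coefficients.

```python
# Placeholder coefficients (element: (A2_X, B2_X)); the published values of
# Konstantopoulou et al. (2022) should be used in a real application.
coeffs = {"Zn": (0.0, -0.27), "Fe": (0.0, -1.26), "Si": (0.0, -0.75)}

def depletion(element, znfe_fit):
    """Eq. (1): delta_X = A2_X + B2_X * [Zn/Fe]_fit (log10 of the gas fraction of X)."""
    a2, b2 = coeffs[element]
    return a2 + b2 * znfe_fit

def dust_corrected(xh_observed, element, znfe_fit):
    """Eq. (2): [X/H]_total = [X/H]_observed - delta_X."""
    return xh_observed - depletion(element, znfe_fit)

# Hypothetical sightline: [Zn/Fe]_fit = 0.8 and an observed gas-phase [Fe/H] = -1.5
print("delta_Fe     =", depletion("Fe", 0.8))
print("[Fe/H]_total =", dust_corrected(-1.5, "Fe", 0.8))
```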
We also note that the methodology of Jenkins (2009a) follows the one presented for the Magellanic Clouds next as a fixed global metallicity is assumed. These studies indicate that in total about 50% of the metals in the Milky Way's interstellar medium are incorporated into grains (Draine, 2003). The dust-to-gas (DTG) and dust-to-metal (DTM) mass ratios are calculated from these values of the dust depletions (see Section 3). ### Dust Depletion in the Magellanic Clouds #### 2.2.1 Methodology The method that we use to estimate the amount of dust in the Magellanic Clouds is slightly different from the one used to estimate the same quantity in the Milky Way. The depletion for element X is expressed as follows: \[\delta_{X}=[X/H]_{\mathrm{observed}}-[X/H]_{\mathrm{assumed\_total}} \tag{3}\] where \([X/H]_{\mathrm{assumed\_total}}=\log(X/H)_{\mathrm{assumed\_total}}-\log(X/H)_{\odot}\) is the total abundance of element X (gas + dust) which here is assumed to be equal to the abundance of element X in the photospheres of young stars that have formed out of the interstellar medium (as done for the Milky Way by Savage and Sembach, 1996; Jenkins, 2009a; Tchernyshyov et al., 2015; Jenkins and Wallerstein, 2017; Roman-Duval et al., 2019). These works compare the chemical abundances in neutral interstellar gas based on UV spectroscopy to stellar abundances of OB stars and HII regions (e.g. Luck et al., 1998; Hunter et al., 2007; Trundle et al., 2007; Toribio San Cipriano et al., 2017) to estimate \(\delta_{X}\) from equation 3. In principle, there could be variations of the metallicity of the neutral interstellar medium in the Magellanic Clouds. We stress that the approach described in this section is therefore different from the ones reported in other sections for the Milky Way and for high-redshift galaxies. #### 2.2.2 Observational Results in the Large Magellanic Cloud The Large Magellanic Cloud lies just 50 kpc from us (Subramanian and Subramanian, 2009). Its dust content is probed through its extinction map (Furuta et al., 2022), while dust reddening (Chen et al., 2022) and extinction (Gordon et al., 2003) have been measured in both the Large and Small Magellanic Clouds as well as dust emission (Chastenet et al., 2017). Recently, the Large Magellanic Cloud has been the focus of a HST Large Program dubbed "The Metal Evolution, Transport, and Abundance in the Large Magellanic Cloud" (METAL) and introduced in Roman-Duval et al. (2019). Roman-Duval et al. (2021) demonstrates that the depletion of different elements in these data are tightly correlated with the gas (hydrogen) surface density. Roman-Duval et al. (2022) further make a new appraisal of the dust estimates in the Milky Way, Large and Small Magellanic Clouds. The Galaxy is more strongly affected by dust depletion than the Large Magellanic Cloud, and even more than the Small Magellanic Cloud. Nevertheless, the way different elements deplete into dust is very similar between these various environments (De Cia, 2018; Konstantopoulou et al., 2022). Here, we use the data shown in Figure 7 of Roman-Duval et al. (2022), which also include results from Tchernyshyov et al. (2015). We use a constant metallicity throughout the cloud, taken to be [X/H]=\(-0.30\)(Roman-Duval et al., 2022). 
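For comparison with the method of Section 2.1.1, Eq. (3) can be evaluated directly once a total (stellar) abundance is assumed; the column densities and the solar reference below are hypothetical numbers used only to show the bookkeeping.

```python
log_N_Fe, log_N_H = 14.8, 21.3   # hypothetical column densities [cm^-2]
log_FeH_sun = -4.54              # approximate solar log(Fe/H) on the Asplund et al. scale
XH_assumed_total = -0.30         # assumed LMC metallicity, [X/H]

XH_observed = (log_N_Fe - log_N_H) - log_FeH_sun
delta_Fe = XH_observed - XH_assumed_total   # Eq. (3)
print(f"[Fe/H]_observed = {XH_observed:.2f}, delta_Fe = {delta_Fe:.2f}")
```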
#### 2.2.3 Observational Results in the Small Magellanic Cloud Similarly, depletion of gas-phase metal abundances are clearly seen in the Small Magellanic Cloud, situated at about 60 kpc (Subramanian and Subramaniam, 2009), though with a smaller degree of depletion reflecting the lower dust-to-metal mass ratio in this system. Initial works from Tchernyshyov et al. (2015); Jenkins and Wallerstein (2017) provide the first estimates of dust depletion in multiple lines-of-sight. Here, we use the observations displayed in Figure 6 of Roman-Duval et al. (2022a).These works assume a constant metallicity throughout the Small Magellanic Cloud, taken to be [X/H]\(--\)0.70 (Roman-Duval et al. 2022a). ### Dust in High-Redshift Galaxies #### 2.3.1 Methodology Given the presence of dust in a broad range of galaxies, depletion is naturally expected to be also detected in the material traced by high-redshift absorption lines seen against the background light of bright sources. In extragalactic systems, there have been numerous studies of diffuse neutral medium depletions, facilitated by the redshifting of the rest-frame UV absorption lines in the visible range (Ledoux et al. 2002; Vladilo 2002; De Cia et al. 2016; Bolmer et al. 2019a). Similar to the Milky Way, we derive the dust-to-gas ratios for individual absorption systems from the elemental depletions. Because the behavior of each metal species varies (Jenkins 2009a), the estimates of the depletion, \(\delta_{X}\), for each individual element, \(X\), are derived from the dust sequences following the approach highlighted by De Cia et al. (2016, 2018); Peroux and Howk (2020). The methodology used here is therefore identical to the one described in Section 2.1.1. #### 2.3.2 Observational Results in Gamma-Ray Burst Hosts Gamma-Ray Burst events in particular probe the interstellar medium of their galaxy hosts. These systems have been used to probe the dust and metals in Gamma-Ray Burst host galaxies (Savaglio and Fall 2004; Schady et al. 2007; De Cia et al. 2012; Zafar and Moller 2019). Here, we report observations from Bolmer et al. (2019a) which offer a set of dust-depletion estimates based on the correction from De Cia et al. (2016), hydrogen column density and metallicity for a sample of Gamma-Ray Bursts. #### 2.3.3 Observational Results in Quasar Absorbers Quasar absorbers probe the gas inside and around hundreds of foreground galaxies unrelated to the background quasars (Petini et al. 1994; Dessauges-Zavadsky et al. 2004; Rafelski et al. 2012). There is strong evidence for the differential depletion of metals in the quasar absorbers observed both for the neutral gas (De Cia et al. 2016, 2018; Peroux and Howk 2020) as well as for partially-ionized gas (Quiret et al. 2016; Fumagalli et al. 2016). The depletion in quasar absorbers is typically smaller than in the Milky Way (De Cia et al. 2016; Roman-Duval et al. 2022b; Konstantopoulou et al. 2022, 2023). In addition, the abundance ratios change in a similar way from local to high-redshift galaxies (De Cia 2018; Konstantopoulou et al. 2022), indicating that the depletion of dust evolves homogeneously all the way to high-redshift systems. 
These results also imply that grain growth in the interstellar medium is an important process of dust production. Here, we use the values of dust depletion summarised by Peroux & Howk (2020), which are based on results from De Cia et al. (2018) with some additional updates (see footnote 5 of Peroux & Howk 2020). We note that only a small fraction (of the order 10%) of these systems have detections of molecular hydrogen, \(N\)(H\({}_{2}\)). In addition, when detected, the fraction of molecular gas in the cold phase is also found to be small (\(\sim\)0.01%, see Petitjean et al. 2000; Noterdaeme et al. 2008; Balashev et al. 2019). For these reasons, we neglect the molecular hydrogen gas in quasar absorbers and assume N(H)=\(N\)(H i).
Figure 1: **Dust surface density as a function of cosmic times.** In this figure and the following, the dust column densities are derived from depletion measurements performed in absorption at UV wavelengths. The shade of colours from light to dark refers to the Milky Way, the Large and Small Magellanic Clouds, Gamma-Ray Bursts and quasar absorbers at z\(>\)0. We note that Gamma-Ray Burst galaxy hosts preferentially lie above the quasar absorbers. There is also a clear trend of increasing dust surface density with cosmic time with a large scatter at any given redshift, as expected from the building of dust with time.
## 3 Observed Dust Surface Density ### Methodology The characterisation of the bulk statistical properties of dust involves assessing the dust-to-gas (DTG) and dust-to-metal (DTM) mass ratios. The former is the fraction of the interstellar mass locked into dust grains; the latter is the fraction of the metal mass incorporated into the solid phase. The dust-to-gas ratio for an individual element, \(X\), is related to its depletion, \(\delta_{X}\), and its dust-to-metal ratio as follows: \[{\rm DTG}_{X}=(1-10^{\delta_{X}})\,Z_{\rm total}^{X}={\rm DTM}_{X}\,Z_{\rm total}^{X} \tag{4}\] where \(Z_{\rm total}^{X}\) is the intrinsic abundance of X expressed by mass (e.g., Vladilo 2004; De Cia et al. 2016). We derive the \(\delta_{X}\) from the differential depletion of various elements as described in Section 2, and use them to calculate the individual DTG\({}_{X}\) (equation 4). \[{\rm DTG}=\sum_{x}{\rm DTG}_{x} \tag{5}\] To obtain a global DTM ratio expressed for all the elements and in mass fraction, we then average the \({\rm DTM}_{X}\) and weight them by elemental abundances and atomic weight for the Milky Way and the Magellanic Clouds (as also done in Roman-Duval et al. 2021). For the high-redshift galaxies, we derive the global DTM from the DTG as follows: \[Z_{\rm total}=\sum_{x}Z_{\rm total}^{x} \tag{6}\] \[{\rm DTM}={\rm DTG}/Z_{\rm total}=\frac{\sum_{x}\,{\rm DTG}_{x}}{\sum_{x}Z_{\rm total}^{x}} \tag{7}\] For this calculation, we include the 18 elements whose depletion has been characterized by Konstantopoulou et al. (2022), though C, O, Si, Mg, and Fe contribute a major fraction of the total dust mass (see also Konstantopoulou et al. 2023). In the calculation we also include all the volatile metals that have an elemental abundance \(12+\log(X/H)>3\) (Table 1 of Asplund et al. 2009, see also Asplund et al. 2021), most notably N and Ne, which do not contribute to the dust budget, but contribute to the metal budget. We then directly calculate the dust surface density, which provides an observationally-based measurement of the dust quantity.
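As an illustration of equations (4)-(7), the sketch below turns a set of per-element depletions and total abundances by mass into a global DTG and DTM; the element list and the numerical values are placeholders only, and for the Milky Way and the Magellanic Clouds the text instead uses an abundance- and atomic-weight-weighted average of the DTM\({}_{X}\).

```python
def dust_ratios(delta, z_total):
    """Global DTG and DTM from per-element depletions (equations 4-7).

    delta[X]   : depletion of element X in dex (<= 0)
    z_total[X] : total (gas + dust) abundance of X expressed by mass
    """
    dtg_x = {x: (1.0 - 10.0 ** delta[x]) * z_total[x] for x in delta}  # eq. (4)
    dtg = sum(dtg_x.values())                                          # eq. (5)
    z_tot = sum(z_total[x] for x in delta)                             # eq. (6)
    return dtg, dtg / z_tot                                            # eq. (7)

# Illustrative placeholder values only (a handful of the 18 elements):
delta   = {"Fe": -1.2, "Si": -0.5, "Mg": -0.6, "O": -0.1, "C": -0.2}
z_total = {"Fe": 1.3e-3, "Si": 7.0e-4, "Mg": 6.0e-4, "O": 6.0e-3, "C": 2.5e-3}
dtg, dtm = dust_ratios(delta, z_total)
print(f"DTG = {dtg:.2e}, DTM = {dtm:.2f}")
```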
Specifically, we use several UV-based depletion measurements to calculate the dust mass surface density, \(\Sigma_{\rm Dust}\), in various environments. To this end, we couple observations of the total dust-to-gas ratio, DTG, described above with estimates of the column density of hydrogen gas, both in its atomic and molecular phases. We derive the dust surface densities as: \[\Sigma_{\rm Dust}={\rm DTG}\times\Sigma_{\rm Gas}={\rm DTG}\times N({\rm H})\times m_{\rm H}\times\mu\,[{\rm g/cm^{2}}] \tag{8}\] where the total hydrogen column density is N(H)=\(N\)(H i)+2\(N\)(H\({}_{2}\)), the sum of the atomic column density \(N\)(H i) and twice the molecular column density \(N\)(H\({}_{2}\)), expressed in atoms/cm\({}^{2}\). The quantity \(\Sigma_{\rm Gas}\) is the gas surface density, \(m_{\rm H}\) is the hydrogen mass \(m_{\rm H}\)=\(1.67\times 10^{-24}\) g, and \(\mu\) is the mean molecular weight of the gas, which is taken to be 1.3 (76% hydrogen and 24% helium by mass). The dust surface density, \(\Sigma_{\rm Dust}\), is therefore expressed in g/cm\({}^{2}\) or alternatively in M\({}_{\odot}\)/kpc\({}^{2}\). The resulting values for each of the systems are listed in the table available on-line, an excerpt of which is presented in Appendix A.
Figure 2: **Dust surface density as a function of cold gas surface density.** The grey lines represent constant dust-to-gas ratios in steps of 1 dex. The cold gas refers to the sum of neutral atomic, H i, and molecular, \(N\)(H\({}_{2}\)), except for quasar absorbers where the molecular gas is found to be negligible (Ledoux et al. 2003). Overall, the dust column densities follow the total hydrogen column densities with a 4-order of magnitude scatter.
Figure 3: _Top Panel:_ **Dust surface density as a function of gas metallicity.** There is a tight correlation between dust surface densities and the dust-corrected metallicity estimates at all cosmic times. For a given metallicity, quasar absorbers have lower surface density of dust than GRB host galaxies and the Milky Way, which are closer to the denser and colder parts of their galaxies. The Milky Way values from Jenkins (2009) and the Large and Small Magellanic Clouds (Roman-Duval et al., 2022) are assumed to have one global metallicity each. _Bottom Panels:_ **Distribution of dust surface density in quasar absorbers, the Magellanic Clouds, and GRB hosts at fixed metallicity.** The bottom left panel displays observations for the SMC, the bottom middle panel shows data for the LMC, while the bottom right shows measurements from GRB hosts. At fixed metallicity, the distribution in dust surface density in the LMC is wider than for quasar absorbers. Similarly, at a given metallicity, \(\Sigma_{\rm dust}\) is higher in GRB sight lines that are likely closer to the denser and colder parts of their galaxies than quasar absorbers.
### Dust Surface Density as a function of Gas Properties For completeness, we start by briefly summarising the relations between the dust surface densities and galaxy physical properties. This work intentionally refrains from plotting the dust-to-gas ratio against galaxy properties since those relations have been presented elsewhere (e.g. Popping and Peroux, 2022). Figure 1 displays the dust surface density as a function of cosmic time. In this figure and the following ones, the left y-axis displays the dust surface density in units of g per cm\({}^{2}\) and the right y-axis in units of M\({}_{\odot}\)/kpc\({}^{2}\). We stress that all these dust surface densities are derived from depletion measurements performed in absorption at UV wavelengths.
The shade of colours from light to dark refers to the Milky Way, the Large and Small Magellanic Clouds, Gamma-Ray Bursts and quasar absorbers at z\(>\)0. There is a clear trend of increasing dust surface density with cosmic time with a large scatter at any given redshift, as expected from the build up of metals and associated dust with time. Gamma-Ray Burst host galaxies overall have higher dust surface densities than quasar absorbers at the same redshift. This is consistent with Gamma-Ray Bursts occurring in the inner regions of their host galaxies, while quasar absorbers probe peripheral regions of the intervening galaxies where the sky cross-section is the largest (Prochaska et al., 2007; Fynbo et al., 2008). Figure 2 shows the dust surface density as a function of cold gas surface density. The cold gas refers to the sum of neutral atomic, H i, and molecular, \(N\)(H\({}_{2}\)), except for quasar absorbers where the molecular gas is found to be negligible (Ledoux et al., 2003). The grey lines show constant dust-to-gas ratios in steps of 1 dex. The dust column densities roughly follow the total hydrogen column densities, though with a 4-order of magnitude spread at a given \(\Sigma_{\rm Gas}\). Roman-Duval et al. (2014) used resolved _Herschel_ infra-red maps of the Magellanic Clouds in combination with 21cm, CO and H\(\alpha\) observations to infer the relation between the atomic, molecular and ionised gas and the dust surface density. The authors report a clear trend of increasing dust surface density with increasing gas surface density in the diffuse interstellar medium, albeit for a medium with consistent metallicity. Our depletion-derived results show a similar scaling in Figure 2. Roman-Duval et al. (2017) further explore the relation based on IRAS and Planck observations and find an increase by a factor of three of the dust-to-gas ratio going from the diffuse to the dense interstellar medium, in line with elemental depletion results. In the Milky Way (Jenkins, 2009a) and Small Magellanic Cloud (Jenkins and Wallerstein, 2017), the fraction of metals in the gas phase decreases with increasing hydrogen volume density and column density, albeit at different rates for different elements. Gamma-Ray Burst host galaxies display low dust surface densities in comparison with quasar absorbers while still bearing large amounts of gas (Jakobsson et al., 2006; Fynbo et al., 2009), even at relatively high gas surface densities. One possible reason for this difference is that Gamma-Ray Burst host galaxies have overall low metallicities and low dust content (Perley et al., 2016; Kruhler et al., 2015; Savaglio et al., 2009), although dustier systems do exist (Perley et al., 2011, 2013). We cannot exclude that systems with high dust surface density (even in the low metallicity surface density regime) are missing from the current sample based on UV dust-depletion observations. Interestingly, there is a dearth of systems with high gas surface density and low dust surface density, which cannot be attributed to an observational bias, because systems lying in this parameter space, if they exist, would have small extinction. The top panel of Figure 3 displays the dust surface density as a function of gas metallicity. There is a relatively tight correlation between the dust column densities and the dust-corrected metallicity estimates at all cosmic times. Works from Jenkins (2009a); Roman-Duval et al.
(2022b) report dust-depletion measurements assuming a global metallicity of [X/H]=0 for the Milky Way, [X/H]=\(-\)0.30 for the Large Magellanic Cloud and [X/H]=\(-\)0.70 for the Small Magellanic Cloud. Additionally, the top panel of Figure 3 shows a clear correlation between \(\Sigma_{\rm dust}\) and metallicity. These results are in line with the dust-to-metal ratio increasing with metallicity (Wiseman et al., 2017; De Cia et al., 2013; Peroux and Howk, 2020). The bottom panels of Figure 3 illustrate the distribution in dust surface density _at fixed metallicity_ materialised by the grey shaded areas in the top panel. In the Large Magellanic Clouds in particular, the dispersion in \(\Sigma_{\rm dust}\) is larger than in quasar absorbers for a given metallicity, providing further evidence that the metallicity within the Clouds might vary. Likewise, we note that the Magellanic Clouds dust distributions are skewed towards higher values than quasar absorbers with the similar metallicity. This effect might be related to (i) quasar absorbers probing random position in the galaxy, and therefore being more likely to probe the outermost parts, and (ii) Magellanic Clouds are infalling satellites, so the gas compression coming from the ram pressure might boost dust formation by increasing the densities. We stress that further differences in \(\Sigma_{\rm dust}\) might be not revealed here because of the assumption of fixed metallicity in the Magellanic Clouds. The right most panel of Figure 3 shows a comparison of quasar absorbers with Gamma-Ray Bursts in the range -1.5\(<\)[M/H]\(<\)-0.5 indicating that the latter have larger dust surface density values. Indeed, at a given metallicity, \(\Sigma_{\rm dust}\) is higher in systems such as Gamma-Ray Bursts that are closer to the denser and colder parts of their galaxies, where the physical conditions (high density, low temperature and high pressure) favour the formation of molecules (Blitz and Rosolowsky, 2006) and likely dust. ### Dust Surface Density Distribution Function Next, we use these observations to calculate the dust surface density distribution function. To this end, we propose to use an analogue of the gas (\(N\)(H i) or \(N\)(H\({}_{2}\))) column density distribution functions (Peroux et al., 2003; Zwaan et al., 2005; Zwaan and Prochaska, 2006; Klitsch et al., 2019; Peroux and Howk, 2020; Szakacs et al., 2022). We express the function as follows: \[f(\Sigma_{\rm Dust})=\frac{\mathcal{N}}{\Delta\Sigma_{\rm Dust}[{\rm g/cm^{2} }]} \tag{9}\] where \(\mathcal{N}\) denotes the number of absorbers in the dust surface density bin \(\Delta\Sigma_{\rm Dust}\)(see also Churchill et al., 2003; Richter et al., 2011). Since the function is not normalised by the redshift path, it depends on the number of sightlines in the survey. Our quasar absorber sample comprises a total of 247 systems. The resulting function for quasar absorbers is displayed in Figure 4. The data appear to follow a power-law distribution, with the turn-over at small column densities. This turn-over kicks in at dust surface densities below \(\log\Sigma_{\rm Dust}\leq-6\). We interpret this feature as due to incompleteness in the observed sample. We stress that the observations reported here are focused on the larger \(N\)(H i) column density quasar absorbers. It is likely that at low gas surface density, there are a number of lower dust surface density systems which are currently not included in the sample. 
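The sketch below illustrates how, starting from per-system DTG and N(H) values, equation (8) gives a dust surface density for each absorber and equation (9) bins the sample into the distribution function; the power-law fit of equations (10)-(11) presented next is included for completeness. The input arrays are random stand-ins for the 247 quasar-absorber measurements, and the binning is an arbitrary choice.

```python
import numpy as np

M_H, MU = 1.67e-24, 1.3                       # hydrogen mass [g], mean molecular weight

# Placeholder per-system values standing in for the 247 quasar absorbers:
rng = np.random.default_rng(0)
log_n_h = rng.normal(21.0, 0.4, 247)          # log N(H) [atoms/cm^2]
log_dtg = rng.normal(-2.5, 0.8, 247)          # log dust-to-gas ratio

# Equation (8): per-system dust surface density in g/cm^2.
log_sigma = log_dtg + log_n_h + np.log10(M_H * MU)

# Equation (9): number of systems per linear bin of Sigma_Dust.
edges = np.arange(-8.0, -2.0, 0.5)            # bins in log Sigma_Dust
counts, _ = np.histogram(log_sigma, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
f = counts / np.diff(10.0 ** edges)

# Power-law fit above the incompleteness turn-over (cf. equations 10-11 below):
good = (counts > 0) & (centers >= -6.0)
slope, intercept = np.polyfit(centers[good], np.log10(f[good]), 1)
print(f"log f(Sigma_Dust) = {slope:.2f} log Sigma_Dust + {intercept:.2f}")
```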
Indeed, the sample is limited to strong quasar absorbers with log \(N\)(H i)\(>\)20.3. Figure 2 shows that quasar absorbers with log \(N\)(H) \(\leq\) 20.3 (top x-axis) will mostly have \(\log\Sigma_{\rm Dust}\leq\) -6 (left y-axis). We note that these systems are not included by construction in the dust surface density distribution function presented here. For this reason, we choose to fit the function without taking the low \(\Sigma_{\rm Dust}\) values into account and with a simple power law of the form: \[f(\Sigma_{\rm Dust})=C\Sigma_{\rm Dust}^{-\delta} \tag{10}\] which we rewrite as: \[\log f(\Sigma_{\rm Dust})=-\delta\times\log\Sigma_{\rm Dust}+\log C=-1.92\times\log\Sigma_{\rm Dust}-3.65 \tag{11}\] for surface densities with \(\log\Sigma_{\rm Dust}\geq-6\) in units of g/cm\({}^{2}\). Here, \(\log f(\Sigma_{\rm Dust})\) is the number of systems with dust surface density \(\Sigma_{\rm Dust}\) per unit of dust surface density \(\Delta\Sigma_{\rm Dust}\). It is therefore expressed in number of systems cm\({}^{2}\)/g. C is the normalisation factor. The fit is also shown in Figure 4. Rees (1988) demonstrated that, assuming randomly distributed lines-of-sight through spherical isothermal halos, the column density distribution function, \(f\left(N\right)\), follows a power law of slope \(\delta\)=5/3. Interestingly, Kim et al. (2001) report a slope \(\delta\sim\)1.5 for HI absorbers and Churchill et al. (2003); Richter et al. (2011) measure slopes ranging \(\delta\)=1.5-2.0 for MgII, FeII, MgI and CaII, respectively. Therefore, the slope of the dust surface density distribution function, \(\delta\)=1.92\(\pm\)0.13, is possibly steeper than for neutral gas and on the high end with respect to metal absorbers. Finally, we note that this observed relation, which simulations predicting dust mass functions (Pozzi et al., 2020; Millard et al., 2020) will be able to compute through 2D projection, provides new constraints on modern dust models (McKinnon et al., 2017; Graziani et al., 2020; Li et al., 2019; Hou et al., 2019; Baes et al., 2020). We caution that these UV-depletion results could potentially be incomplete due to observational biases affecting, e.g., the dust surface density distribution. For example, in the Milky Way and the Magellanic Clouds, UV sight-lines are biased toward the less reddened stars/lower surface densities by the sensitivity of the ultra-violet telescopes. Similarly, such effects apply to Gamma-Ray Bursts and quasar samples so that the dustier objects might be missed from the current sample. Several results (Ellison et al., 2009) indicate that these effects are minimal, but one cannot fully exclude that dustier objects exist and have been missed from the dust-depletion studies presented here. ## 4 Conclusions In this work we have looked at an observable, namely \(\Sigma_{\rm Dust}\), to put novel constraints on simulations of dust. Indeed, reproducing dust masses over cosmic times requires that dust grow in the interstellar medium, and therefore that the dust properties change significantly with environment, particularly density. To this end, we gathered observations from the Milky Way, Large and Small Magellanic Clouds and high-redshift galaxies traced by Gamma-Ray Burst host galaxies and quasar absorbers. By putting all these results together we can make a new appraisal of the dust surface density (dust column density) expressed in g per cm\({}^{2}\) or alternatively in M\({}_{\odot}\)/kpc\({}^{2}\) across cosmic times measured through dust depletion.
We also contrast the observational measurements with recent hydrodynamical simulations. Our main results are:
* The dust surface densities increase with cosmic time with a large scatter at any given redshift.
* The dust surface densities are also a function of the total gas surface densities in the same systems, with the dust surface density increasing with total hydrogen surface density, although the scatter in the relation is 4 orders of magnitude.
* There is a tight correlation between the dust column densities and the dust-corrected metallicity estimates at all cosmic times.
* We compute the dust surface density distribution function, in analogy with the cold gas column density distribution function. We note a turn-over at low dust surface densities. We interpret this feature as due to incompleteness in the sample. We provide a fit to the observed distribution of the form: \(\log f(\Sigma_{\rm Dust})\) [number of systems per unit dex] = \(-1.92\times\log\Sigma_{\rm Dust}-3.65\), which proves steeper than for neutral gas and metal absorbers.
Figure 4: **Dust surface density distribution function.** This function is computed in analogy with the gas column density distribution function. We note a turn-over at low dust surface densities to the left of the dotted grey line. We interpret this feature as due to incompleteness in the sample. Indeed, the data plotted here are focused on the larger \(N\left(\rm H\,\,\right)\) column density quasar absorbers. We fit the high dust surface density values, \(\log\Sigma_{\rm Dust}\geq-6\), with a simple power law of the form: \(\log f\left(\Sigma_{\rm Dust}\right)=-1.92\times\log\Sigma_{\rm Dust}-3.65\). This observed relation, which can be computed by spatially resolved simulations predicting dust mass functions through 2D projection, provides new constraints on modern dust models.
## Data Availability Data directly related to this publication and its figures is available on request from the corresponding author. ## Acknowledgements We are grateful to Omima Osman, Enrico Garaldi, Qi Li, Julia Roman-Duval and Sandra Savaglio for helpful comments. We thank the anonymous referee for their suggestions which improved the results presented here. This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #564 (The Cosmic Baryon Cycle from Space). ADC acknowledges support from the Swiss National Science Foundation under grant 185692. JCH recognizes support from the US National Science Foundation through grant AST-1910255.
2307.09117
Synthesized complex-frequency excitation for ultrasensitive molecular sensing
Detecting trace molecules remains a significant challenge. Surface-enhanced infrared absorption (SEIRA) based on plasmonic nanostructures, particularly graphene, has emerged as a promising approach to enhance sensing sensitivity. While graphene-based SEIRA offers advantages such as ultrahigh sensitivity and active tunability, intrinsic molecular damping weakens the interaction between vibrational modes and plasmons. Here, we demonstrate ultrahigh-sensitive molecular sensing based on synthesized complex-frequency waves (CFW). Our experiment shows that CFW can amplify the molecular signals (~1.2-nm-thick silk protein layer) detected by graphene-based sensor by at least an order of magnitude and can be universally applied to molecular sensing in different phases. Our approach is highly scalable and can facilitate the investigation of light-matter interactions, enabling diverse potential applications in fields such as optical spectroscopy, metasurfaces, optoelectronics, biomedicine and pharmaceutics.
Kebo Zeng, Chenchen Wu, Xiangdong Guo, Fuxin Guan, Yu Duan, Lauren L Zhang, Xiaoxia Yang, Na Liu, Qing Dai, Shuang Zhang
2023-07-18T10:04:56Z
http://arxiv.org/abs/2307.09117v1
# Synthesized complex-frequency excitation for ultrasensitive molecular sensing ###### Abstract Detecting trace molecules remains a significant challenge. Surface-enhanced infrared absorption (SEIRA) based on plasmonic nanostructures, particularly graphene, has emerged as a promising approach to enhance sensing sensitivity. While graphene-based SEIRA offers advantages such as ultrahigh sensitivity and active tunability, intrinsic molecular damping weakens the interaction between vibrational modes and plasmons. Here, we demonstrate ultrahigh-sensitive molecular sensing based on synthesized complex-frequency waves (CFW). Our experiment shows that CFW can amplify the molecular signals (\(\sim\)1.2-nm-thick silk protein layer) detected by graphene-based sensor by at least an order of magnitude and can be universally applied to molecular sensing in different phases. Our approach is highly scalable and can facilitate the investigation of light-matter interactions, enabling diverse potential applications in fields such as optical spectroscopy, metasurfaces, optoelectronics, biomedicine and pharmaceutics. ## Introduction Sensors have emerged as indispensable analytical tools across a wide range of important fields, encompassing environmental monitoring, food safety, and public health[1, 2, 3, 4, 5]. They facilitate early disease diagnosis, personalized medicine, and rapid detection of toxic agents[6, 7, 8, 9]. However, significant challenges still exist in the effective detection of trace molecules, hindering the further development of sensors in these applications[6, 10, 11]. Many efforts have been made to improve the sensor sensitivity. Among the various methods explored, optical biosensors based on surface-enhanced infrared absorption (SEIRA) have attracted much attention due to their label-free nature, molecular specificity, and noninvasive performance[12, 13, 14, 9]. Through strong light-matter interactions achieved by surface-plasmon polaritons (SPPs), SEIRA can enhance the detection sensitivity of the molecular vibrational fingerprints in the infrared (IR) region[15, 16, 17]. SEIRA was first demonstrated in 1980 using Ag and Au thin films[18]. However, it was not widely adopted due to the limitation of nanofabrication techniques at the time[19]. The advancement of nanofabrication and new plasmonic materials (e.g., graphene, Ge, Si, oxides, and carbon nanotubes (CNTs)) have led to the revitalization of the research in this area in recent years[13]. In particular, plasmonic nanostructures have been proved to possess much greater enhancement of biomolecule signals than metallic thin films[17]. Compared to metal-based SEIRA, strong field confinement supported by two-dimensional (2D) Dirac fermion electronic states enables graphene-based SEIRA with excellent performance in molecular characterization for gas[20] and solid phase sensing[1, 21, 22]. Graphene can also enhance molecular IR absorption in aqueous solution[23]. Most importantly, active tunability of graphene plasmons broadens their detection frequency range for different molecular vibrational modes by changing the doping level via gate voltage[16, 24, 25]. These advantages make graphene-based SEIRA a unique platform for single-molecule detection. However, the intrinsic molecular damping largely reduces the interaction between the vibrational modes and plasmons. As a result, at lower concentrations, the spectra of plasmon-enhanced molecular signals become weaker and broader, and ultimately are overshadowed by noise. 
One way to compensate for molecular damping is to add optical gain materials[26, 27, 28]. However, this requires a complex setup which may not be compatible with the detection system. In addition, gain materials usually increase instability and noise[29, 30]. Another way is to use complex-frequency waves (CFW); theoretical studies have proved that CFW with temporal attenuation can restore information loss due to material losses[31, 32]. However, producing CFW in real optical systems remains a challenging task. A novel method for synthesizing CFW has recently been proposed[33]. This method involves treating CFW as a coherent combination of multiple real-frequency waves based on the concept of the Fourier transform. This multi-frequency approach has been recently applied to superimaging[33] and shown remarkable improvement in the imaging resolution, but its application to sensors has not yet been attempted. Here, we demonstrate dramatic signal enhancements of the molecular vibrational fingerprints governed by synthesized CFW. We first theoretically confirm that the truncated CFW synthesized by discretized real frequency waves in a limited range can effectively compensate for molecular damping, significantly improving trace molecular signals (\(\sim\)1.2-nm-thick silk protein layer). Synthesized CFWs are successfully applied to enhancing the molecular signals in the mid-IR extinction spectrum for biomolecules under different conditions, including direct measurement of multiple vibrational modes of deoxynivalenol (DON) molecules and graphene-based SEIRA of proteins in the solid phase and aqueous solution. The results show that our method can improve the sensitivity of various sensors by almost an order of magnitude and advance the quantitative detection of molecules. ## Theoretical mechanism Without loss of generality, we model a molecular layer using the Drude-Lorentz dispersion, \[\varepsilon(\omega)=1+\sum_{m}\frac{{\omega_{\rm pm}}^{2}}{{\omega_{m}}^{2}-{ \omega}^{2}-{\rm i}\gamma_{m}\omega} \tag{1}\] For simplicity, we assume the molecular layer has two vibrational modes, where the plasma frequencies \(\omega_{\rm p1}=\omega_{\rm p2}=128\;{\rm cm}^{-1}\), the damping rates \(\gamma_{1}=\gamma_{2}=\gamma_{\rm M}=60\;{\rm cm}^{-1}\), and the resonant frequencies \(\omega_{1}=1553\;{\rm cm}^{-1}\), \(\omega_{2}=1666\;{\rm cm}^{-1}\). Using finite-element method (FEM) simulation, we obtain the extinction spectra of the molecular layer (the light blue curve in Fig. 1a). Obviously, the key to making the resonant peaks more pronounced is to reduce the damping rate \(\gamma_{\rm M}\). If \(\omega\) is replaced by a complex frequency \(\widetilde{\omega}=\omega-{\rm i}\gamma_{\rm M}/2\), the permittivity of the molecular layer becomes a purely real value \(\varepsilon(\widetilde{\omega})=1+\frac{{\omega_{\rm p1}}^{2}}{({\omega_{1}}^ {2}-{\omega}^{2}-{\gamma_{\rm M}}^{2}/4)}+\frac{{\omega_{\rm p1}}^{2}}{({ \omega_{2}}^{2}-{\omega}^{2}-{\gamma_{\rm M}}^{2}/4)}\). This shows that CFW with suitable temporal attenuation can fully compensate for the damping of molecular vibrational modes. Due to the difficulty of generating CFW directly, we use a new method to synthesize the truncated CFW expressed as \(E_{T}(t_{0})=E_{0}{\rm e}^{-{\rm i}\widetilde{\omega}t_{0}}\theta(t_{0})\), where \(\widetilde{\omega}=\omega-{\rm i}\tau/2\), and \(\tau>0\) represents temporal attenuation. 
\(\theta(t)\) is the time truncation function to avoid energy divergence, where \(\theta(t_{0})=1\) for \(t_{0}\geq 0\), and \(\theta(t_{0})=0\) for \(t_{0}<0\). Note that the time truncation will lead to appearance of sidebands around the resonances in the complex frequency spectra, which can be eliminated via appropriate average of the signal over time (Supplementary Information Note II). Based on the Fourier transform, \(E_{T}(t_{0})\) can be expanded into the integral of the real frequency components: \(E_{T}(t_{0})=\frac{E_{0}}{2\pi}\int_{-\infty}^{+\infty}\frac{1}{{\rm i}( \widetilde{\omega}-{\omega}^{\prime})}{\rm e}^{-{\rm i}{\omega}^{\prime}t_{0}} {\rm d}{\omega}^{\prime}\), where \(1/{\rm i}(\widetilde{\omega}-{\omega}^{\prime})\) is the Fourier coefficient. Naturally, any response in the system excited by the truncated CFW can be expressed as the integral of the real frequency response \(F(\widetilde{\omega})\approx\int_{-\infty}^{+\infty}F({\omega}^{\prime}) \frac{1}{{\rm i}(\widetilde{\omega}-{\omega}^{\prime})}{\rm e}^{{\rm i}( \widetilde{\omega}-{\omega}^{\prime})t_{0}}{\rm d}{\omega}^{\prime}/2\pi\) in the quasi-steady state. In reality, for a sufficiently wide spectrum range, the integral can be discretized as, \[F(\widetilde{\omega})\approx\sum_{n}F(\omega_{n})\frac{1}{{\rm i}(\widetilde {\omega}-\omega_{n})}{\rm e}^{{\rm i}(\widetilde{\omega}-\omega_{n})t_{0}} \Delta\omega/2\pi \tag{2}\] Subsequently, we use equation (2) to calculate the extinction of the molecular layer at CFW. The extinction is represented as \(I(\omega)=1-|t_{\rm M}|^{2}\), where \(t_{\rm M}=t/t_{s}\), and \(t,t_{s}\) are the transmission coefficients through the substrate with and without the molecular layer, respectively. For thin layer systems, \(t_{\rm M}\) can be approximated as[34], \[t_{\rm M}(\omega)\approx\frac{1}{1-{\rm i}P(\omega)} \tag{3}\] Where \(P(\omega)=\frac{\chi_{e}(\omega)\omega d}{(n_{\rm s}+1)c}\), \(n_{\rm s}\) is the refractive index of the substrate, \(d\) is the molecular layer thickness and \(\chi_{e}(\omega)\) is the effective susceptibility. Considering the difficulty of phase measurement in practice, we can extract the phase \(\arg\left(t_{\rm M}\right)\) from the amplitude \(|t_{\rm M}|\) through Kramers-Kronig relations[35] (see details in Figure S1), \[\arg\bigl{(}t_{\rm M}(\omega)\bigr{)}=-\frac{1}{\pi}{\cal P}\int_{\mathbb{R}} \ \frac{\ln|t_{\rm M}(\omega)|}{\omega-\omega^{\prime}}\ {\rm d}\omega^{\prime} \tag{4}\] and then \(P(\omega)\) can be deduced from \(t_{\rm M}\) using equation (3). \[P(\omega)={\rm i}(\frac{1}{t_{\rm M}(\omega)}-1) \tag{5}\] Note that, similarly to equation (2), equation (4) can be discretized in actual calculation. Hence, the extinction \(I(\widetilde{\omega})\) for a CFW can be obtained by calculating the response \(P(\widetilde{\omega})\) from \(P(\omega)\), \[P(\widetilde{\omega})\approx\sum_{n}P(\omega_{n})\frac{1}{{\rm i}(\widetilde {\omega}-\omega_{n})}\,{\rm e}^{{\rm i}(\widetilde{\omega}-\omega_{n})t_{0}} \Delta\omega/2\pi \tag{6}\] \[I(\widetilde{\omega})=1-\frac{1}{|1-{\rm i}P(\widetilde{\omega})|^{2}} \tag{7}\] Here we do not directly calculate \(t_{\rm M}(\widetilde{\omega})\) to obtain \(I(\widetilde{\omega})\) because \(|t_{\rm M}(\omega)|\to 1\) as \(\omega\to\infty\), which would cause relatively large errors in equation (2) from outside the finite frequency range. On the contrary, \(|P(\omega)|\to 0\) as \(\omega\to\infty\), making the numerical errors of \(P(\widetilde{\omega})\) smaller. 
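A minimal numerical sketch of this synthesis procedure is given below: the phase is recovered from the measured amplitude with a discretized Kramers-Kronig sum (equation 4), \(P(\omega)\) is formed via equation (5), and the real-frequency components are coherently summed into \(P(\widetilde{\omega})\) and the extinction \(I(\widetilde{\omega})\) (equations 6-7). The frequency grid, the toy \(|t_{\rm M}|\) spectrum and the crude principal-value handling are placeholders, and the time-averaging discussed next is omitted.

```python
import numpy as np

def kk_phase(w, ln_abs_t):
    """Discretized Kramers-Kronig estimate of arg(t_M) from ln|t_M| (cf. eq. 4)."""
    dw = w[1] - w[0]
    phase = np.empty_like(w)
    for i, wi in enumerate(w):
        diff = wi - w
        diff[i] = np.inf                      # crude handling of the principal value
        phase[i] = -dw * np.sum(ln_abs_t / diff) / np.pi
    return phase

def synthesize(w, P, w_c, t0=0.0):
    """Discretized synthesis of P at the complex frequency w_c (cf. eqs. 2 and 6)."""
    dw = w[1] - w[0]
    weights = np.exp(1j * (w_c - w) * t0) / (1j * (w_c - w))
    return np.sum(P * weights) * dw / (2.0 * np.pi)

# Toy amplitude spectrum on a real-frequency grid (cm^-1); placeholders only.
w = np.linspace(1200.0, 2000.0, 801)
abs_t = 1.0 - 0.05 * np.exp(-((w - 1553.0) / 40.0) ** 2)
t_M = abs_t * np.exp(1j * kk_phase(w, np.log(abs_t)))
P = 1j * (1.0 / t_M - 1.0)                    # eq. (5)
P_cfw = synthesize(w, P, 1553.0 - 1j * 30.0)  # complex frequency w - i*tau/2
print(1.0 - 1.0 / abs(1.0 - 1j * P_cfw) ** 2) # extinction at the complex frequency, eq. (7)
```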
Further, we time-average \(P(\widetilde{\omega})\) to reduce the error caused by the truncation of the CFW, the limited frequency range and discretization of frequencies (see details in Supplementary Material Note II). Accordingly, we set \(\widetilde{\omega}=\omega-{\rm i}\gamma_{\rm M}/2\) to demonstrate the enhancement effect of CFW. Compared to the original signal \(I(\omega)\), the resonant peaks of \(I(\widetilde{\omega})\) (the dark blue curve in Fig. 1a) are significantly narrowed, which means that synthesized CFW can directly enhance the molecular vibrational fingerprints without additional assistance. At very low concentrations, the absolute response of the molecular layer would be too small to measure, so SEIRA is used to solve this issue. Here, we consider a graphene nanoribbon array with a period of \(\Lambda=200\) nm, and ribbon width \(w=80\) nm, where the surface conductivity of graphene \(\sigma\) can be calculated by the Kubo formula [35, 36, 37] (see Methods). The resonant frequency of graphene plasmon (GP) \(\omega_{\rm GP}\) is \(1553\) cm\({}^{-1}\) for the doped graphene Fermi energy \(E_{\rm f}=0.5\) eV (the light green curve in Fig. 1b). We assume that the molecular layer covering the graphene nanoribbon and study the near-field coupling between GP and molecular vibrational modes. The light red curve in Fig. 1c shows that the signals from such a thin molecular layer in the extinction spectra are very weak, even with the enhancement provided by GP. This phenomenon can be understood in terms of coupled harmonic oscillators [39]. Plasmon-phonon coupling generates two new hybrid modes, whose splitting distance and damping depend on their coupling strength and original damping rates \(\gamma_{\rm GP}\) and \(\gamma_{\rm M}\). Specially, the damping rates of the hybrid modes are equal to \((\gamma_{\rm GP}+\gamma_{\rm M})/2\) when the resonant frequencies of the plasmon and the molecular mode coincide (\(\omega_{\rm GP}=\omega_{\rm 1}\)). In the case of low concentrations, the hybrid-mode linewidth characterized by \(\gamma_{\rm GP}\) and \(\gamma_{\rm M}\) is relatively larger than the splitting distance caused by the weak coupling strength, resulting in a large overlap between the two hybrid-mode broad peaks and a small dip that is difficult to detect in the extinction spectra. Similarly, we use synthesized CFW to recover the molecular signals. Note that even if the decay constant of graphene \(\Gamma\) is generally much larger than \(\gamma_{\rm M}/2\), CFW can still partially compensate for \(\gamma_{\rm GP}\) (the dark green curve in Fig. 1b), thereby narrowing the linewidths of hybrid modes. It is numerically confirmed that owing to the compensation by synthesized CFW, the originally weak signals are greatly enhanced, and phonon-induced transparency (PIT) structure [40] (\(\omega_{\rm 1}\)) and Fano structure [41] (\(\omega_{\rm 2}\)) are clearly visible in the spectra (The dark red curve in Fig. 1c). We also studied the effect of CFW under different Fermi energies. For the graphene nanoribbon, the resonance of GP gradually shifts to higher frequency with the increase of \(E_{\rm f}\). Due to the relatively large damping, plasmon-phonon coupling has almost no effect on the linear dispersion of GP, such a weak perturbation produces almost no visible dip in the extinction spectrum (Fig. 1d). 
If we set \(\gamma_{\mathrm{M}}\) close to 0, GP dispersion will be strongly affected near the resonance frequencies of the molecular vibrational modes and GP, as shown by the strong anti-crossing behavior (Fig. 1e). We next obtain the spectrum at CFW (Fig. 1f) by applying Eq. 2 to the spectrum at real frequencies (Fig. 1d). The CFW spectrum exhibits strong anti-crossing behavior at the molecular vibrational resonance frequencies, similar to the case of negligible loss (Fig. 1e). Thus, synthesized CFW can effectively enhances GP-based molecular signals through the damping compensation mechanism. In addition, obtaining the phase \(\mathrm{arg}\left(t_{\mathrm{M}}\right)\) by Kramers-Kronig relations facilitate the applicability of the proposed synthesized CFW method. ### Enhancement of molecular fingerprint signals Based on the above theoretical analysis, we take measurements of molecular infrared spectra to showcase the effectiveness of the synthesized CFW method in enhancing sensitivity. In the experiment, Fourier transform infrared (FTIR) spectroscopy is used to measure the molecular infrared vibrational fingerprint spectrum (details in Methods), where the infrared beam excites the molecular vibrations and is absorbed at the specific resonance frequencies (Fig. 2a). We start with deoxynivalenol (DON) molecules, a mycotoxin from Fusarium fungi found in cereals which poses health risks to humans and animals. The optical micrograph (Fig. 2b) illustrates the preparation of DON samples on a Si substrate (details in Methods). It should be noted that the granular shape of DON molecules is mainly due to solvent evaporation and intermolecular interactions. Due to the large number of molecules in DON particles, the signal intensity after infrared spectroscopy measurement is relatively strong, as shown in the grey curve in Fig. 2c. However, the spin-coated molecules exhibit disorder and have a low-quality factor, resulting in a significant broadening of C-O-H bending modes (\(\delta\)(C-O-H)) which have fingerprints between 1400-1455 cm-1(as indicated by the dashed lines), making them difficult to discern in the extinction spectra. We employed CFW in conjunction with the Kramers-Kronig relation to process the original extinction spectrum (grey curve), obtaining the new spectrum (black curve) in Fig. 2c, clearly displaying the narrowing of the spectral linewidth and enhanced characteristic intensity. This enhancement has allowed us to identify molecular structures and properties more precisely and accurately, contributing significantly to our understanding of molecular spectroscopy. ### Enhancing the sensitivity of graphene-based sensors When the molecular layer is thin, or the number of molecules is very small, traditional infrared spectroscopy struggles to effectively probe molecular signals. Currently, graphene-based SEIRA is one of the most sensitive enhanced infrared spectroscopy methods. For implementation, we first soak a graphene-based infrared sensor in a silk protein solution at a concentration of 10 \(\upmu\)g/mL to enable the protein molecules to adhere to the surface of graphene nanoribbons. The examination of how CFW techniques can enhance the sensors' detection sensitivity is then carried out. Fig. 3a illustrates the schematic of the characterization of GP-enhanced molecular vibrational signal on the periodic graphene nanoribbons. The principle of graphene-based SEIRA is as follows: an infrared light beam irradiates the periodic graphene nanoribbons (Fig. 
3b) to excite GP to achieve electromagnetic field enhancement; then, through dynamic back-gate tuning, the resonant frequency of GP is adjusted to be close to the molecular characteristic fingerprint vibrational frequency, resulting in phonon-induced transparency (PIT) in the extinction spectra, as shown in Fig. 3c. The dashed lines indicate the characteristic mode of the protein, representing the Amide I band (1626 cm\({}^{-1}\)). We then investigate the extinction spectrum of GP with different thicknesses ( \(\sim\)1.2 nm, \(\sim\)2.1 nm, \(\sim\)3.0 nm, and \(\sim\)5.8 nm ) of silk proteins on graphene nanoribbons (see details in Figure S3). As the thickness of silk protein increases, there is a corresponding increase in the intensity of the molecular characteristic vibration signal, and the dip of GP-enhanced molecular coupling gradually deepened, as shown in Fig. 3c. However, for silk protein thicknesses of less than 2 nm, the ultra-low coupled signal between graphene and silk protein makes it difficult to identify the vibration signals of the silk protein, which has been a common challenge encountered in the detection of trace proteins. Here, synthesized CFW method is utilized to greatly enhance the signal in the molecular protein growth process, allowing for clear identification of the dips generated at different thicknesses of silk protein, as shown in Fig. 3d. The dip depth \(\Delta h\) is used as the figure of merit to quantitatively evaluate the sensitivity of the graphene-based sensor[42, 43]. \(\Delta h\) is extracted for both real frequency spectra (Fig. 3c) and CFW spectra (Fig. 3d) and plotted in Fig. 3e, demonstrating that the sensitivity has increased by almost one order of magnitude using the CFW method. For the thinnest molecular layer (\(\sim\)1.2 nm), the enhancement factor reaches 15. These results highlight the potential of graphene-based sensors for providing highly sensitive and accurate detection of molecular fingerprints. **Enhancing the sensitivity of tunable graphene-based liquid phase sensors** We further apply our method to sensing molecules in aqueous solution, employing a liquid-phase GP-enhanced FTIR experimental setup as depicted in Fig. 4a, which can eliminate the water background outside the GP hotspot. This setup involves a GP-enhanced infrared sensor encapsulated in an infrared-transparent microfluidic system, allowing a transmittance measurement and a steady solution flowing path. In an aqueous environment, the abundant ions form an electric double layer (EDL) on charged surfaces-graphene. Thus, a liquid gate is applied, which enables a stable and tunable plasmon response of the graphene nanoribbons in an aqueous environment. Then, by injecting a bovine serum albumin (BSA) protein solution (1 mg/mL) into the microfluidic system for two hours, protein molecules become saturated and adsorbed onto the graphene nanoribbons. This leads to the appearance of two dips in the extinction spectrum corresponding to the amide I band (1655 cm\({}^{-1}\)) and amide II band (1545 cm\({}^{-1}\)) of the BSA protein, as shown in Fig. 4b. The resonant frequency of GP \(\omega_{\text{GP}}\) in the infrared fingerprint region can be dynamically adjusted by modulating the doped graphene Fermi energy \(E_{\text{f}}\) using the liquid gate \(\Delta V_{\text{g}}\). Increasing \(\Delta V_{\text{g}}\) from 1.1 V to 2.1 V leads to a blue shift in \(\omega_{\text{GP}}\). 
Due to significant damping and the presence of noise in an aqueous solution, the two characteristic resonances of the molecule (the dashed curves in Fig. 4b) appear to be indistinct, particularly at larger detuning between GP and the molecular vibrational modes. Here, by applying the CFW method, we observe significantly enhanced signals in the spectrum (the solid curves in Fig. 4b), clearly showing that with an increase of \(\Delta V_{\text{g}}\), the detuning first decreases and then increases, causing the line-shapes of the two dips to gradually transform from Fano to PIT, and then back to Fano resonances. At Fano resonances, the dips slightly deviate from the molecular vibrational modes, which is consistent with the theory (Fig. 1c). Moreover, we simulated the extinction in aqueous solution (the map in Fig. 4c). We show that the position of dips in the experimental spectra at CFW (the hollow points in Fig. 4c) conforms to the evolution trend of the simulation, further proving the rationality of CFW method. Therefore, synthesized CFW method is also suitable for enhancing the sensitivity of liquid-phase infrared sensors even under very challenging conditions. ## Conclusions In conclusion, we have applied a novel synthesized CFW method to compensate for the intrinsic damping of the detected molecules and sensors, resulting in a large enhancement in the signals of the molecular vibrational fingerprints. We demonstrate that for different experimental scenarios, including DON molecules without plasmonic enhancement and silk protein molecules and BSA protein solutions measured by graphene-based plasmonic sensors, synthesized CFW method can effectively enhance the characteristic signals, exhibiting its wide applications. Importantly, under the condition of low concentrations or thin thicknesses, the CFW method can dramatically improve the signals, which is beneficial to increase the upper limit of sensitivity for various sensors. The CFW technique presents a new platform for the sensing field, enabling the enhancement of sensing sensitivity in complex environments and laying the foundation for environmental monitoring, healthcare diagnosis, and new material development. ## Methods ### Simulation of transmission spectrum The thin layer system consists of the molecular layer and graphene nanoribbon and is simulated by finite-element method (FEM) using COMSOL Multiphysics software. In the simulation, a transverse magnetic wave is normally incident onto the thin layer with periodic boundary conditions, and then the transmission coefficient is obtained. In addition, the surface conductivity of graphene \(\sigma\) used in the simulation is calculated by the Kubo formula, \[\sigma=\frac{ie^{2}E_{\mathrm{f}}}{\pi\hbar^{2}(\omega+i\Gamma)}+\frac{ie^{2}} {4\pi\hbar}\ln\left[\frac{2|E_{\mathrm{f}}|-(\omega+i\Gamma)}{2|E_{\mathrm{f} }|+(\omega+i\Gamma)}\right]\] The temperature \(T\!=\!300\) K satisfies the approximate requirement of \(K_{\mathrm{B}}T\!\ll\!E_{\mathrm{f}}\), \(e\) is the electron charge, \(\omega\) is the angular frequency, \(\hbar\) is the reduced Planck constant, and \(E_{\mathrm{f}}\) is the doped graphene Fermi energy. The relaxation time \(\Gamma\!=\!ev_{\mathrm{f}}^{2}/\mu E_{\mathrm{f}}\), where \(v_{\mathrm{f}}\!=\!c/300\) is the Fermi velocity and \(\mu\!=\!700\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) is the carrier mobility of graphene. 
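For reference, the Kubo conductivity used in the simulations can be evaluated as in the sketch below; the unit bookkeeping (Fermi energy and photon energy both expressed in eV inside the logarithm, mobility converted from cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\)) is our own assumption for illustration and not necessarily the exact convention used in the COMSOL implementation.

```python
import numpy as np

E = 1.602e-19        # electron charge [C]
HBAR = 1.055e-34     # reduced Planck constant [J s]
HBAR_EV = 6.582e-16  # reduced Planck constant [eV s]

def graphene_sigma(omega, e_f=0.5, mu_cm2=700.0, v_f=3e8 / 300.0):
    """Sheet conductivity of graphene [S] from the Kubo formula quoted above.

    omega : angular frequency [rad/s]; e_f : Fermi energy [eV]; v_f = c/300 [m/s].
    """
    # Gamma = e v_f^2 / (mu E_f): the charge cancels with E_f in eV; 1e4 converts cm^2 -> m^2.
    gamma = v_f ** 2 * 1e4 / (mu_cm2 * e_f)
    x = HBAR_EV * (omega + 1j * gamma)                # hbar*(omega + i*Gamma) in eV
    intra = 1j * E ** 2 * e_f / (np.pi * HBAR * x)    # intraband (Drude-like) term
    inter = 1j * E ** 2 / (4.0 * np.pi * HBAR) * np.log((2.0 * e_f - x) / (2.0 * e_f + x))
    return intra + inter

omega_gp = 2.0 * np.pi * 3.0e10 * 1553.0              # GP resonance, 1553 cm^-1 -> rad/s
print(graphene_sigma(omega_gp))
```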
### Graphene Plasmonic IR Biosensing For silk protein detection, the proposed graphene plasmon infrared sensor was composed of connected graphene nanoribbon arrays patterned on a 285 nm SiO\({}_{2}\)/500 \(\upmu\)m substrate (from Silicon Valley Microelectronics, Inc.) using electron beam lithography (Vistec 5000\(+\)ES, Germany) and oxygen plasma etching (SENTECH, Germany). The monolayer graphene film was grown on copper foil by chemical vapor deposition and transferred to a SiO\({}_{2}\)/Si substrate using the wet transfer method. The graphene nanoribbon arrays were designed to have widths ranging from 50 nm to 100 nm, with a gap of 50 nm to 100 nm. A pair of electrodes (5 nm Ti and 50 nm Au) were patterned and evaporated onto the graphene using electron-beam lithography combined with electron beam evaporation (OHMIKER-50B, Taiwan). The back-gate was applied by connecting the electrode to the backside of the SiO\({}_{2}\)/Si substrate using an external circuit with the help of silver thread. The SourceMeter (Keithley 2636B) was utilized to supply varied gate voltages. For BSA protein solution detection, based on the previous setup of the graphene plasmon infrared sensor, a top-gate electrode was evaporated onto the substrate, which was connected to the solution but not to the graphene nanoribbons. Additionally, the source and drain electrodes were further passivated with a 50 nm PMMA layer to prevent direct interaction between the electrodes and protein molecules, as well as to minimize electrolyte leakage between the source and gate. The graphene plasmon infrared sensor was then encapsulated with a microfluidic system. Finally, the electrodes were led outside the microfluidic system and connected to the external circuit using silver thread. **Characterization of the Graphene Plasmon Infrared Sensor** The morphologies and thicknesses of the fabricated graphene nanoribbons were characterized by employing scanning electron microscopy (NOVA Nano SEM 430) and AFM (Bruker Multimode8) measurements. The transfer characteristic curve was determined by using a source meter (Keithley 2636B). The FTIR transmission measurements were performed with Thermo Fisher Nicolet iN10 with an IR microscope (15\(\times\) objective). The aperture was set as 100 \(\upmu\)m \(\times\) 200 \(\upmu\)m for each measurement, while the resolution was 4 cm\({}^{-1}\) and scans were 128. **The chemicals sampling** The DON solution was prepared by dissolving DON powder in alcohol at a concentration of 0.5 \(\upmu\)g/mL. Subsequently, it was dropped onto a Si substrate. Once the alcohol evaporates, the DON molecules remain deposited on the substrate. The 10 \(\upmu\)g/mL silk protein solution was prepared by diluting 50 mg/mL silk fibroin solution (from Sigma) 5000 times with deionized water. The BSA protein solution was prepared by dissolving Bovine albumin Fraction V (from Sigma) in deionized water. The different thicknesses of silk protein on graphene nanoribbons were prepared by soaking the graphene plasmon infrared sensor in silk protein solution for varying durations.
2308.07967
Boosting Cross-Quality Face Verification using Blind Face Restoration
In recent years, various Blind Face Restoration (BFR) techniques were developed. These techniques transform low quality faces suffering from multiple degradations to more realistic and natural face images with high perceptual quality. However, it is crucial for the task of face verification to not only enhance the perceptual quality of the low quality images but also to improve the biometric-utility face quality metrics. Furthermore, preserving the valuable identity information is of great importance. In this paper, we investigate the impact of applying three state-of-the-art blind face restoration techniques namely, GFP-GAN, GPEN and SGPN on the performance of face verification system under very challenging environment characterized by very low quality images. Extensive experimental results on the recently proposed cross-quality LFW database using three state-of-the-art deep face recognition models demonstrate the effectiveness of GFP-GAN in boosting significantly the face verification accuracy.
Messaoud Bengherabi, Douaa Laib, Fella Souhila Lasnami, Ryma Boussaha
2023-08-15T18:05:19Z
http://arxiv.org/abs/2308.07967v1
# Boosting Cross-Quality Face Verification using Blind Face Restoration ###### Abstract In recent years, various Blind Face Restoration (BFR) techniques were developed. These techniques transform low quality faces suffering from multiple degradations to more realistic and natural face images with high perceptual quality. However, it is crucial for the task of face verification to not only enhance the perceptual quality of the low quality images but also to improve the biometric- utility face quality metrics. Furthermore, preserving the valuable identity information is of great importance. In this paper, we investigate the impact of applying three state-of-the-art blind face restoration techniques namely, GFP-GAN, GPEN and SGPN on the performance of face verification system under very challenging environment characterized by very low quality images. Extensive experimental results on the recently proposed cross-quality LFW database using three state-of- the -art deep face recognition models demonstrate the effectiveness of GFP-GAN in boosting significantly the face verification accuracy. Face Verification, Blind Face Restoration, GFP-GAN, GPEN, SGPN. ## I Introduction Recent advances in deep learning techniques and the availability of very large-scale datasets have resulted in drastic performance improvement in facial recognition systems [1, 2, 3]. This progress makes the Face Recognition Technology (FRT) a prominent tool for identity verification and identification in various applications ranging from simple access control to intelligent video surveillance and advanced smart safe city applications [4]. However, the near-perfect accuracy surpassing 99.8% obtained on the Labeled Faces in the Wild (LFW) database [5] is not generalizable to more challenging realistic conditions, especially outdoor distant face recognition which still presents one of the main challenges in video surveillance and face-related forensic systems. Real-world scenarios pose unavoidable challenges. In these scenarios, images frequently suffer from various distortions such as noise [6], blur [7], and low resolution [8]. These degradations significantly hinder the ability of face recognition systems to accurately identify and distinguish facial features, resulting in a notable decline in their overall performance [9]. To address these aforementioned challenges, a range of solutions has been proposed where face enhancement techniques play a pivotal role. Face restoration aims to restore a high-quality facial image from its low-quality counterpart by eliminating both known and unknown degradations present in such images [10]. Traditionally, known degradations can be addressed through non-blind restoration techniques, which have been effective in targeting specific types of image distortions [11]. These approaches include deblurring techniques to remove blur [12, 13], denoising techniques to reduce noise [14], super-resolution techniques to enhance resolution [15, 16], and compression artifact removal techniques [17]. While these classical techniques yield reasonably reliable results, real-world captured images often exhibit complex and heterogeneous degradations, making the accurate estimation of the type of degradation a very difficult task. 
With the emergence of deep learning and the rapid advancements in Convolutional Neural Networks CNNs [18, 19] and deep Generative Adversarial Networks GANs [20][21], blind face restoration techniques [10] have emerged to address cases where the degradation type is unknown or multiple types of degradations coexist within the same image. These techniques can be classified into two main categories: non-priors restoration techniques and priors restoration techniques [11]. Non-priors restoration techniques focus on restoring degraded facial images without relying on any prior information about the degradation process [22, 23]. In contrast, priors' restoration techniques leverage prior knowledge to guide the restoration process. These priors can be in the form of reference information [18, 19], geometric constraints [24, 25], or generative priors [26, 20]. Previous studies primarily focused on evaluating the effectiveness of these methods in improving the perceptual quality of enhanced images using image quality assessment IQA [27] and face quality assessment FQA [28] metrics. However, an essential aspect that has often been overlooked is the preservation of identity information within the enhanced images and its impact on face recognition performance. In this study, we aim to bridge this gap by evaluating the influence of face restoration techniques in terms of their practical utility on the accuracy and reliability of face recognition systems. Our objective is to assess the extent to which these techniques not only enhance the visual quality of degraded facial images but also preserve the crucial identity information necessary for accurate face recognition. To achieve these objectives, we conducted a rigorous evaluation process that involved comparing the performance of state-of-the-art face recognition models, including AdaFace [29], MagFace [30], and ArcFace [31]. Our evaluation encompassed both the original low-quality images from the XQLFW [32] dataset and the enhanced images generated using blind face restoration techniques, including the Generative Facial Prior GAN (GFP-GAN) [20], the GAN Prior Embedded Network (GPEN) [21], and the Shape and Generative Prior integrated Network (SGPN) [26]. By evaluating the performance of face recognition models on restored images, we not only explore the potential of face restoration techniques in improving face recognition accuracy but also shed light on the extent to which these techniques preserve the identity information crucial for accurate recognition. The main contributions of this work are summarized as follows: First, we investigate the impact of applying Blind Face Restoration techniques on the face utility biometric quality metric [33] represented in this study by the amplitude of MagFace embedding vector [30]. Second, we quantify the identity preserving capability of each restoration technique by computing the similarity index obtained from the statistics of the cosine similarity between the original LFW images and the restored XQLFW images. Third, we investigate the impact of the three Blind Face Restoration techniques on the performance of three state-of-the-art face recognition models. 
By comparing their performance on the restored images, we gained valuable insights and shed light that BFR preprocessing techniques can significantly boost the performance of face verification systems under multiple image degradations if and only if the resulting restored faces possess both higher biometric quality and higher identity preserving similarity index. The rest of this paper is organized into three sections. The second section provides an overview of the three blind face restoration techniques used in our study. We discuss their principles, advantages, and limitations. The third section presents our proposal for a Facial Verification System Architecture specifically designed to operate in low-quality environments. In the fourth section, we present the experimental results obtained from our investigations. We analyze and discuss the findings, shedding light on the influence of these techniques on face verification accuracy and the overall biometric utility of face recognition. Finally, we conclude the paper with a comprehensive discussion and perspectives on the implications of our research. ## II Blind Face Restoration via GAN Priors Blind face restoration techniques based on generative priors, including GFPGAN, GPEN, and SGPN, employ the strength of Generative Adversarial Networks (GANs) [34] to generate realistic representations of the original image. By filling in the missing or altered information in damaged images, these GAN-generated representations capture intricate details of facial geometry, regional textures, and accurate color information. These representations then serve as valuable references for restoring degraded images, resulting in improved visual quality and enhanced facial appearance [20, 26, 21]. GFP-GAN [20] is an architecture capable of restoring facial details and enhancing image colors in a single pass. It combines a U-Net [35] degradation elimination module and a pre-trained StyleGAN2 [36]. The restoration process is guided by four cost functions: adversarial loss for realistic textures, reconstruction loss for preserving fine details and overall quality, Facial Component Loss for enhancing specific facial regions, and identity preserving loss for maintaining accurate identities. This guidance ensures that the restored images accurately represent the original identities. However, GFP-GAN may face challenges when dealing with images exhibiting extreme poses. GPEN [21] unlike conventional methods that aim to learn a direct mapping from low-quality (LQ) input images to high-quality (HQ) images,adopts a two-step approach: training a StyleGAN [37] on high-quality images to capture desired visual characteristics and generate realistic images, and using a decoder-encoder architecture to reconstruct global face structure and local facial details. It employs three cost functions: content loss to preserve fine features and color information, feature matching loss to enhance realism, and adversarial loss for more vivid details. This approach enables GPEN to generate high-quality images with rich details and reduced smoothness. SGPN [26] aims to restore faithfully both the shape and detail of the face. It consists of two main modules: the shape restoration module, which reconstructs facial geometry using 3DMM [38], and the Shape and Generative prior Integration module, which generates a high-quality image by combining reconstructed shape and texture information. SGPN shows promising performance, especially in extreme exposure images. 
However, it has been observed that SGPN may not fully preserve the identity in the restored images. ## III Methodology It is a common practice in modern face verification systems based on deep learning to start with face detection and alignment. These necessary steps are immediately followed by the embedding operation using one of the state-of-the-art pretrained models. The objective is to extract low-dimensional and highly discriminative feature vectors. Generally, a simple cosine similarity scoring is used for matching. However, low-quality images suffering from distortions present less distinguishable features. As a result, face recognition models encounter difficulties when extracting feature vectors from these images, leading to a decrease in face verification accuracy [39]. To overcome this challenge, we propose the inclusion of a face restoration module into the conventional verification pipeline. The new architecture that incorporates blind face restoration is depicted in Fig. 1. By incorporating this face restoration module in the early stage of the face verification process, we ensure that the images, possessing improved quality, exhibit more distinguishable features. ## IV Experiments ### **Datasets** In our evaluation process, we employed two different datasets. For the initial phase of investigating the impact of the studied blind face restoration techniques on the verification performance of state-of-the-art FR systems, we utilized the widely recognized LFW (Labelled Faces in the Wild) dataset [5] under the restricted protocol. In the subsequent phases, specifically for investigating the effectiveness of face restoration techniques, we employed the XQLFW (Cross-Quality Faces in the Wild) dataset, which is a variant of LFW with synthetic degradations [32]. This dataset was designed to address the challenges of facial recognition in degraded image conditions. It includes a range of images with varying levels of degradation, such as low quality, low resolution, and low illumination. Fig. 2 shows a sample of positive and negative pairs from the two datasets. ### **Face Recognition Techniques** In this evaluation process, we utilized three state-of-the-art face recognition systems: ArcFace, MagFace, and AdaFace1. ArcFace [31] is a face recognition technique that improves upon the traditional Softmax loss by introducing an additive angular margin loss. By considering the angular margin between classes, ArcFace enhances the model's ability to discriminate and classify different identities, resulting in high accuracy in face recognition tasks. This technique has laid a solid foundation for advancements in the field. MagFace [30] builds upon ArcFace by refining the margin concept and incorporating additional considerations. It integrates magnitude considerations into the learning process, enabling the model to capture subtle variations in facial features caused by lighting conditions, facial expressions, and other factors. This enhancement allows MagFace to outperform previous approaches and achieve superior performance in face recognition tasks. AdaFace [29], also inspired by ArcFace, introduces an adaptive margin mechanism that adjusts the margin dynamically during training based on the quality of the input image. By considering image quality, AdaFace can handle variations in image clarity and other factors that affect the quality of facial features, such as occlusion [40]. 
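To make the matching step of the pipeline in Fig. 1 concrete, the following minimal sketch puts together the stages described above: detection and alignment, an optional blind face restoration step, embedding extraction, and cosine-similarity scoring against a decision threshold. It is illustrative only; `detect_and_align`, `restore`, and `embed` are hypothetical placeholders standing in for a detector/aligner, a restoration model such as GFP-GAN, and a pretrained recognition backbone (ArcFace/MagFace/AdaFace), and the threshold value is arbitrary.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_pair(img1, img2, detect_and_align, restore, embed, threshold=0.3):
    """Return (is_same_identity, score) for a pair of possibly degraded face images.

    detect_and_align, restore, embed: placeholder callables supplied by the caller;
    threshold: arbitrary decision threshold used only for this illustration.
    """
    faces = [detect_and_align(img) for img in (img1, img2)]   # detection + alignment
    faces = [restore(face) for face in faces]                 # blind face restoration step
    emb1, emb2 = (embed(face) for face in faces)              # discriminative feature vectors
    score = cosine_similarity(emb1, emb2)
    return score >= threshold, score                          # simple thresholded matching
```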
Footnote 1: [https://github.com/leondgarse/Keras_insightface](https://github.com/leondgarse/Keras_insightface) ### **Implementation details** In our comprehensive evaluation of face restoration techniques, we conducted experiments in different image quality scenarios: on high-quality images, low-quality images, and restored images. For the first experiment on high-quality images using the LFW dataset, face detection and alignment are accomplished using MTCNN [41]. This experiment served as a baseline for the rest of the evaluation process. A similar procedure is executed when evaluating the verification performance on XQLFW and its restored versions. For the blind face restoration implementation, the **GFP-GAN v1.3.02**, **GPEN-BFR-5123**, and **SGPN4** pretrained models are employed in this study. Footnote 2: [https://github.com/TencentARC/GFPGAN](https://github.com/TencentARC/GFPGAN) Footnote 3: [https://github.com/yangxy/GPEN](https://github.com/yangxy/GPEN) Footnote 4: [https://github.com/TencentYoutResearch/FaceRestoration-sgpn](https://github.com/TencentYoutResearch/FaceRestoration-sgpn) ### **Experimental Results** #### **Preliminary Visual Inspection** As depicted in Fig. 3, the effectiveness of the three restoration techniques (GFP-GAN, GPEN, and SGPN) varies with the degree of image degradation. Mildly degraded images are successfully restored by all three techniques with a high perceptual image quality. However, as distortion increases, SGPN encounters difficulties in restoring facial components, particularly the eyes, while GPEN and GFP-GAN perform relatively better but with reduced naturalism and fidelity compared to the original images. This is evident in cases where non-existent glasses or changes in eye color are introduced. These artifacts lead to a potential loss of identity information. Fig. 1: Architecture of the face verification pipeline incorporating blind face restoration. _Magn_ indicates the corresponding MagFace quality metric of the image. Fig. 2: Sample of positive and negative pairs from the LFW and the XQLFW datasets. #### Iii-B2 **Identity Preserving Quantification** To quantify the potential of face restoration techniques in preserving the identity information within the enhanced images, the statistics of the cosine similarity between the restored XQLFW and the original LFW images are analyzed. We calculate the cosine similarity between the embeddings of the two images extracted using AdaFace based on the ResNet100 [2] backbone and trained on the WebFace4m [3] database, MagFace based on the ResNet50 backbone and trained on MS1MV2 [31], and ArcFace with the ResNet50 backbone trained on the CASIA-WebFace [7] database. These statistics are represented via boxplots in Fig. 4. It is easy to notice that, for the three face embeddings, SGPN shows a high interquartile range, or variability. The other BFR techniques exhibit lower interquartile ranges with higher medians. The **Cosine Similarity Median (CSM)** can be considered a good metric for identity preservation. Table I summarizes the obtained CSM values, and we can advocate that the GFP-GAN technique possesses the highest identity preserving capability when used in conjunction with AdaFace and ArcFace. Meanwhile, for the MagFace model, images restored using GPEN demonstrate the highest similarity to the ground truth images. The lower CSM values obtained using SGPN confirm its key pitfall in preserving the valuable identity information, especially for severely degraded face images. 
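As a minimal illustration of how such statistics can be obtained, the sketch below computes the per-image cosine similarities between paired embeddings and reports their median (the CSM) and interquartile range. The array names and shapes are assumptions made only for this example and do not correspond to the evaluation code used in this work.

```python
import numpy as np

def identity_preservation_stats(emb_lfw: np.ndarray, emb_restored: np.ndarray) -> dict:
    """CSM and IQR of cosine similarities between paired embeddings.

    emb_lfw, emb_restored: assumed (N, d) arrays of embeddings extracted with the same
    recognition model from the original LFW images and the corresponding restored
    XQLFW images, in matching order.
    """
    a = emb_lfw / np.linalg.norm(emb_lfw, axis=1, keepdims=True)
    b = emb_restored / np.linalg.norm(emb_restored, axis=1, keepdims=True)
    cos_sim = np.sum(a * b, axis=1)                 # per-image cosine similarity
    q1, median, q3 = np.percentile(cos_sim, [25, 50, 75])
    return {
        "CSM": float(median),                       # Cosine Similarity Median
        "IQR": float(q3 - q1),                      # spread, as visualized by the boxplots
    }
```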
#### Iii-B3 **Biometric Utility Quality via MagFace** In order to shed light on the impact of the studied BFR techniques on biometric utility quality metrics, which are highly correlated with recognition accuracy, the MagFace metric, which simply measures the magnitude of the MagFace embedding, is employed. The magnitudes of the face feature vectors are extracted using MagFace from the original XQLFW images and their restored versions obtained with GFP-GAN, GPEN, and SGPN. The distribution plots depicting these magnitudes are presented in Fig. 5. For the XQLFW dataset, we observe a right-skewed distribution, indicating that the majority of images are located on the left side of the graph and have low magnitudes, thus indicating low quality. Regarding the GPEN and GFP-GAN graphs, we can observe a near-normal distribution. The peak of the distribution for GPEN is centered on 19, while for GFP-GAN, it is centered on 18. These distributions suggest that there is a predominance of high-quality images. On the other hand, for SGPN, although there is a decrease in the number of low-quality images, the frequency of the high-quality images does not reach the same level as that of GPEN and GFP-GAN. This suggests that SGPN may have a comparatively lower performance than the other two BFR techniques in terms of improving the biometric quality of the sample. #### Iii-B4 **Effect on face verification performance** The previous findings support the hypothesis that GFP-GAN and, to a lesser extent, GPEN could be good candidates for boosting the face verification performance on low-quality images. Before investigating the verification accuracy after applying face restoration, we ran our first experiments to see the performance drop when passing from LFW to XQLFW. The results depicted in Table II show a significant drop in performance by around 12% for AdaFace, 16% and 20% for MagFace trained on ResNet100 and ResNet50, respectively, and even reaching 26% for ArcFace. We can observe that AdaFace is the most robust to degradation and the most suited to handle low-quality face verification. The drastic drop in performance across different face recognition models serves as compelling evidence for the importance of addressing image degradations. To this end, the best performing configuration for each loss function, namely AdaFace trained on WebFace4m, MagFace with the ResNet50 backbone, and ArcFace with the ResNet50 backbone, are selected for our last experiments to evaluate the impact of blind face restoration on verification accuracy. The results are presented in Table III. The obtained results highlight the outperformance of GFP-GAN compared to the other BFR techniques. Fig. 4: Boxplots of the similarity index between embeddings extracted using (a) AdaFace, (b) MagFace and (c) ArcFace from LFW images and restored images using SGPN, GFP-GAN and GPEN face restoration techniques. Fig. 3: Perceptual comparison between restored images using SGPN, GFP-GAN and GPEN face restoration techniques. A significant gain in performance is achieved across all three models, with an absolute increase in accuracy ranging from 1.7% to 12.8%. However, it is important to note that the performance of the other models can exhibit both improvements and declines. Specifically, when evaluating GPEN, the performance on ArcFace demonstrates a significant 9% increase. There is a marginal improvement of 0.1% for MagFace, while there is a decrease of 2% for AdaFace. On the other hand, when evaluating SGPN, there is a decline in performance ranging from 4% to 6%. 
These unexpected findings suggest that the restoration models, while enhancing perceptual quality, may inadvertently cause a loss of identity-related information, resulting in a decrease in performance. It is important to mention that the obtained recognition results are perfectly in line with our analysis concerning the similarity index and biometric quality. ## V Conclusion In this study, we examined the effectiveness of face restoration techniques based on generative priors in preserving face identity information. Our focus was on their impact on improving the performance of face recognition models, specifically in the verification task. Our findings revealed that the AdaFace technique outperformed other methods in this task across images of varying qualities. Moreover, the GFP-GAN restoration technique excelled in enhancing visual quality and preserving identity information, enabling accurate verification of facial recognition patterns. To leverage these findings, we proposed a novel face verification system that integrated a face restoration module utilizing the GFP-GAN technique at the early stage of the verification process. This system aimed to enhance the face verification performance of the AdaFace, MagFace, and ArcFace face recognition techniques in challenging environments with complex image distortions. While significant performance improvements were achieved in the face verification task using the XQLFW dataset with synthetic degradations [32], it is crucial to conduct additional testing on real-world image databases to validate the effectiveness of the proposed solution in practical scenarios. Furthermore, it is imperative to carry out further research to gain valuable insights into the practical implications of employing blind face restoration methods in the field of face recognition.
2305.07025
Structural Anisotropy in Sb Thin Films
Sb thin films have attracted wide interest due to their tunable band structure, topological phases, and remarkable electronic properties. We successfully grow epitaxial Sb thin films on a closely lattice-matched GaSb(001) surface by molecular beam epitaxy. We find a novel anisotropic directional dependence of their structural, morphological, and electronic properties. The origin of the anisotropic features is elucidated using first-principles density functional theory (DFT) calculations. The growth regime of crystalline and amorphous Sb thin films was determined by mapping the surface reconstruction phase diagram of the GaSb(001) surface under Sb$_2$ flux, confirmed by structural characterizations. Crystalline Sb thin films show a rhombohedral crystal structure along the rhombohedral (104) surface orientation parallel to the cubic (001) surface orientation of the GaSb substrate. At this coherent interface, Sb atoms are aligned with the GaSb lattice along the [1-10] crystallographic direction but are not aligned well along the [110] crystallographic direction, which results in anisotropic features in reflection high-energy electron diffraction patterns, surface morphology, and transport properties. Our DFT calculations show that the anisotropic features originate from the GaSb surface, where Sb atoms align with the Ga and Sb atoms on the reconstructed surface. The formation energy calculations confirm the stability of the experimentally observed structures. Our results provide optimal film growth conditions for further studies of novel properties of Bi$_{1-x}$Sb$_x$ thin films with similar lattice parameters and an identical crystal structure as well as functional heterostructures of them with III-V semiconductor layers along the (001) surface orientation, supported by a theoretical understanding of the anisotropic film orientation.
Pradip Adhikari, Anuradha Wijesinghe, Anjali Rathore, Timothy Jinsoo Yoo, Gyehyeon Kim, Hyoungtaek Lee, Sinchul Yeom, Alessandro R. Mazza, Changhee Sohn, Hyeong-Ryeol Park, Mina Yoon, Matthew Brahlek, Honggyu Kim, Joon Sue Lee
2023-05-11T17:58:41Z
http://arxiv.org/abs/2305.07025v1
# Structural Anisotropy in Sb Thin Films ###### Abstract Sb thin films have attracted wide interests due to their tunable band structure, topological phases, and remarkable electronic properties. We successfully grow epitaxial Sb thin films on a closely lattice-matched GaSb(001) surface by molecular beam epitaxy. We find a novel anisotropic directional dependence of their structural, morphological, and electronic properties. The origin of the anisotropic features is elucidated using first-principles density functional theory (DFT) calculations. The growth regime of crystalline and amorphous Sb thin films was determined by mapping the surface reconstruction phase diagram of the GaSb(001) surface under Sb\({}_{2}\) flux, with confirmation of structural characterizations. Crystalline Sb thin films show a rhombohedral crystal structure along the rhombohedral (104) surface orientation parallel to the cubic (001) surface orientation of the GaSb substrate. At this coherent interface, Sb atoms are aligned with the GaSb lattice along the [\(\bar{1}\)10] crystallographic direction but are not aligned well along the [110] crystallographic direction, which results in anisotropic features in reflection high-energy electron diffraction patterns, surface morphology, and transport properties. Our DFT calculations show that the anisotropic features originate from the GaSb surface, where Sb atoms align with the Ga and Sb atoms on the reconstructed surface. The formation energy calculations confirm that the stability of the experimentally observed structures. Our results provide optimal film growth conditions for further studies of novel properties of Bi\({}_{1-x}\)Sb\({}_{x}\) thin films with similar lattice parameters and an identical crystal structure as well as functional heterostructures of them with III-V semiconductor layers along the (001) surface orientation, supported by a theoretical understanding of the anisotropic film orientation. ## 1 Introduction Group-VA elemental thin films (phosphorus, arsenic, antimony, and bismuth) have gained significant attention in recent years due to rich and promising properties such as high carrier mobilities, outstanding optical and thermodynamic responses, tunable band gap, and non-trivial topological phases [1, 2, 3, 4]. Among the group-VA elements, Sb and Bi are relatively heavy elements with a strong spin-orbit coupling. Multiple topological phases in Sb and Bi thin films, including quantum spin Hall insulator phase in the two-dimensional (2D) limit, three-dimensional (3D) topological insulator (TI) phase, and 3D higher-order TI phase, have been theoretically proposed, and some of the features have been experimentally demonstrated[5, 6, 7, 8]. In general, electronic band structures with non-trivial topology can be modified by strain, electric and magnetic fields, and thickness. In Sb thin films, it is theoretically predicted that the quantum confinement effect opens up a bulk band gap when film thickness is less than 7.8 nm where it enters into the 3D TI regime. Going even below a certain thickness transforms the topological phase into the quantum spin Hall state because of the surface coupling effect [5]. Antimonene, Sb analog of graphene, is a 2D hexagonal lattice of Sb atoms. In addition to the non-trivial topology of antimonene (quantum spin Hall state), remarkable properties including stability in air, high electron mobility, and thermoelectric and ferroelectric properties have attracted wide interests [9, 10, 11, 12]. 
Precise control over the film thickness is critical to investigate the quantum confinement effect in Sb thin films, and molecular beam epitaxy (MBE) is advantageous for layer-by-layer construction of topological quantum materials [13]. By using MBE, Sb thin films have been synthesized on various substrates since the 1980s. Early studies of Sb films grown on GaAs(110), InP(110), and InP(001) focused on the use of Sb as a capping layer or a Schottky barrier [14, 15, 16, 17]. Moreover, deposition of Sb on the direct band-gap semiconductors InSb(111) and GaSb(111) has been investigated for the purpose of developing superlattices with indirect narrow-gap/direct-gap heterostructures [18]. Recent reports on epitaxial Sb mostly focus on demonstration of ultrathin Sb films or antimonene layers for their novel 2D nature and topologically non-trivial properties. Due to the hexagonal lattice structure of antimonene, van der Waals 2D substrates such as graphene [19], as well as the (111) surface orientation of copper [20], have been used for MBE growth of Sb layers. The most common crystalline structure for group-V elemental solids is the rhombohedral structure, thus also for Sb. Under ultrahigh vacuum (UHV) environment, rhombohedral Sb(111) layers can be epitaxially grown on closely lattice-matched GaSb(111) with hexagonal lattice arrangement [18, 21]. However, on the cubic (001) surface orientation, polycrystalline nucleation of Sb with rough surfaces was reported with no success in epitaxial growth of Sb layers [21]. In this work, we report wide-area Sb thin films coherently grown on the cubic GaSb(001) surface by MBE. We carefully study the surface kinetics and crystalline phase of Sb on the closely lattice-matched cubic GaSb(001) surface under UHV environment. We first delve into the surface kinetics of the GaSb(001) surface in the presence of Sb flux over a wide range of temperature from 450\({}^{\circ}\)C down to room temperature and find nucleation conditions of Sb films. We employ _in-situ_ reflection high-energy electron diffraction (RHEED) patterns to observe surface reconstruction on the GaSb(001) surface, as well as abrupt changes occurring at the surface when Sb layers start to grow. We successfully grew crystalline Sb thin films coherent to the GaSb(001) atomic structures below 120\({}^{\circ}\)C. The Sb structure turned out to be rhombohedral along the (104) surface orientation parallel to the cubic (001) surface orientation of the GaSb substrate, confirmed by x-ray diffraction (XRD) and electron diffraction using transmission electron microscopy (TEM). Our DFT calculations show that the anisotropic features in Sb thin films, which refer to the directional dependence of their structural and morphological properties, originate from the reconstruction of the GaSb surface. No cubic phase of Sb was seen from any of the grown films, consistent with the unstable cubic phase of Sb at ambient conditions [22]. The observed Sb(104) planes are aligned to the GaSb(001) lattices along the [\(\bar{1}\)10] direction, whereas mismatched lattices are expected along the [110] direction. 
This anisotropic lattice matching of the rhombohedral Sb(104) and cubic GaSb(001) layers results in 1) spottiness of RHEED along the [110] crystalline direction, 2) elongated formation of Sb structures along the [\(\bar{1}\)10] crystalline direction, observed by scanning electron microscopy (SEM) and atomic force microscopy (AFM), and 3) anisotropic transport with relatively lower resistance along the [\(\bar{1}\)10] crystalline direction in comparison to the [110] direction. The observed anisotropic features can be significantly reduced by growing Sb films at lower temperatures. Our DFT calculations show that the Sb(104) layers with the observed anisotropy are stable due to the (1 \(\times\) 3) surface reconstruction of the GaSb(001) surface. The successful demonstration of coherent, rhombohedral Sb thin films grown on cubic GaSb(001) substrates paves the way to embed crystalline Sb layers into well-developed and widely-used cubic semiconductor substrates for fundamental studies of the topological nature of Sb thin films as well as for applications using their remarkable electronic, optical, and thermoelectric properties. This study can be further extended to studies of Bi\({}_{1-x}\)Sb\({}_{x}\) thin films on lattice-matched cubic substrates. Bi\({}_{1-x}\)Sb\({}_{x}\) has shown multiple topological phases, which have potential applications in spintronics and quantum computing [23, 24, 25]. ## 2 Results and Discussion ### Surface reconstruction and Sb film growth on GaSb(001) surface To achieve optimal growth conditions of Sb thin films, the surface reconstruction phase diagram of the GaSb(001) surface was investigated. In an ultrahigh vacuum chamber, the native oxide on the GaSb(001) surface was thermally desorbed, confirmed by the appearance of RHEED patterns, in the presence of Sb\({}_{2}\) flux. The GaSb desorption temperature of 540\({}^{\circ}\)C was used to calibrate the pyrometer. On the desorbed surface, a GaSb homoepitaxial buffer layer was grown at 450\({}^{\circ}\)C, and streaky (1 \(\times\) 3) RHEED patterns confirmed the smooth surface under Sb-rich condition. To obtain the surface reconstruction phase diagram of GaSb in the presence of Sb\({}_{2}\) flux, the Sb\({}_{2}\) flux was kept constant, and the change in the RHEED patterns was tracked as the substrate temperature was decreased. When the RHEED patterns significantly changed with deposition of an Sb layer at lower temperatures, the substrate temperature was raised above 400\({}^{\circ}\)C until the GaSb (1 \(\times\) 3) RHEED patterns reappeared, and a thin GaSb layer was grown to obtain a smooth surface. This process was repeated with a change in the Sb\({}_{2}\) flux. Figure 1(a) shows the phase diagram for the surface reconstruction of the GaSb(001) surface under Sb\({}_{2}\) flux in the substrate temperature range from 450\({}^{\circ}\)C down to room temperature. The flux values are expressed in units of beam equivalent pressure (mbar) as measured by the beam flux monitor of a Bayard-Alpert ionization gauge. The temperature values above 270\({}^{\circ}\)C were measured using a pyrometer focused on the sample, while lower temperatures were from a thermocouple attached to a manipulator holding the sample on a tungsten sample holder. The data points on the plots indicate transitions of the RHEED patterns. The [(1 \(\times\) 3) \(\rightarrow\) (2 \(\times\) 5)] transition is the GaSb(001) surface reconstruction, consistent with previous reports [26, 27], indicating there is no Sb film grown on the surface. 
The RHEED pattern became blurry and dimmer between 200 \({}^{\circ}\)C and 250 \({}^{\circ}\)C, depending on the Sb\({}_{2}\) flux, indicating that Sb atoms start to stick to the surface. Upon decreasing the substrate temperature, RHEED exhibited a sudden alteration, displaying distinctively spot-like patterns in the [110] direction and relatively indistinct but still streaky patterns in the [\(\bar{1}\)10] direction. This implies that in the [110] direction, the electron beam detected three-dimensional nanostructures, while in the [\(\bar{1}\)10] direction, it did not detect any significant three-dimensional features. This anisotropic spotty/streaky RHEED feature can be obtained with elongated 3D nanostructures on the surface, which is the case of the Sb thin films on the GaSb(001) surface as confirmed by surface morphology characterizations. Sb thin films grown in the transitional and the anisotropic RHEED regions were further characterized in the following sections. Figure 1: (a) Surface reconstruction phase diagram for GaSb(001) in the presence of Sb\({}_{2}\) flux and Sb film growth illustration showing no growth, amorphous Sb with rough interface, and anisotropic crystalline Sb film with clean interface on GaSb(001) as growth temperature decreases. Representative RHEED images observed in (b) (1 \(\times\) 3), (c) (2 \(\times\) 5), (d) transitional, and (e) anisotropic RHEED regions. The shaded part indicates the region of Sb film growth. Error bars are the standard error of the mean from flux measurements on the beam flux monitor. The upper and lower rows show the RHEED images in the [110] direction and the [\(\bar{1}\)10] direction, respectively. ### Crystal structure of Sb thin films To investigate the crystal structure of Sb thin films, two Sb films were prepared in the transitional and anisotropic RHEED regions based on the surface reconstruction phase diagram study. Two samples were grown at a manipulator temperature of 120\({}^{\circ}\)C in the anisotropic RHEED region and at 250\({}^{\circ}\)C in the transitional RHEED region, respectively, with identical Sb\({}_{2}\) flux of 7.38 \(\times 10^{-7}\) mbar and growth time. While cooling down the substrate after GaSb buffer layer growth at 500\({}^{\circ}\)C, the Sb\({}_{2}\) flux was closed below 400\({}^{\circ}\)C to prevent any unintended Sb growth and reopened after the substrate reached the desired temperatures. High-resolution TEM (HRTEM) on a cross-section of the sample grown at 120\({}^{\circ}\)C shows the GaSb buffer layer, Sb film, and the protective Pt layer deposited by focused ion beam {Fig. 2(a)}. The interface between Sb and GaSb is abrupt. Through selected area electron diffraction (SAED) analysis {Fig. 2(b)}, the Sb film is determined to adopt the rhombohedral phase with a growth plane of (104)\({}_{r}\), where the subscript indicates the rhombohedral structure. The Sb growth plane of (104)\({}_{r}\) is parallel to the GaSb growth plane of (001)\({}_{c}\) of the cubic structure, which is later confirmed with the XRD measurement as well. Figures 2(c) and (d) illustrate the proposed growth orientation of the rhombohedral Sb on cubic GaSb according to these findings. Along the [\(\bar{1}\)10] direction of GaSb, Sb atoms in the Sb film align well with Ga and Sb atoms in the GaSb layer, Figure 2: (a) HRTEM image of film stack for Sb film grown at 120\({}^{\circ}\)C. (b) SAED pattern acquired at the interface between GaSb buffer layer and Sb film. 
The diffraction spots for each structure are outlined in the red and green diamonds for Sb and GaSb, respectively. The (001)\({}_{c}\) growth plane of GaSb aligns with the (104)\({}_{r}\) growth plane of the Sb film. Illustrated crystal structure of rhombohedral Sb thin film on cubic GaSb layer, as seen in (c) along the [110]\({}_{c}\) zone axis and (d) along the [\(\bar{1}\)10]\({}_{c}\) zone axis. whereas the positions of the atoms in the two layers do not match along the [110] direction. In contrast to the rhombohedral Sb thin film in the anisotropic RHEED region, the sample grown at 250\({}^{\circ}\)C, in the transitional RHEED region, turns out to be amorphous, and the interface between Sb and GaSb is rougher than that of the sample grown at 120\({}^{\circ}\)C {see Fig. 3}. A high-magnification high-angle annular dark-field scanning TEM (HAADF-STEM) image shows some crystallinity in the Sb film up to about 5 nm above the interface {Fig. 3(b)}, but it is mainly amorphous beyond that region. It is likely that in the transitional RHEED region, Ga atoms from the GaSb layer diffuse into the Sb film, forming GaSb patches within the Sb film, as seen in Fig. 3(c). The orientations of the crystalline GaSb patches are different from each other, which is distinctive from the well-oriented single-crystalline GaSb layer below the interface. By using the same growth conditions (Sb flux and growth time), the thickness of the sample grown at 250\({}^{\circ}\)C is around 25 nm whereas the sample grown at 120\({}^{\circ}\)C is much thicker (100 nm). This indicates that in the transitional RHEED region, Sb atoms are partially desorbed and partially deposited on the GaSb surface. No Sb peak was observed in the XRD of the sample grown at 250\({}^{\circ}\)C, consistent with the amorphous nature observed by HAADF-STEM imaging. To further achieve the optimal quality of the rhombohedral Sb thin films, four different samples with an expected equal thickness of approximately 50 nm were grown with a constant Sb flux of 7.38 \(\times 10^{-7}\) mbar and manipulator temperatures (T\({}_{m}\)) of 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C and 120\({}^{\circ}\)C, respectively. Figure 3: (a) HAADF-STEM image of film stack for Sb film grown at 250\({}^{\circ}\)C. (b) High-magnification HAADF-STEM image near the interface between the Sb film and GaSb buffer layer, showing that the interface is rough. Top right inset is a fast Fourier transform (FFT) image of the Sb film away from the interface showing ring patterns, indicative of an amorphous structure. (c) Magnified image of one of several crystalline GaSb patches observed in the Sb film from the blue box in (b). (d) Magnified image of the GaSb buffer layer viewed along [110] from the green box in (b). The \((1\times 3)\) surface reconstruction of GaSb was seen before the growth of the Sb films. Figure 4(a) shows XRD peaks of the samples, and all the films showed the crystalline nature of Sb. The peaks found at two values around 41\({}^{\circ}\) and 82\({}^{\circ}\) correspond with the Sb(104)\({}_{r}\) and Sb(208)\({}_{r}\) planes, respectively. The three tall peaks present in all the plots at 2\(\theta\) angles of 29\({}^{\circ}\), 61\({}^{\circ}\) and 98\({}^{\circ}\) correspond with the peaks of GaSb(002)\({}_{c}\), (004)\({}_{c}\) and (006)\({}_{c}\). The XRD results confirm that Sb grows in the rhombohedral structure with the (104)\({}_{r}\) plane, which is parallel to the GaSb(001)\({}_{c}\) surface. The widths and heights of the Sb XRD peaks vary depending on the growth temperatures. 
Particularly, the height of the Sb(208)\({}_{r}\) peak decreases as the growth temperature increases. A rocking curve scan for each sample at 2\(\theta\) = 41\({}^{\circ}\) with identical measurement conditions revealed a Gaussian-shaped rocking curve for all the samples. Consistent with the Sb XRD peak height variation, out of the four rocking curves, peaks from samples grown at 25\({}^{\circ}\)C and 60\({}^{\circ}\)C are significantly taller than those of samples grown at 90\({}^{\circ}\)C and 120\({}^{\circ}\)C, and the Sb film grown at the highest temperature shows the shortest peak {Figs. 4(b) and (c)}. Full-width at half maximum (FWHM) values in Sb films grown at lower temperatures are narrower in comparison to those grown at higher temperatures, indicating higher film quality with fewer defects and curvature in the films, consistent with the electron microscopy and RHEED results. ### Anisotropic surface morphology Surfaces of the samples grown at four different temperatures were scanned using SEM and AFM. SEM images in Figs. 5(a) through (d) show line-like features along the [\(\bar{1}\)10] direction in the samples grown at 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C, while no prominent features were seen in the sample grown at 25\({}^{\circ}\)C. The contrast of these lines appears to get stronger with increasing growth temperature, which is consistent with the AFM results. The sample grown at 25\({}^{\circ}\)C shows the smoothest surface with Figure 4: (a) XRD for samples grown at 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C. The peaks appearing at 2\(\theta\) values of 40\({}^{\circ}\) and 82\({}^{\circ}\) correspond to the Sb (104)\({}_{r}\) and (208)\({}_{r}\) planes while the other peaks are emerging from the GaSb(001) substrate. (b) Rocking curves for the samples grown at 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C at 2\(\theta\) value of 41\({}^{\circ}\). The peaks for samples grown at lower temperatures are taller and sharper, while samples grown at higher temperatures give broader and shorter peaks, as plotted in (c) Sb peak intensity (black circles) and FWHM (blue squares) of the samples grown at four different temperatures. Dashed lines are guides to the eye. no clear tendency of directional structures whereas slight elongation of nanostructures on the top surface starts to be seen from the sample grown at 60\({}^{\circ}\)C {Figs. 5(e) and (f)}. Elongated Sb structures are more prominent in the samples with higher growth temperatures {Figs. 5(g) and (h)}. Surface roughness also increases with higher growth temperatures. The mean roughness values for the samples grown at 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C are 0.446 nm, 0.479 nm, 0.626 nm and 1.156 nm, respectively. The elongated Sb formation along the [\(\bar{1}10\)] crystalline direction can be attributed to the anisotropic lattice matching of the rhombohedral Sb(104)\({}_{r}\) layer on the cubic GaSb(001)\({}_{c}\) substrate. More of these structures tend to form as the surface energy increases with the substrate temperature. ### Anisotropic electrical transport An anisotropy similar to that seen in surface morphology was also observed in electrical transport. The electrical resistance of the above four samples was measured in the square van der Pauw geometry at temperatures as low as 4 K (Fig. 6). 
Longitudinal resistance \(R_{yy}\) along the direction of the elongated Sb structures, the [\(\bar{1}10\)] crystalline orientation, shows lower values compared to \(R_{xx}\), which is in the [110] crystalline orientation. Due to the nature of the van der Pauw geometry, both \(R_{xx}\) and \(R_{yy}\) have contributions of electrical currents flowing in both [110] and [\(\bar{1}10\)] directions. We assume \(R_{xx}\) has more contribution from the current along the [110] direction whereas \(R_{yy}\) has more contribution from the current along the [\(\bar{1}10\)] direction. The temperature range of interest is below 150 K, where charge carriers in the GaSb buffer/substrate freeze out, and its resistivity exponentially increases to be several orders of magnitude higher than that of the Sb films. Samples grown at 25\({}^{\circ}\)C and 60\({}^{\circ}\)C show similar temperature dependence with metallic behaviors in both \(R_{xx}\) and \(R_{yy}\), and the ratio of \(R_{xx}\) over \(R_{yy}\) is in the range of 2.1 - 2.5 at 4 K {Fig. 6(a)}. The longitudinal resistance of both crystalline orientations becomes more anisotropic as the substrate temperature increases. In samples grown at 90\({}^{\circ}\)C and 120\({}^{\circ}\)C, the ratio of \(R_{xx}\) over \(R_{yy}\) dramatically increases to be 8 and 545, respectively, at 4 K. We attribute the anisotropic transport features of the Sb films to the anisotropic structure formation. Figure 5: (a-d) SEM and (e-h) AFM images for the samples grown at 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C. Roughness of the grown film increases with the growth temperature. Line-like features (elongated Sb structures) along the [\(\bar{1}10\)] direction are more distinct on the samples grown at higher temperatures. The elongated, wire-like features along the [\(\bar{1}10\)] direction result in lower resistance in \(R_{yy}\). In contrast, electrons moving along the [110] direction see more grain boundaries and curvature on the surface, which result in higher resistance and non-metallic temperature dependence in \(R_{xx}\). In addition, longitudinal (\(R_{xx}\) and \(R_{yy}\)) and transverse (\(R_{xy}\)) resistances were measured with respect to the perpendicular magnetic field (\(H\)) at different temperatures. Figures 6(b) and (c) show representative longitudinal and transverse curves for the sample grown at 25\({}^{\circ}\)C. Measurements conducted on the other three samples revealed similar characteristics. The longitudinal resistance as a function of magnetic field (\(R\) vs \(H\)) displays a parabolic behavior in both \(R_{xx}\) and \(R_{yy}\) down to 4 K in all four samples. The transverse resistance as a function of magnetic field (\(R_{xy}\) vs \(H\)) exhibits a linear behavior. A p-type carrier density of \(n_{3D}=9.82\times 10^{20}\) cm\({}^{-3}\) and a hole mobility of 327.9 cm\({}^{2}\)/Vs were obtained for the sample grown at 25\({}^{\circ}\)C. The high carrier density and the metallic temperature dependence of \(R_{xx}\) confirm the semimetallic nature of the Sb films. It is likely that one type of carrier (holes) dominates the transport mechanism, while the contribution from the other carrier (electrons) is negligible. 
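For readers who wish to reproduce this kind of single-band analysis, the short sketch below shows how a carrier density and mobility can be estimated from the slope of a linear \(R_{xy}\)(\(H\)) curve together with the sheet resistance and film thickness. The input numbers are illustrative placeholders, chosen only to give values of the same order as those quoted above; they are not the measured data of this work.

```python
E = 1.602176634e-19  # elementary charge (C)

def single_band_hall(rxy_slope_ohm_per_T: float, sheet_resistance_ohm: float, thickness_m: float):
    """Standard single-carrier Hall analysis.

    rxy_slope_ohm_per_T: slope of the linear R_xy vs B curve (ohm/T)
    sheet_resistance_ohm: van der Pauw sheet resistance (ohm/sq)
    thickness_m: film thickness (m)
    Returns (carrier density in cm^-3, mobility in cm^2/Vs).
    """
    n_sheet = 1.0 / (E * rxy_slope_ohm_per_T)               # sheet carrier density, m^-2
    n_3d = n_sheet / thickness_m                            # volume carrier density, m^-3
    mobility = 1.0 / (E * n_sheet * sheet_resistance_ohm)   # drift mobility, m^2/(V s)
    return n_3d * 1e-6, mobility * 1e4                      # convert to cm^-3 and cm^2/(V s)

# Example with assumed inputs for a ~100 nm film (illustrative only):
n3d_cm3, mu_cm2Vs = single_band_hall(rxy_slope_ohm_per_T=0.064,
                                     sheet_resistance_ohm=1.9,
                                     thickness_m=100e-9)
```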
### Anisotropic Sb film growth: surface reconstruction and formation energies To gain further insight into the growth mechanisms of anisotropic Sb thin films, density functional theory (DFT) calculations were performed using the FHI-aims code [28, 29, 30, 31, 32], an all-electron code with localized numerical orbitals as the basis, tight basis sets, and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [33]. Figure 6: (a) Resistance (\(R_{xx}\) and \(R_{yy}\)) vs temperature curves reveal anisotropic electrical properties in all four samples grown at 25\({}^{\circ}\)C, 60\({}^{\circ}\)C, 90\({}^{\circ}\)C, and 120\({}^{\circ}\)C by using the square van der Pauw geometry in the [110] and [\(\bar{1}10\)] directions, respectively. (b) Longitudinal resistance (\(R_{xx}\)) vs magnetic field (\(H\)) shows a parabolic behavior. (c) Transverse resistance (\(R_{xy}\)) vs magnetic field (\(H\)) shows a linear behavior at cryogenic temperatures. We employed the Hirshfeld scheme for the van der Waals interactions, which are important for the description of the layer interactions and also the interaction between the 2D film and the substrate [34]. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) [35] algorithm was used for atomic relaxations with a condition that the maximum force component is less than \(5\times 10^{-3}\) eV/Å. We conducted a full relaxation of the GaSb unit cell structure and obtained a cubic lattice parameter of 6.125 Å. Using this structure, we generated a \(3/\sqrt{2}\times 1/\sqrt{2}\times 4\) (\(12.99\times 4.330\times 54.52\) Å\({}^{3}\), 35 Å vacuum) (\(001\))\({}_{c}\) slab, which underwent structure relaxation with fixed lattice parameters. Subsequently, a (1 \(\times\) 3) surface reconstructed (\(001\))\({}_{c}\) structure emerged, as shown in Fig. 7(a), aligning with findings from previous experimental studies [36, 37]. The 1 \(\times\) 3 supercell (12.99 \(\times\) 12.99 Å\({}^{2}\)) of the surface-reconstructed GaSb substrate was employed as the substrate for placing a fully relaxed rhombohedral Sb slab - we found that a supercell of the 2 \(\times\) 3 \(\times\) 2 (104)\({}_{r}\) slab with 12.38 \(\times\) 13.01 Å\({}^{2}\) matches well the lattice parameter of the substrate. The GaSb(\(001\))\({}_{c}\) substrate has a cubic structure with 4-fold symmetry. The bulk crystal structure along the [\(110\)]\({}_{c}\) orientation is identical to that along the [\(\bar{1}10\)]\({}_{c}\) orientation. The formation of elongated Sb structures along the [\(\bar{1}10\)]\({}_{c}\) orientation, but not along the [\(110\)]\({}_{c}\) orientation, on the GaSb(\(001\))\({}_{c}\) substrate can be attributed to the surface reconstruction of the GaSb substrate. Figure 7: First-principles modeling of Sb films on GaSb substrates. (a) Atomic structures of (1 \(\times\) 3) reconstructed cubic GaSb(\(001\)). The (104) plane of the rhombohedral Sb film is well aligned with the (001) plane of the GaSb substrate in two different orientations shown in (b) and (c). The formation energy of the Sb(\(104\))\({}_{r}\) (b) on the reconstructed substrate (8.34 meV/Å\({}^{2}\)) is found to be lower than that of (c) the 90\({}^{\circ}\)-rotated plane (10.9 meV/Å\({}^{2}\)). The upper panel shows the atomic configurations of the GaSb at the interface with the 2D Sb. The anisotropic Sb thin films were grown on the GaSb surface with a \((1\times 3)\) surface reconstruction, as shown in Fig. 7(a). 
In this reconstructed surface, atoms on the top layers are distorted to minimize the surface free energy, leading to prominent features along the \([\bar{1}10]_{c}\) orientation. As a result, the Sb atoms are more likely to align with the Ga and Sb atoms on the reconstructed surface, resulting in the observed elongated Sb structures along the \([\bar{1}10]_{c}\) orientation. We considered two different cases for the incorporation of the \((104)_{r}\) slab: one in which the lowest Sb atoms of the Sb \((104)_{r}\) slab are aligned with the top Ga and Sb atoms of the GaSb substrate, and another with the \((104)_{r}\) slab rotated \(90^{\circ}\). Both structures were fully relaxed, and the stable configurations are shown in Fig. 7. The formation energy of the 2D Sb was defined as \(\Delta E=(E_{total}-N_{GaSb}\times E_{GaSb}^{1\times 3}-N_{Sb}\times E_{Sb}^{bulk})/A\), where \(E_{total}\) is the total energy of the system consisting of 2D Sb on the GaSb substrate, \(E_{GaSb}^{1\times 3}\) is the energy per atom of the \(1\times 3\)-reconstructed GaSb, \(E_{Sb}^{bulk}\) is the energy per atom of the rhombohedral Sb bulk, a parent structure of the 2D film, \(N_{GaSb}\) and \(N_{Sb}\) are the numbers of atoms in the GaSb substrate and the Sb film, and \(A\) is the in-plane area of the supercell of the 2D Sb on GaSb. The formation energy of the Sb\((104)_{r}\) on the reconstructed substrate (8.34 meV/Å\({}^{2}\)) is found to be lower than that of the \(90^{\circ}\)-rotated plane (10.9 meV/Å\({}^{2}\)). This lower formation energy suggests that the observed Sb\((104)_{r}\) plane is energetically more favorable, further explaining the preferential growth of elongated Sb structures along the \([\bar{1}10]_{c}\) orientation. These calculations help to evaluate the energetics and structural stability of different configurations, which ultimately affect the growth behavior of the Sb films on GaSb\((001)_{c}\) substrates. This preferential growth of Sb structures along the \([\bar{1}10]_{c}\) orientation is further supported by the analysis of the formation energies. The DFT calculations reproduced the experimentally verified reconstruction of the substrate, which is the key to the growth of the Sb\((104)_{r}\) films with preferred orientation. ## 3 Conclusions In summary, we successfully mapped the growth regime of amorphous and crystalline Sb thin films on the GaSb\((001)\) surface. We found that there is a transitional region between \(250^{\circ}\)C and \(150^{\circ}\)C where Sb atoms start to nucleate with diffusion of Ga atoms to form GaSb patches and then an amorphous Sb film, confirmed by TEM. By avoiding Sb nucleation across the transitional region during the cooling process, crystalline Sb thin films were coherently grown on GaSb\((001)\) below \(120^{\circ}\)C. The crystal structure of the crystalline Sb thin films was found to be rhombohedral with the Sb\((104)_{r}\) plane parallel to the cubic GaSb\((001)_{c}\) plane. At the interface, atoms of the rhombohedral Sb layer closely align with the GaSb lattice along the \([\bar{1}10]\) zone axis, but not along the \([110]\) zone axis. This anisotropic lattice matching can be attributed to the (1 \(\times\) 3) surface reconstruction of the GaSb\((001)\) surface, as suggested by our DFT calculations. These calculations provide valuable insights into the formation energy and structural stability of different configurations, which in turn influence the growth behavior of Sb films on the GaSb\((001)_{c}\) substrates. 
The reduced formation energy of Sb\((104)_{r}\) on the reconstructed substrate further enhances the preferential growth of elongated Sb structures along the \([\bar{1}10]_{c}\) orientation, leading to streaky/spotty RHEED patterns and anisotropic electronic transport. Such anisotropy is more prominent on the samples grown at higher temperatures. The mean surface roughness of Sb thin film grown at room temperature is 2.5 times smaller than that of Sb thin film grown at \(120^{\circ}\)C. The ratio of resistance along \([110]\) direction over the resistance along \([\bar{1}10]\) is two orders of magnitude higher for Sb thin film grown at \(120^{\circ}\)C in comparison to the one grown at room temperature. The successful demonstration of epitaxial Sb thin films on cubic GaSb(001) substrates opens a new avenue to embed rhombohedral Sb films on various cubic substrate even with the fact that the cubic Sb phase is unstable. The systematic change in anisotropic features in the Sb thin films suggests optimal growth conditions for further studies and future application using Sb thin films. For topological phases induced by the quantum confinement effect, smooth surface of Sb thin films with minimal anisotropy is preferred to achieve uniform quantum confinement effect. For electrical and thermal transport, the crystalline orientation (\([\bar{1}10]\) versus \([110]\)) needs to be considered according to the device applications. ## Acknowledgement This work was supported by the Science Alliance at the University of Tennessee, Knoxville, through the Support for Affiliated Research Teams program, by the High-Potential Individuals Global Training Program (Task No. 2021-0-01580) through the Institute of Information and Communications Technology Planning & Evaluation (IITP) funded by the Republic of Korea Ministry of Science and ICT (MSIT) and by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division (MSED) (S. Y., M. B. and A.R.M) and by the U.S. Department of Energy (DOE), Office of Science, National Quantum Information Science Research Centers, Quantum Science Center (M.Y.). This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) and the Compute and Data Environment for Science (CADES) at the Oak Ridge National Laboratory, which are supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725 and of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0024568.
2306.09558
The dark energy as a natural property of cosmic polytropes -- A tutorial
Theoretical results on a conventional approach to the dark energy (DE) concept are reviewed and discussed. According to them, there is absolutely no need for a novel DE component in the Universe, provided that the associated matter-energy content is represented by a perfect fluid whose volume elements perform polytropic flows. When the thermodynamic energy of this fluid's internal motions is also considered as a source of the universal gravitational field, it compensates the DE needed to compromise spatial flatness in an accelerating Universe. The cosmological model with matter-energy content in the form of a polytropic fluid not only interprets the observations associated to the recent history of Universe expansion, but successfully confronts with all the current cosmological issues, thus arising as a viable alternative to $\Lm$CDM model.
Kostas Kleidis, Nikolaos K. Spyrou
2023-06-16T00:21:09Z
http://arxiv.org/abs/2306.09558v1
# The dark energy as a natural property of cosmic polytropes - A tutorial ###### Abstract Theoretical results on a conventional approach to the dark energy (DE) concept are reviewed and discussed. According to them, there is absolutely no need for a novel DE component in the Universe, provided that the associated matter-energy content is represented by a perfect fluid whose volume elements perform polytropic flows. When the thermodynamic energy of this fluid's internal motions is also considered as a source of the universal gravitational field, it compensates the DE needed to compromise spatial flatness in an accelerating Universe. The cosmological model with matter-energy content in the form of a polytropic fluid not only interprets the observations associated to the recent history of Universe expansion, but successfully confronts with all the current cosmological issues, thus arising as a viable alternative to the \(\Lambda\)CDM model. ## 1 Introduction According to a considerable amount of observational data accumulated in the last 25 years, it became evident that a uniformly distributed energy component, the so-called DE, is present in the Universe (see, e.g., [1, 2]). First, it was the high-precision distance measurements, performed with the aid of distant Supernova Type Ia (SNe Ia) events, which revealed that, in a dust Universe (i.e., under the assumption that the constituents of the Universe matter content do not interact with each other, so that their world lines remain eternally parallel), these standard candles look fainter (i.e., they are located farther) than what was theoretically predicted [3 - 31]. To interpret this result, Perlmutter et al. [2] and Riess et al. [9], following Carroll et al. [32], admitted that the long sought cosmological constant, \(\Lambda\), differs from zero; hence, apart from matter, the Universe contains also a uniformly distributed amount of energy [33]. The need for an energy component that does not cluster at any scale was subsequently verified by observations of galaxy clusters [34], the integrated Sachs-Wolfe effect [35], baryon acoustic oscillations (BAOs) [36, 37], weak gravitational lensing [38, 39], and the Lyman-\(\alpha\) forest [40]. If this energy component is due to the cosmological constant, it would necessarily introduce a repulsive gravitational force [41]; hence, the unexpected dimming of the SNe Ia standard candles was accordingly attributed to a recent acceleration of the Universe expansion (see, e.g., [42, 43]). At the same time, high-precision cosmic microwave background (CMB) observations suggested that our Universe is, in fact, a spatially-flat Robertson-Walker (RW) cosmological model [45 - 56]. This means that the overall energy density, \(\varepsilon\), of the Universe matter-energy content, in units of the critical energy density, \(\varepsilon_{c}=\rho_{c}c^{2}\) (the equivalent of the critical rest-mass density, \(\rho_{c}=\frac{3H_{0}^{2}}{8\pi G}\), where \(H_{0}\) is the Hubble parameter at the present epoch, \(G\) is Newton's gravitational constant, and \(c\) is the velocity of light), must be equal to unity, \(\Omega=\frac{\varepsilon}{\varepsilon_{c}}=1\), i.e., much larger than the measured value of the mass-density parameter, \(\Omega_{M}=\frac{\rho}{\rho_{c}}=0.302\pm 0.006\), where \(\rho\) is the rest-mass density [57]. Therefore, an extra amount of energy was also needed, to justify spatial flatness. 
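As a quick numerical illustration of this bookkeeping, the short script below evaluates the critical density \(\rho_{c}=3H_{0}^{2}/8\pi G\), the corresponding critical energy density \(\varepsilon_{c}=\rho_{c}c^{2}\), and the energy fraction left unaccounted for by matter alone. The adopted value of \(H_{0}\) is an assumption made only for this example and is not specified by the text at this point.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
Mpc = 3.086e22           # megaparsec, m

H0_kms_Mpc = 70.0        # assumed Hubble parameter, km/s/Mpc (illustrative value)
H0 = H0_kms_Mpc * 1e3 / Mpc                 # convert to s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical rest-mass density, kg/m^3
eps_c = rho_c * c**2                        # critical energy density, J/m^3

Omega_M = 0.302                             # measured mass-density parameter quoted above [57]
print(f"rho_c ~ {rho_c:.2e} kg/m^3, eps_c ~ {eps_c:.2e} J/m^3")
print(f"energy fraction not accounted for by matter alone: {1.0 - Omega_M:.3f}")
```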
Quantum vacuum could serve as such an energy basin, attributing an effective cosmological constant to the Universe, which would justify both spatial flatness and accelerated expansion [33], [41], [58]. Unfortunately, vacuum energy is \(10^{123}\) times larger than the associated measured quantity in curved spacetime [58]. Clearly, an approach other than the cosmological constant (namely, the DE) was needed to incorporate spatial flatness in an accelerating Universe; hence, (too) many models were proposed. An (only-) indicative list would involve quintessence [59], k-essence [60], and other (more exotic) scalar fields [61], tachyons [62], brane cosmology [63, 64], scalar-tensor gravity [65], \(f(R)\)-theory [66, 67], holographic principle [68 - 70], Chaplygin gas [71 - 74], Cardassian cosmology [75 - 77], multidimensional cosmology [78 - 81], mass-varying neutrinos [82, 83], cosmological principle deviations [84 - 87], and many other models (see, e.g., [8]), not to mention the associated cosmographic results [89 - 108]. In an effort to illuminate darkness, we point out that, long before the necessity of DE's invention, another dark component was (and still is) present in the composition of the Universe matter content, the long sought dark matter (DM). Today, there is absolutely no doubt as regards the existence of a non-luminous mass component in the Universe. The associated observational data involve high-precision measurements of the flattened galactic rotation curves [109, 110], weak gravitational lensing (WGL) [111], and modulation of the strong lensing effects due to massive elliptical galaxies [112]. On galactic scale, it was found that their dark haloes extend almost half the distance to the neighboring cosmic structures [113, 114], while, at even larger scales, the total mass of galaxy clusters is proved to be tenfold as compared to their baryonic mass [115 - 117]. The same is also true at the Universe level, as it is inferred from the combination of CMB observations [53] and light-chemicals' abundances [118]. In view of all the above, it is now well established that 85% of the Universe mass content is non-luminous [119]. The precise nature of DM constituents is still unknown. There are many candidates, from ordinary stellar-size black holes to Bose-Einstein condensates and ultralight axions [120]. Other interesting candidates are the weakly interacting massive particles (WIMPs) [121 - 123], which can be relevant to a potential detection of DM, because they annihilate through standard-model channels [124, 125]. However, regarding WIMPs, only weak-scale physics is involved, and, therefore, we argued that, practically, they do not interact with each other. Nevertheless, a few years ago, particle detectors [126, 127] and the Wilkinson Microwave Anisotropy Probe (WMAP) [128] revealed an unexpected excess of cosmic positrons, which might be due to WIMP collisions (see, e.g., [129 - 139]). In other words, WIMPs can be slightly collisional [140 - 144]. A cosmological model of self-interacting matter content could in fact unify DM and DE between them [145 - 158]. In this framework, Kleidis and Spyrou [159 - 163] admitted that the potential collisions of WIMPs maintain a tight coupling between them and their kinetic energy is re-distributed. On this assumption, the DM itself acquires fluid-like properties, and, hence, the Universe evolution is now driven by a fluid whose volume elements perform hydrodynamic flows (and not by dust). 
In our defense, the same assumption has been used also in modeling dark galactic haloes, significantly improving the corresponding velocity dispersion profiles [164 - 170]. If this is the case, the thermodynamic energy of the DM fluid internal motions should also be considered as a component of the Universe matter-energy content that drives cosmic expansion. We cannot help but wonder whether it could also compensate for the extra DE needed to compromise spatial flatness or not. This review article is organized as follows: In Section 2, we consider a spatially-flat cosmological model whose evolution is driven by a (perfect) fluid of DM, the volume elements of which perform polytropic flows [160 - 163]. Accordingly, an extra energy amount - the energy of internal motions - arises naturally and compensates the extra DE needed to compromise spatial flatness. Such a cosmological model involves a free parameter, the associated polytropic exponent, \(\Gamma\). In the case where \(\Gamma<1\) the cosmic pressure becomes negative and the Universe accelerates its expansion below a particular value of the cosmological redshift parameter, \(z\), the so-called _transition redshift_, \(z_{tr}\). In Section 3, we demonstrate that the polytropic DM model so assumed can confront with all the major issues of cosmological significance, since, in the constant pressure (i.e., \(\Gamma=0\)) limit, it fully reproduces all the predictions and the associated observational results concerning the _infernous_\(\Lambda\)CDM model [160 - 162]. Finally, we conclude in Section 4. ## 2 Polytropic flows in a cosmological DM fluid CMB has been proved a most valuable tool for reliable cosmological observations (see, e.g., [45 - 56]). At the present epoch, data arriving from various CMB probes strongly suggest that the Universe can be described by a spatially-flat RW model, i.e., \[ds^{2}=c^{2}dt^{2}-S^{2}(t)\left(dx^{2}+dy^{2}+dz^{2}\right)\;, \tag{1}\] where \(S(t)\) is the scale factor as a function of cosmic time, \(t\). The evolution of the cosmological model given by Eq. (1) depends on the exact form and the properties of its matter-energy content. According to Kleidis & Spyrou [159 - 163], in a Universe filled with interactive DM there is absolutely no need for an extra DE component. Indeed, provided that the collisions of the DM constituents are frequent enough, they can maintain a tight coupling between them so that their kinetic energy is re-distributed. In this case, the Universe matter content acquires thermodynamic properties and the curved spacetime evolution is driven by a perfect (DM) fluid, instead of pressureless dust [159]. Due to the cosmological principle, this fluid is practically homogeneous and isotropic at large scale, and, therefore, its pressure, \(p\), obeys an equation of state (EoS) of the form \(p=f(\rho)\) [160]. Now, the fundamental units of the Universe matter content are the volume elements of this (DM) fluid, i.e., closed thermodynamical systems with conserved number of particles [171]. Their motion in the interior of the cosmic fluid under consideration is determined by the conservation law \[T^{\mu\nu}_{\;\;;\nu}=0\;, \tag{2}\] where Greek indices refer to the four-dimensional spacetime, Latin indices refer to the three-dimensional space, the semicolon denotes covariant derivative, and \(T^{\mu\nu}\) is the energy-momentum tensor of the source that drives the Universe evolution. 
In the particular case of a perfect fluid, \(T^{\mu\nu}\) reads \[T^{\mu\nu}=(\varepsilon+p)u^{\mu}u^{\nu}-pg^{\mu\nu}\;, \tag{3}\] where \(u^{\mu}\) is the four-velocity (\(u_{\mu}u^{\mu}=1\)), \(g^{\mu\nu}\) is the Universe metric tensor, and \(\varepsilon\) is the total energy density of the fluid, which, now, is decomposed to \[\varepsilon=\epsilon(\rho,T)+\rho\;U(T) \tag{4}\] (see, e.g., [172], pp. 81 - 84 and 90 - 94). In Eq. (4), \(T\) is the absolute temperature, \(U(T)\) is the energy of this fluid's internal motions, and \(\epsilon(\rho,T)\) represents all forms of energy besides that of internal motions. In view of Eq. (4), Eqs. (2) represent the hydrodynamic flows of volume elements in the interior of a perfect-fluid source as they are traced by an observer comoving with cosmic expansion in a maximally symmetric cosmological model (see, e.g., [173], p. 91). The evolution of such a model (see, e.g., [173] pp. 61, 62) can be determined by the Friedmann equation of the classical Friedmann-Robertson-Walker (FRW) cosmology \[H^{2}=\frac{8\pi G}{3c^{2}}\varepsilon\:, \tag{5}\] where \[H=\frac{\dot{S}}{S} \tag{6}\] is the Hubble parameter in terms of \(S(t)\) and the dot denotes differentiation with respect to cosmic time. To solve Eq. (5), first we need to determine \(\varepsilon\), in other words \(\epsilon\) and \(U\). To do so, we can use the first law of thermodynamics in curved spacetime, \[dU+pd\left(\frac{1}{\rho}\right)={\cal C}dT \tag{7}\] (see, e.g., [172], p. 83), where \({\cal C}\) is the specific heat of the cosmic fluid, in connection with the zeroth component of Eq. (2), i.e., the continuity equation \[\dot{\varepsilon}+3\frac{\dot{S}}{S}(\varepsilon+p)=0\:. \tag{8}\] Finally, we need to decide on the form of the pressure as a function of \(\rho\). Accordingly, we admit that the volume elements of the Universe matter content perform polytropic flows [160 - 163]. Polytropic process is a reversible thermodynamic process in which the specific heat of a closed system evolves in a well-defined manner (see, e.g., [174], p. 2). For \({\cal C}=constant\), the system possesses only one independent state variable, the rest-mass density, and the EoS for a perfect fluid, \(p\propto\rho T\), results in \[p = p_{0}\left(\frac{\rho}{\rho_{0}}\right)^{\Gamma} \tag{9}\] \[T = T_{0}\left(\frac{\rho}{\rho_{0}}\right)^{\Gamma-1} \tag{10}\] (see, e.g., [160]), where \(p_{0}\), \(\rho_{0}\), and \(T_{0}\) denote the present-time values of pressure, rest-mass density, and temperature, respectively, and \(\Gamma\) is the polytropic exponent. In such a model, Eq. (7) yields \[U=U_{0}\left(\frac{\rho}{\rho_{0}}\right)^{\Gamma-1}, \tag{11}\] where \[U_{0}={\cal C}T_{0}+\frac{1}{\Gamma-1}\frac{p_{0}}{\rho_{0}} \tag{12}\] is the present-time value of the cosmic fluid internal energy. In view of Eqs. (4) and (11), Eq. (8) is written in the form \[\Gamma U_{0}\left(\dot{\rho}+3\frac{\dot{S}}{S}\rho\right)+\dot{\epsilon}+3 \frac{\dot{S}}{S}\epsilon-3(\Gamma-1)\rho_{0}{\cal C}T_{0}\frac{\dot{S}}{S} \left(\frac{\rho}{\rho_{0}}\right)^{\Gamma}=0\:. \tag{13}\] Since the total number of particles in a closed system (volume element) is conserved, we furthermore have \[\dot{\rho}+3\frac{\dot{S}}{S}\rho=0\Rightarrow\rho=\rho_{0}\left(\frac{S_{0}}{ S}\right)^{3} \tag{14}\] and, therefore, Eq. (13) results in \[\epsilon=\rho_{0}c^{2}\left(\frac{S_{0}}{S}\right)^{3}-\rho_{0}{\cal C}T_{0} \left(\frac{S_{0}}{S}\right)^{3\Gamma}. \tag{15}\] By virtue of Eqs. 
(11) - (15), the total energy density (4) of the polytropic DM model under consideration is written in the form \[\varepsilon=\rho_{0}c^{2}\left(\frac{S_{0}}{S}\right)^{3}+\frac{p_{0}}{\Gamma -1}\left(\frac{S_{0}}{S}\right)^{3\Gamma}=\rho c^{2}+\frac{1}{\Gamma-1}\:p \tag{16}\] and the Friedmann equation (5) results in \[\left(\frac{H}{H_{0}}\right)^{2}=\Omega_{M}\left(\frac{S_{0}}{S}\right)^{3} \left[1+\frac{1}{\Gamma-1}\frac{p_{0}}{\rho_{0}c^{2}}\left(\frac{S_{0}}{S} \right)^{3(\Gamma-1)}\right]. \tag{17}\] Extrapolation of Eq. (17) to the present epoch, yields the corresponding value of the polytropic DM fluid pressure, i.e., \[p_{0}=\rho_{0}c^{2}(\Gamma-1)\frac{1-\Omega_{M}}{\Omega_{M}}\:. \tag{18}\] In view of Eq. (18), for \(\Gamma<1\), the pressure (9) is negative and so might be the quantity \(\varepsilon+3p\), something that would lead to \(\ddot{S}>0\) (see, e.g., [43]). In other words, for \(\Gamma<1\), the polytropic DM model under consideration can accelerate its expansion. At the same time, Eq. (16) reads \[\varepsilon=\rho_{c}c^{2}\left[\Omega_{M}\left(\frac{S_{0}}{S}\right)^{3}+(1- \Omega_{M})\left(\frac{S_{0}}{S}\right)^{3\Gamma}\right]\:, \tag{19}\] the extrapolation of which to the present epoch suggests that the total energy density parameter of the polytropic DM model under consideration is exactly unity, i.e., \[\Omega_{0}=\frac{\varepsilon_{0}}{\varepsilon_{c}}=\frac{\rho_{c}c^{2}}{\rho_{c }c^{2}}\left[\Omega_{M}+(1-\Omega_{M})\right]=1\:. \tag{20}\] We see that, the polytropic DM model with \(\Gamma<1\) might be an excellent (conventional) solution to the DE issue, by compromising both spatial flatness (\(\Omega_{0}=1\)) and accelerated expansion (\(\varepsilon+3p<0\)) of the Universe in a unique theoretical framework. ## 3 Predictions and outcomes of the polytropic DM model In this Section, we explore the properties of a polytropic DM model with \(\Gamma<1\), in association to all the major issues of cosmological significance. To do so, unless otherwise is stated, in what follows we admit that \(\Omega_{M}=0.274\), as suggested by the _nine years WMAP survey_[54]. This value differs from the corresponding _Planck_ result, \(\Omega_{M}=0.308\)[55, 56], and/or the most recent observational one, \(\Omega_{M}=0.302\), of the _Dark Energy Survey_ (DES) consortium [57], while resting quite far also from its _Pantheon Compilation_ counterpart, \(\Omega_{M}=0.306\)[30]. It is evident that the exact value of \(\Omega_{M}\), as also of many other parameters of cosmological significance (see, e.g., [175]), is still a matter of debate. ### The accelerated expansion of the Universe Upon consideration of Eq. (18), Eq. (17) is written in the form \[\left(\frac{H}{H_{0}}\right)^{2}=\left(\frac{S_{0}}{S}\right)^{3}\left[\Omega _{M}+(1-\Omega_{M})\left(\frac{S}{S_{0}}\right)^{3(1-\Gamma)}\right] \tag{21}\] or, it terms of the cosmic scale factor, in the more convenient form \[\left[\frac{d}{dt}\left(\frac{S}{S_{0}}\right)^{3/2}\right]^{2}=\frac{1}{t_{ EdS}^{2}}\left\{\Omega_{M}+(1-\Omega_{M})\left[\left(\frac{S}{S_{0}}\right)^{3 /2}\right]^{2(1-\Gamma)}\right\}\:, \tag{22}\] where \(t_{EdS}=\frac{2}{3H_{0}}\) is the age of the Universe in the Einstein-de Sitter (EdS) model. Eq. 
(22), can be solved in terms of hypergeometric functions, as follows \[\left(\frac{S}{S_{0}}\right)^{\frac{3}{2}}\:{}_{2}F_{1}\left(\frac{1}{2(1- \Gamma)}\:,\:\frac{1}{2}\:;\:\frac{3-2\Gamma}{2(1-\Gamma)}\:;-\left(\frac{1- \Omega_{M}}{\Omega_{M}}\right)\left[\frac{S}{S_{0}}\right]^{3(1-\Gamma)} \right)=\sqrt{\Omega_{M}}\left(\frac{t}{t_{EdS}}\right) \tag{23}\] (cf. [176], pp. 1005 - 1008). For \(\Gamma<1\), the resulting hypergeometric series converges absolutely within the circle of (unit) radius \(\left|\frac{S}{S_{0}}\right|\leq 1\) (cf. [177], p. 556). There are two limiting cases of Eq. (23), of particular interest: (i) For \(\Omega_{M}=1\), it yields \(S=S_{0}\left(\frac{t}{t_{EdS}}\right)^{2/3}\), i.e., the scale factor of the EdS model. (ii) For \(\Gamma=0\), (i.e., in the \(\Lambda\)CDM-like limit), Eq. (23) is written in the form \[\left(\frac{S}{S_{0}}\right)^{\frac{3}{2}}\,_{2}F_{1}\left(\frac{1}{2}\,,\, \frac{1}{2}\;;\,\frac{3}{2}\;;\,-\left(\frac{1-\Omega_{M}}{\Omega_{M}}\right) \left[\frac{S}{S_{0}}\right]^{3}\right)=\sqrt{\Omega_{M}}\left(\frac{t}{t_{ EdS}}\right)\;, \tag{24}\] which, upon consideration of the identity \[{}_{2}F_{1}\left(\frac{1}{2}\,,\,\frac{1}{2}\;;\,\frac{3}{2}\,;\,-x^{2}\right) =\frac{1}{x}\sinh^{-1}(x) \tag{25}\] (cf. [176], Eq. 9.121.28, p. 1007 and [177], Eq. 15.1.7, p. 556), where in our case, \(x=\sqrt{\left(\frac{1-\Omega_{M}}{\Omega_{M}}\right)\left[\frac{S}{S_{0}} \right]^{3}}\), results in \[S(t)=S_{0}\left(\frac{\Omega_{M}}{1-\Omega_{M}}\right)^{1/3}\sinh^{2/3}\left( \sqrt{1-\Omega_{M}}\frac{t}{t_{EdS}}\right)\;. \tag{26}\] For \(1-\Omega_{M}=\Omega_{\Lambda}\), Eq. (26) represents the scale factor of the \(\Lambda\)CDM model (cf. Eq. 5 of [178]), as it should. On the other hand, at the present epoch, i.e., when \(t=t_{0}\) and \(S=S_{0}\), Eq. (23) reads \[\frac{t_{0}}{t_{EdS}}=\frac{1}{\sqrt{\Omega_{M}}}\,_{2}F_{1}\left(\frac{1}{2 (1-\Gamma)}\,,\,\frac{1}{2}\;;\,1+\frac{1}{2(1-\Gamma)}\,;\,-\frac{1-\Omega_ {M}}{\Omega_{M}}\right)\,. \tag{27}\] With the aid of Eq. (27) we can eliminate \(t_{EdS}\) from Eq. (23), to obtain the scale factor of the polytropic DM model (in units of \(S_{0}\)) as a function of cosmic time (in units of \(t_{0}\)), i.e., \[\left(\frac{S}{S_{0}}\right)^{3/2}\,\frac{{}_{2}F_{1}\left(\frac{1}{2(1- \Gamma)}\,,\,\frac{1}{2}\;;\,\frac{3-2\Gamma}{2(1-\Gamma)}\;;\,-\left(\frac{ 1-\Omega_{M}}{\Omega_{M}}\right)\left[\frac{S}{S_{0}}\right]^{3(1-\Gamma)} \right)}{{}_{2}F_{1}\left(\frac{1}{2(1-\Gamma)}\,,\,\frac{1}{2}\;;\,\frac{3-2 \Gamma}{2(1-\Gamma)}\;;\,-\frac{1-\Omega_{M}}{\Omega_{M}}\right)}=\frac{t}{t _{0}}\,. \tag{28}\] The evolution of \(S(t)\) (in units of \(S_{0}\)) parametrized by \(\Gamma<1\), is given in Fig. 1. We observe that, in all cases, there is a value of \(t<t_{0}\) (somewhere around \(t\simeq 0.75\,t_{0}\)), above which, the function \(S(t)\) becomes concave, i.e., \(\ddot{S}>0\). This is a very important result, indicating that the polytropic DM model with \(\Gamma<1\) definitely transits from deceleration to acceleration at a certain time, (quite) close to the present epoch, \(t_{0}\). ### The age of the Universe By construction, Eq. (27), represents the age, \(t_{0}\), of the polytropic DM Universe in units of \(t_{EdS}\). The behaviour of \(t_{0}\) as a function of the polytropic exponent \(\Gamma<1\), is presented in Fig. 2. In the \(\Lambda\)CDM-like (\(\Gamma=0\)) limit, Eq. (27) yields \[t_{0}=t_{EdS}\frac{1}{\sqrt{1-\Omega_{M}}}\sinh^{-1}\sqrt{\frac{1-\Omega_{M}} {\Omega_{M}}}\,. 
\tag{29}\] For \(\Omega_{M}=0.274\), Eq. (29) results in \(t_{0}=1.483\,t_{EdS}\), which, adopting that \(H_{0}\simeq 67.5\) km/sec/Mpc (see, e.g., [54], [57]), yields \(t_{0}=13.79\) Gyr. This theoretically predicted value of \(t_{0}\) is in excellent agreement with the corresponding observational result [54 - 57] for the age of the \(\Lambda\)CDM Universe. In fact, from Fig. 2 we see that, for every \(\Gamma<1\), the age of the polytropic DM model is always larger than that of its EdS counterpart; in other words, the polytropic DM model so assumed no longer suffers from what is referred to as the age problem.

Figure 1: The scale factor, \(S\), of the polytropic DM model in units of its present-time value, \(S_{0}\), as a function of cosmic time \(t\) (in units of \(t_{0}\)), for \(\Gamma=0.5\) (orange), \(\Gamma=0\) (dashed), \(\Gamma=-0.5\) (blue), \(\Gamma=-1\) (red), and \(\Gamma=-2\) (green). For each and every curve, there is a value of \(t<t_{0}\) above which \(S(t)\) becomes concave, i.e., the polytropic DM Universe accelerates its expansion.

Figure 2: The age of the polytropic DM model, \(t_{0}\), in units of \(t_{EdS}\), as a function of the polytropic exponent \(\Gamma<1\) (red solid line). Notice that, for every \(\Gamma<1\), we have \(t_{0}>t_{EdS}\), with \(t_{0}\) approaching \(t_{EdS}\) only in the isothermal (\(\Gamma\to 1\)) limit. The horizontal solid line denotes the age of the Universe in the \(\Lambda\)CDM-like (\(\Gamma=0\)) limit of the polytropic DM model, i.e., \(t_{0}=1.483\:t_{EdS}\).

### Transition to acceleration

In the polytropic DM model under consideration, the Hubble parameter (21) in terms of the cosmological redshift, \(1+z=\frac{S_{0}}{S}\), is written in the form \[H=H_{0}(1+z)^{\frac{3}{2}}\left[\Omega_{M}+\frac{1-\Omega_{M}}{(1+z)^{3(1-\Gamma)}}\right]^{1/2}. \tag{30}\] In view of Eq. (30), the deceleration parameter \[q(z)=\frac{dH/dz}{H(z)}(1+z)-1 \tag{31}\] reads \[q(z)=\frac{1}{2}\left[1-\frac{3(1-\Gamma)(1-\Omega_{M})}{\Omega_{M}(1+z)^{3(1-\Gamma)}+(1-\Omega_{M})}\right]\,. \tag{32}\] For \(z=0\) (i.e., at the present epoch), we obtain \[q_{0}=\frac{1}{2}\left[1-3(1-\Gamma)(1-\Omega_{M})\right]\,, \tag{33}\] which, in the \(\Lambda\)CDM-like (i.e., \(\Gamma=0\)) limit, yields \(q_{0}=-0.54\). This result lies well within the associated observationally determined range of \(q_{0}\), i.e., \(q_{0}=-0.53^{+0.15}_{-0.13}\) [179], and, in fact, reproduces the corresponding (i.e., theoretically-derived) \(\Lambda\)CDM result, that is, \(q_{0}=-0.55\pm 0.01\) [180]. But, what is more important, the condition \(q(z)\leq 0\) reveals a particular value of \(z\), the so-called transition redshift, \[z_{tr}=\left[(2-3\Gamma)\frac{1-\Omega_{M}}{\Omega_{M}}\right]^{\frac{1}{3(1-\Gamma)}}-1\:, \tag{34}\] below which \(q(z)\) becomes negative, i.e., the Universe accelerates its expansion. In the \(\Lambda\)CDM-like (\(\Gamma=0\)) limit, Eq. (34) yields \(z_{tr}=0.744\), which (i) lies well within the range of the corresponding \(\Lambda\)CDM result, namely, \(z_{tr}=0.752\pm 0.041\) [29] and (ii) actually reproduces the associated result of Muccino et al. [181], i.e., \(z_{tr}=0.739^{+0.065}_{-0.089}\), obtained by applying a model-independent method to a number of SNeIa, BAOs, and GRB data. Furthermore, by virtue of Eq. (34), the condition \(z_{tr}\geq 0\) imposes a more stringent constraint on the potential values of \(\Gamma\), namely, \[\Gamma\leq\frac{1}{3}\left[2-\frac{\Omega_{M}}{1-\Omega_{M}}\right]\:.
\tag{35}\] For \(\Omega_{M}=0.274\), Eq. (35) yields \(\Gamma\leq 0.541\). Apparently, the polytropic DM model with \(\Gamma\leq 0.541\) accelerates its expansion at cosmological redshifts lower than a transition value, without the need of any novel DE component. The behaviour of \(z_{tr}\), as a function of the parameter \(\Gamma\leq 0.541\), is presented in Fig. 3. ### The total EoS parameter In the \(\Lambda\)CDM-like (\(\Gamma=0\)) limit, our model actually reproduces the behaviour of the (so-called) total EoS parameter, \[w_{tot}\equiv\frac{p}{\varepsilon}\:, \tag{36}\] as a function of \(z\)[88]. For \(\Gamma=0\), upon consideration of Eqs. (14), (16), and (18), Eq. (36) yields \[w_{tot}\equiv\frac{p}{\varepsilon}=-\frac{1-\Omega_{M}}{1-\Omega_{M}+\Omega_ {M}(1+z)^{3}}\:, \tag{37}\] the behaviour of which, in terms of the cosmological redshift, is depicted in Fig. 4. Today, i.e., for \(z=0\), we have \(w_{tot}=-\left(1-\Omega_{M}\right)=-\Omega_{\Lambda}\), in complete correspondence to the \(\Lambda\)CDM result, \[w_{tot}=\frac{p_{tot}}{\rho_{tot}}=\frac{p_{\Lambda}}{\rho_{M}+\rho_{\Lambda} }=\frac{-\rho_{\Lambda}}{\rho_{M}+\rho_{\Lambda}}=\frac{-\Omega_{\Lambda}}{ \Omega_{M}+\Omega_{\Lambda}}=-\Omega_{\Lambda} \tag{38}\] (in connection, see, e.g., [88]). ### The range of values of the polytropic exponent The isentropic velocity of sound is defined as \[c_{s}^{2}=c^{2}\left(\frac{\partial p}{\partial\varepsilon}\right)_{\mathcal{S}} \tag{39}\] (see, e.g., [182], p. 52), where \(\left(\frac{\partial p}{\partial\varepsilon}\right)_{\mathcal{S}}\leq 1\), in order to avoid violation of causality [183]. In the polytropic DM model, the total energy density of the Universe matter-energy content is related to pressure by Eq. (16), whose partial differentiation yields the associated velocity of sound as a function of \(z\), \[\left(\frac{c_{s}}{c}\right)^{2}=-\frac{\Gamma(1-\Gamma)\frac{1-\Omega_{M}}{ \Omega_{M}}}{(1+z)^{3(1-\Gamma)}+\Gamma\frac{1-\Omega_{M}}{\Omega_{M}}}\;. \tag{40}\] Now, the condition for a positive (or zero) velocity-of-sound square imposes a major constraint on \(\Gamma\), i.e., \[\left(\frac{c_{s}}{c}\right)^{2}\geq 0\Leftrightarrow\Gamma\leq 0\;, \tag{41}\] Figure 3: The transition redshift, \(z_{tr}\), in the polytropic DM modelin terms of the associated exponent, \(\Gamma\) (blue solid curve). For \(\Gamma\leq-0.38\) (red dashed curve), the Universe enters into the phantom realm [160]. while, admitting that, today, DM is _cold_, i.e., at \(z=0\), \[\left(\frac{c_{s}}{c}\right)^{2}<\frac{1}{3}\,, \tag{42}\] we obtain \[\Gamma>-\frac{2}{3}\left[\sqrt{1+\frac{3}{4}\frac{\Omega_{M}}{1-\Omega_{M}}}-1 \right]=-0.1\;. \tag{43}\] Eqs. (41) and (43) significantly narrow the potential range of values of the polytropic exponent, which, from now on, rests in \[-0.1<\Gamma\leq 0\,. \tag{44}\] Hence, in the polytropic DM model under consideration, the associated polytropic exponent, if not zero, is definitely negative and very close to zero. Notice that, in view of Eq. (44), Eq. (9) is in excellent agreement with the associated result for a generalized Chaplygin gas, \(p\sim-\rho^{\alpha}\), arising from the combination of X-ray and SNe Ia measurements with data from Fanaroff-Riley type IIb radio-galaxies, namely, \(\alpha=-0.09^{+0.54}_{-0.33}\)[184]. Figure 4: The total EoS parameter, \(w_{tot}\),in terms of \(z\), in the context of the \(\Lambda\)CDM-like (i.e., \(\Gamma=0\)) limit of the polytropic DM model. 
Notice that, today (i.e., at \(z=0\)), \(w_{tot}\approx-0.7\), while, for larger values of \(z\), it approaches zero, in complete agreement to \(\Lambda\)CDM cosmology [88]. ### The jerk parameter A dimensionless third (time-)derivative of the scale factor, \(S(t)\), the so-called _jerk parameter_, \[j(S)=\frac{1}{SH^{3}}\frac{d^{3}S}{dt^{3}} \tag{45}\] (see, e.g., [185, 186]), can be used to demonstrate the departure of the polytropic DM model under consideration from its \(\Lambda\)CDM counterpart. The reason is that, for the \(\Lambda\)CDM model \(j=1\) for every \(z\). Hence, any deviation of \(j\) from unity enables us to constrain the departure of the model so assumed from the \(\Lambda\)CDM model in an effective manner [186]. In terms of the deceleration parameter, \(j\) is written in the form \[j(q)=q(2q+1)+(1+z)\frac{dq}{dz} \tag{46}\] (see, e.g., [187]), which, in the polytropic DM model, i.e., upon consideration of Eq. (32), yields \[j(z)=1-\frac{9}{2}\Gamma\frac{(1-\Gamma)}{1+\frac{\Omega_{M}}{1-\Omega_{M}}(1 +z)^{3(1-\Gamma)}}\:. \tag{47}\] Notice that, for \(\Gamma=0\), \(j=1\); hence, once again, the \(\Gamma=0\) limit of the polytropic DM model under consideration does reproduce the \(\Lambda\)CDM model. Now, by virtue of Eq. (41), the jerk parameter (47) reads \[j(z)=1+\frac{9}{2}|\Gamma|\frac{(1+|\Gamma|)}{1+\frac{\Omega_{M}}{1-\Omega_{M }}(1+z)^{3(1+|\Gamma|)}}\:, \tag{48}\] i.e., it is always positive. This is a very important result, since it guarantees that, at \(z_{tr}\), a (phase) transition of the Universe expansion from deceleration to acceleration actually takes place (in connection, see [186], [188]). Two values of \(j(z)\) are of particular interest: (i) Its present-time (\(z=0\)) value, given by \[j_{0}\equiv j(z=0)=1+\frac{9}{2}|\Gamma|(1+|\Gamma|)\:, \tag{49}\] which, in view of Eq. (44) results in \[1\leq j_{0}<1.495\:, \tag{50}\] clearly discriminating the \(\Gamma\neq 0\) polytropic DM model from its \(\Lambda\)CDM counterpart, and (ii) the value of the jerk parameter at transition (\(z=z_{tr}\)), which, upon consideration of Eq. (34), it is given by \[j_{tr}\equiv j(z_{tr})=1+\frac{3}{2}|\Gamma|\:. \tag{51}\] In this case, we address (once again) to Muccino et al. [181] to use the corresponding model-independent constraints on \(j_{tr}\), in order to estimate the value of the polytropic index, \(|\Gamma|\), in a model-independent way. Accordingly, adopting the best-fit value \(j_{tr}=1.028\) of [181], obtained by means of the DHE method (see [188]), Eq. (51) yields \[|\Gamma|=0.02\:,\] while, adopting the corresponding DDPE value [188], \(j_{tr}=1.041\), Eq. (51) results in \[|\Gamma|=0.03\:.\] Both values, not only favour a \(\Gamma\neq 0\) polytropic DM model, but also, are well-within range of Eq. (44), i.e., once again, compatibility of the polytropic DM model with observation is well established. In view of [186], we cannot help but wondering whether the polytropic DM model with a jerk parameter given by Eq. (48) is also compatible to the Union 2.1 Compilation of the SNeIa data or not. ### The Hubble diagram of the SNe Ia data Today, (too) many samples of SNe Ia data are used to scrutinize the viability of the DE models proposed. One of the most extended is the Union 2.1 Compilation [29], consisting of 580 SNe Ia events, being inferior only to the (so-called) Pantheon Compilation [30]. 
We shall use the former sample to demonstrate compatibility of the theoretically derived (in the context of the polytropic DM model) formula for the distance modulus, \[\mu(z)=5\log\left(\frac{d_{L}}{Mpc}\right)+25 \tag{52}\] (see, e.g., [173], Eqs. 13.10 and 13.12, p. 359), where \[d_{L}(z)=c(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})} \tag{53}\] is the luminosity distance of a light source measured in megaparsecs (see, e.g., [189], p. 76), with the observationally determined Hubble diagram of the SNe Ia standard candles [29]. Upon consideration of Eq. (30), Eq. (53) results in (see, e.g., [176], pp. 1005 - 1008) \[d_{L}(z)=\frac{2c}{H_{0}}\frac{1}{\sqrt{1-\Omega_{M}}}\frac{1+z}{2-3\Gamma}\left[(1+z)^{\frac{2-3\Gamma}{2}}\times\right.\] \[{}_{2}F_{1}\left(\frac{2-3\Gamma}{6(1-\Gamma)}\,,\,\frac{1}{2}\,;\,\frac{8-9\Gamma}{6(1-\Gamma)}\,;\,-\left[\frac{\Omega_{M}}{1-\Omega_{M}}\right](1+z)^{3(1-\Gamma)}\right)-\] \[{}_{2}F_{1}\left(\frac{2-3\Gamma}{6(1-\Gamma)}\,,\,\frac{1}{2}\,;\,\frac{8-9\Gamma}{6(1-\Gamma)}\,;\,-\left[\frac{\Omega_{M}}{1-\Omega_{M}}\right]\right)\right]\;, \tag{54}\] where, once again, \({}_{2}F_{1}\) is the Gauss hypergeometric function. Using Eq. (54), we overplot \(\mu(z)\) on the Hubble diagram of the Union 2.1 Compilation [29] to obtain Fig. 5. We see that, in the polytropic DM model under consideration, the various theoretical curves representing the distance modulus fit the entire Union 2.1 dataset quite accurately. In other words, there is absolutely no disagreement between the theoretical prediction of the SNe Ia distribution in the polytropic DM model so assumed and the corresponding observational result.

Figure 5: Hubble diagram of the Union 2.1 Compilation SNeIa data. Overplotted are three theoretically-determined curves representing the distance modulus in the polytropic DM model, i.e., Eq. (54).

### The CMB shift parameter

The CMB shift parameter, \(\mathcal{R}\), is widely used as a probe of DE, due to the fact that different cosmological models can result in an almost identical CMB power spectrum, if they have identical values of \(\mathcal{R}\) [190]. For a spatially-flat cosmological model, the CMB shift parameter is given by \[{\cal R}=\sqrt{\Omega_{M}}\int_{0}^{z_{*}}\frac{dz}{H(z)/H_{0}}\,, \tag{55}\] where \(z_{*}\) is the value of the cosmological redshift at photon decoupling. In the polytropic DM model under consideration, i.e., by virtue of Eq. (30), Eq. (55) is written in the form \[{\cal R}=\int_{0}^{z_{*}}\frac{(1+z^{\prime})^{\frac{3}{2}|\Gamma|}\,dz^{\prime}}{\left[(1-\Omega_{M})+\Omega_{M}\left(1+z^{\prime}\right)^{3(1+|\Gamma|)}\right]^{1/2}}\,, \tag{56}\] which, in terms of hypergeometric functions (see e.g., [176], pp. 1005 - 1008), results in \[{\cal R}=\frac{2}{\left(2+3|\Gamma|\right)\sqrt{1-\Omega_{M}}}\left[(1+z_{*})^{\frac{2+3|\Gamma|}{2}}\times\right.\] \[\left.{}_{2}F_{1}\left(\frac{2+3|\Gamma|}{6(1+|\Gamma|)}\,,\,\frac{1}{2}\,;\,\frac{8+9|\Gamma|}{6(1+|\Gamma|)}\,;\,-\left[\frac{\Omega_{M}}{1-\Omega_{M}}\right](1+z_{*})^{3(1+|\Gamma|)}\,\right)-\right.\] \[\left.{}_{2}F_{1}\left(\frac{2+3|\Gamma|}{6(1+|\Gamma|)}\,,\,\frac{1}{2}\,;\,\frac{8+9|\Gamma|}{6(1+|\Gamma|)}\,;\,-\left[\frac{\Omega_{M}}{1-\Omega_{M}}\right]\right)\right]. \tag{57}\] To determine the value of \({\cal R}\), we adopt the _nine-year WMAP survey_ result [191] that \(z_{*}=1091.64\pm 0.47\). Accordingly, for \(\Gamma=0\), Eq.
(57) yields \[{\cal R}=1.7342\,, \tag{58}\] while, according to the _nine-year WMAP survey_[191], the value of the shift parameter in the standard \(\Lambda\)CDM cosmology is given by \[{\cal R}=1.7329\pm 0.0058\,. \tag{59}\] In other words, the theoretical value of the shift parameter in the \(\Lambda\)CDM-like limit of the polytropic DM model reproduces, to high accuracy, the corresponding result obtained by fitting the CMB data to the standard \(\Lambda\)CDM model; hence, in the limit of \(\Gamma=0\), the polytropic DM model under consideration may very well reproduce also the observed CMB spectrum. ### The spectral index of cosmological perturbations The dimensionless power spectrum of rest-mass density perturbations in an isotropic Universe is defined as \[\Delta^{2}(\delta)=\frac{1}{2\pi^{2}}k^{3}|\delta(k)|^{2}, \tag{60}\] where \(\delta=\frac{\delta\rho}{\rho}\) is the density contrast and \(k\) is the associated wavenumber (see, e.g., [189], pp. 464-469). In a similar fashion, the metric counterpart of Eq. (60) is given by \[\Delta^{2}(\phi)=\frac{1}{2\pi^{2}}k^{3}|\phi(k)|^{2}, \tag{61}\] where \(\phi\) denotes the perturbation around a spatially-flat metric [162]. Usually, \(\Delta^{2}(\delta)\) is parametrized as \[\Delta^{2}(\delta)\sim k^{3+n_{s}} \tag{62}\] (see, e.g., [192], pp. 291, 292), where \(n_{s}\) is the scalar spectral index [193]. Once again, we can test the validity of the polytropic DM model by reproducing the spectrum of rest-mass density perturbations in the associated \(\Lambda\)CDM-like limit. The reason is that, most of the observational data accumulated so far, are model dependent [175] and, currently, the most popular model is the so-called concordance, i.e., \(\Lambda\)CDM model [13]. Accordingly, as regards the dimensionless power spectrum of cosmological perturbations in the \(\Lambda\)CDM-like limit of the polytropic DM model under consideration, we have \[\frac{\Delta^{2}(\delta)}{\Delta^{2}(\phi)}=4\left[1+\frac{1}{3}\left(\frac{k _{ph}}{H}\right)^{2}\right]^{2}\;, \tag{63}\] where \(k_{ph}=k/S(t)\) is the associated physical wavenumber [162]. The behaviour of Eq. (63) as a function of \(k_{ph}\) (in units of \(H\)) is depicted in Fig. 6 (red solid line). Accordingly, we observe that for \(\left(\frac{k_{ph}}{H}\right)\geq 5\), i.e., for every physical wavelength less than the horizon length (dashed vertical line), the quantity \(\Delta^{2}(\delta)/\Delta^{2}(\phi)\) exhibits a prominent power-law dependence on \(k_{ph}\), of the form \[\frac{\Delta^{2}(\delta)}{\Delta^{2}(\phi)}\sim\left(\frac{k_{ph}}{H}\right)^{ 3.970} \tag{64}\] and, therefore, \[\Delta^{2}(\phi)\sim\frac{\Delta^{2}(\delta)}{\left(\frac{k_{ph}}{H}\right)^{ 3.970}}=\frac{\left(\frac{k_{ph}}{H}\right)^{n_{s}+3}}{\left(\frac{k_{ph}}{H} \right)^{3.970}}=\left(\frac{k_{ph}}{H}\right)^{n_{s}-0.970}. \tag{65}\] CMB anisotropy measurements (see, e.g., [52, 53]) and several physical arguments (see, e.g., [189], p. 466, [192], p. 292) suggest that the power spectrum of metric perturbations is scale invariant, i.e., \(\Delta^{2}(\phi)\sim k^{0}\). In this case, Eq. (65) yields \[n_{s}=0.970\;. \tag{66}\] In view of Eqs. (62) and (66), we see that, although in principle there is no reason why the rest-mass density spectrum should exhibit a power-law behaviour, in the context of the polytropic DM model it effectively does so, i.e., \[\Delta^{2}(\delta)\sim k_{ph}^{3+n_{s}^{eff}},\;\mbox{with}\;\;n_{s}^{eff}=0.9 70\;. 
\tag{67}\] What is more important, is that, the theoretically derived value (67) for the effective scalar spectral index of rest-mass density perturbations in the \(\Lambda\)CDM-like limit of the polytropic DM model, actually reproduces the corresponding observational (i.e., _Planck_) result, \(n_{s}^{obs}=0.968\pm 0.006\)[55, 56]. In short, matter perturbations of linear dimensions smaller than the Hubble radius, when considered in the \(\Lambda\)CDM-like (i.e., \(\Gamma=0\)) limit of the polytropic DM model under consideration, effectively exhibit a power-law behaviour of the form \(|\delta|^{2}\sim k^{n_{s}^{eff}}\), with the associated scalar spectral index being equal to \(n_{s}^{eff}=0.970\), i.e., very close to observation. ### Rest-mass energy - DE equality In view of Eq. (19), the rest-mass energy density, \(\varepsilon_{mat}=\rho c^{2}\), and the internal (dark) energy density, \(\varepsilon_{int}=\varepsilon-\varepsilon_{mat}\), of the polytropic DM model under consideration satisfy the relation \[\frac{\varepsilon_{int}}{\varepsilon_{mat}}=\frac{1-\Omega_{M}}{\Omega_{M}} \frac{1}{(1+z)^{3(1-\Gamma)}}\,. \tag{68}\] Figure 6: Small-scale perturbations i.e., Eq. (63), in the \(\Gamma=0\) limit of a polytropic DM model (red solid line). The straight dashed lines, each one of slope \(\alpha=3.970\) represent Eq. (64); hence, in this model, rest-mass density perturbations with \(\left(\frac{k_{ph}}{H}\right)\geq 5\) exhibit an effective power-law behaviour with a scalar spectral index equal to \(n_{s}^{eff}=0.970\). Eq. (68) suggests that, for \(\Gamma=0\), DE becomes equal to its rest-mass counterpart, not at transition (\(z_{tr}=0.744\)), but, quite later, at \(z_{eq}=0.384\), which is very close to the corresponding observationally determined value \(z_{eq}=0.391\pm 0.033\)[29], associated (once again) to \(\Lambda\)CDM model. ### It is not a coincidence The evolution of a spatially-flat FRW model is governed by Eqs. (5), (6), and (8). The combination of them results in \[\frac{\ddot{S}}{S}=-\frac{4\pi G}{3c^{2}}\left(\epsilon+3p\right) \tag{69}\] (see, e.g., [43, 44]); hence, the condition for accelerated expansion, \(\ddot{S}>0\), yields \[\varepsilon+3p<0\:. \tag{70}\] In the context of the polytropic DM model, condition (70) is written in the form \[\rho_{0}c^{2}(1+z)^{3}\left[1-(2+3|\Gamma|)\frac{1-\Omega_{M}}{\Omega_{M}} \frac{1}{(1+z)^{3(1+|\Gamma|)}}\right]<0\:, \tag{71}\] in view of which, such a model accelerates its expansion at cosmological redshifts lower than a particular value, namely, \[z<\left[(2+3|\Gamma|)\frac{1-\Omega_{M}}{\Omega_{M}}\right]^{\frac{1}{3(1+| \Gamma|)}}-1\equiv z_{tr}\:, \tag{72}\] in complete correspondence to Eq. (34). According to Eqs. (70) and (72), the assumption that the cosmological evolution can be driven by a polytropic DM fluid could most definitely explain why the Universe transits from deceleration to acceleration at \(z_{tr}\), without the need for any novel DE component or the cosmological constant. Instead, it would reveal a conventional form of DE, i.e., the one due to this fluid's internal motions, which, so far, has been disregarded [113]. ## 4 Discussion and Conclusions The possibility that the extra DE needed to compromise both spatial flatness and the accelerated expansion of the Universe actually corresponds to the thermodynamic internal energy of the cosmic fluid itself is reviewed and further scrutinized. 
In this approach, the Universe is filled with a perfect fluid of collisional DM, the volume elements of which perform polytropic flows [160 - 163]. In the distant past (\(z\gg 1\)) the polytropic DM model so assumed behaves as an EdS model, filled with dust (cf. Eq. 32), while, on the approach to the present epoch (\(t\simeq 0.75\,t_{0}\)), the internal physical characteristics of the cosmic fluid take over its dynamics (cf. Eq. 68). Their energy can compensate the DE needed to compromise spatial flatness (cf. Eq. 20), while, the associated cosmic pressure is negative (cf. Eq. 18). As a consequence, the polytropic DM model under consideration accelerates its expansion at cosmological redshifts lower than a transition value (cf. Eq. 34), in consistency with condition \(\varepsilon+3p<0\) (cf. Eq. 72). This model is characterized by a free parameter, the associated polytropic exponent \(\Gamma\). In fact, several physical arguments can impose successive constraints on \(\Gamma\), which, eventually, settles down to the range \(-0.1<\Gamma\leq 0\) (cf. Eq. 44), namely, if it is not zero (i.e., a \(\Lambda\)CDM-like model), it is definitely negative and very close to zero. The polytropic DM model under consideration can reproduce all the major observational results of conventional (i.e., \(\Lambda\)CDM) Cosmology, simply by means of a single fluid, i.e., without _a priori_ assuming the existence of any DE component and/or the cosmological constant. This model actually belongs to the broad class of the _unified DE models_, in which the DE effects are due to the particular properties of the (unique) cosmic fluid (in connection, see, e.g., [194, 195]). We can test the validity of the polytropic DM model so assumed, by reproducing all the current cosmological issues in the associated \(\Lambda\)CDM-like limit. The reason is that, most of the observational data accumulated so far are model dependent [175] and, currently, the most popular model is the \(\Lambda\)CDM model. In this context, our polytropic DM model can confront all major issues of cosmological significance, as, e.g., * The nature of the universal (dark) energy deficit needed to compromise spatial flatness: In the polytropic DM model under consideration it can be attributed to thermodynamic energy of the associated fluid internal motions (cf. Eqs. 19 and 20). * The accelerated expansion of the Universe: For \(t>0.75\;t_{0}\) (i.e., quite close to the present epoch), the solution of Friedmann equation that governs the evolution of the scale factor, \(S(t)\), in the polytropic DM model, becomes concave, i.e., \(\ddot{S}>0\) resulting in the acceleration of the Universe expansion (see, e.g., Fig. 1). * 57] for the age of the \(\Lambda\)CDM Universe (see, e.g., Fig. 2). * The value of the cosmological redshift parameter at which transition from deceleration to acceleration takes place, \(z_{tr}\): In the \(\Lambda\)CDM-like limit (i.e., \(\Gamma=0\)) of the polytropic DM model so assumed, we obtain \(z_{tr}=0.744\) (cf. Fig. 3), which lies well-within range of the corresponding \(\Lambda\)CDM result, namely, \(z_{tr}=0.752\pm 0.041\)[29], as well as in the associated model-independent range \(z_{tr}=0.739^{+0.065}_{-0.089}\)[181]. * The long-sought theoretical value of the deceleration parameter, \(q\), at the present epoch: In the \(\Lambda\)CDM-like limit of the polytropic DM model under consideration, \(q_{0}=-0.54\) (cf. Eq. 
33, for \(\Gamma=0\)), that is fully compatible with the observational result, \(q_{0}=-0.53^{+0.17}_{-0.13}\)[179], associated to the \(\Lambda\)CDM model. * The behaviour of the total EoS parameter, \(w\): In the \(\Lambda\)CDM-like (i.e., \(\Gamma=0\)) limit of the polytropic DM model, today, \(w_{tot}\approx-0.7\) (cf. Fig. 4), while, as \(z\) grows, \(w_{tot}\to 0\), as suggested by \(\Lambda\)CDM cosmology [88]. * The resulting range of values of the polytropic index, \(-0.1<\Gamma\leq 0\): It is in excellent agreement with the associated result for a generalized Chaplygin gas, \(p\sim-\rho^{\alpha}\), arising from the combination of X-ray and SNe Ia measurements with data from Fanaroff-Riley type IIb radio-galaxies, namely, \(\alpha=-0.09^{+0.54}_{-0.33}\)[184]. * The behaviour of the associated jerk parameter, \(j(z)\): The polytropic DM model possesses a positive jerk parameter, with the aid of which (at transition) we can also estimate the value of the polytropic index, \(|\Gamma|\), in a model-independent manner [181], namely, \(|\Gamma|\in(0.02,\,0.03)\). * The Hubble diagram of the SNe Ia standard candles: In the polytropic DM model under consideration, the theoretically derived distance modulus fits the entire Union 2.1 dataset [29] with accuracy. In other words, there is absolutely no disagreement between the theoretical prediction of our model and the observed distribution of the distant SNe Ia events (cf. Fig. 5). * The CMB shift parameter: In the \(\Lambda\)CDM-like limit of the polytropic DM model, \({\cal R}=1.7342\), while, according to the _nine-year WMAP survey_, the value of the CMB shift parameter in the standard \(\Lambda\)CDM model is \({\cal R}=1.7329\pm 0.0058\)[191]. In other words, the value of the CMB shift parameter in the \(\Lambda\)CDM-like limit of the polytropic DM model actually reproduces the corresponding result obtained by fitting the CMB data to the standard \(\Lambda\)CDM model. It is, therefore, expected that, in the limit \(\Gamma=0\), the polytropic DM model under consideration may very well reproduce also the observed CMB spectrum. * And, in fact, it actually does so (cf. Eq. 67), since the theoretically derived value for the effective scalar spectral index of rest-mass density perturbations in the \(\Lambda\)CDM-like limit of the polytropic DM model, \(n_{s}^{eff}=0.970\), actually reproduces the corresponding observational _Planck_ result, \(n_{s}^{obs}=0.968\pm 0.006\)[55, 56]. In other words, matter perturbations of linear dimensions smaller than the horizon length, when considered in the \(\Lambda\)CDM-like (i.e., \(\Gamma=0\)) limit of a polytropic DM model, effectively exhibit a power-law behaviour of the form \(|\delta|^{2}\sim k^{n_{s}^{eff}}\), with the associated scalar spectral index being equal to \(n_{s}^{eff}=0.970\), i.e., very close to observation. * DE equality: In the \(\Lambda\)CDM-like limit of the polytropic DM model under consideration (cf. Eq. 68), DE becomes equal to its rest-mass counterpart at \(z_{eq}=0.384\), which is remarkably close to the corresponding observationally determined value \(z_{eq}=0.391\pm 0.033\)[29], associated to \(\Lambda\)CDM model. * Finally, the polytropic DM model can, most definitely, explain why the Universe transits to acceleration at \(z_{tr}\), without the need for any novel DE component or the cosmological constant, solely being consistent with the general relativistic condition, that, \(\varepsilon+3p<0\) (cf. Eqs. 70 and 72). 
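For readers who wish to reproduce the headline \(\Gamma=0\) (\(\Lambda\)CDM-like) numbers quoted in the list above, the following is a minimal numerical sketch (our own illustration, not part of the original analysis), assuming only the closed-form expressions (29), (34), (35), (37), and (68) and the fiducial value \(\Omega_{M}=0.274\) adopted in Section 3; all variable names are purely illustrative.

```python
import numpy as np

Omega_M = 0.274   # fiducial matter density parameter (nine-year WMAP)
Gamma = 0.0       # Lambda-CDM-like limit of the polytropic exponent

# Eq. (29): age of the Universe in units of t_EdS (Gamma = 0 limit)
t0_over_tEdS = np.arcsinh(np.sqrt((1 - Omega_M) / Omega_M)) / np.sqrt(1 - Omega_M)

# Eq. (34): transition redshift from deceleration to acceleration
z_tr = ((2 - 3 * Gamma) * (1 - Omega_M) / Omega_M) ** (1 / (3 * (1 - Gamma))) - 1

# Eq. (35): upper bound on Gamma imposed by z_tr >= 0
Gamma_max = (2 - Omega_M / (1 - Omega_M)) / 3

# Eq. (37): present-time total EoS parameter in the Gamma = 0 limit
w_tot_0 = -(1 - Omega_M)

# Eq. (68): redshift of rest-mass energy / internal (dark) energy equality (Gamma = 0)
z_eq = ((1 - Omega_M) / Omega_M) ** (1 / 3) - 1

print(f"t0 / t_EdS = {t0_over_tEdS:.3f}")  # ~1.483
print(f"z_tr       = {z_tr:.3f}")          # ~0.744
print(f"Gamma_max  = {Gamma_max:.3f}")     # ~0.541
print(f"w_tot(0)   = {w_tot_0:.3f}")       # ~-0.726
print(f"z_eq       = {z_eq:.3f}")          # ~0.384
```

The printed values match those quoted in the text (1.483, 0.744, 0.541, \(\approx-0.7\), and 0.384, respectively).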
Compatibility of the polytropic DM model with the observational constraints on all the parameters of cosmological significance needs to be further explored and scrutinized, in order to decide on the likelihood of this model over all other alternatives and, especially, the \(\Lambda\)CDM model. Clearly, the ultimate verification of any (unified or not) DE model would be the reproduction of the observed DM halo distributions and the associated galactic evolution. In this context, preliminary results regarding the evolution of small-scale density perturbations at low redshift values, suggest that, in the \(c_{s}^{2}\neq 0\) case of the polytropic DM model, the density-contrast profile, \(\delta(z)\), consists of _peaks and troughs_ that resemble the observed galaxy distribution (in \(z\)). Therefore, as regards the evolution of small-scale density perturbations in a polytropic DM model with \(c_{s}^{2}\neq 0\), a more elaborated study is necessary and it will be the scope of a future work. Finally, it is clear that this review article neither deals with nor takes into account the fundamental nature of the polytropic DM constituents, i.e., the field nature of the cosmic fluid. In this context, recent studies suggest that certain barotropic fluids may arise naturally from a \(k-\)essence lagrangian, involving a self-interacting (real or complex) scalar field [196]. In direct connection to the quantum origin of the polytropic DM fluid, one should also address the origin of the (extra) amount of _heat_, \({\cal C}dT\), offered to the volume elements, as suggested by Eq. (7). According to [76], this could be due to a long-range confining force between the DM particles. In our case, it would be of the form \(F=-Kr^{2+3|\Gamma|}\), where is the radial distance and \(K>0\) is a normalization constant (in connection, see Eqs. (80) and (89) of [76]). This force may be either of gravitational origin or a new force [141], [144]. However, it is not yet clear whether a system subject to a long-range confining force can reach thermodynamic equilibrium, hence, this is also a matter of debate that must be addressed in future studies. In any case, instead of treating any novel DE component and/or modified gravity theories as pillars of contemporary Cosmology, let us address a much simpler possibility: The polytropic flow of the conventional matter-energy content of the Universe, in connection to a potential self-interacting nature of DM [197]. As we have demonstrated in this review, the yet ignored thermodynamical content of the Universe could arise as a mighty and relatively inexpensive contestant for an extra (dark) energy candidate that could compensate both spatial flatness and accelerated expansion. In view of all the above, the cosmological model with matter content in the form of a self-interacting DM fluid whose volume elements perform polytropic flows looks very promising and should be further explored and scrutinized in the search for a viable alternative to \(\Lambda\)CDM model.
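As a closing numerical cross-check of the scale-factor solution (23), the short Python sketch below (again our own illustration, under the same fiducial \(\Omega_{M}\)) evaluates the hypergeometric form for \(\Gamma=0\) with `scipy.special.hyp2f1` and inverts it through the closed \(\Lambda\)CDM-like expression (26); by virtue of identity (25), the round trip should return the input scale factor to machine precision.

```python
import numpy as np
from scipy.special import hyp2f1

Omega_M = 0.274

def t_over_tEdS(a):
    """Cosmic time t/t_EdS as a function of a = S/S0, from Eq. (23) with Gamma = 0."""
    x = (1 - Omega_M) / Omega_M * a**3
    return a**1.5 * hyp2f1(0.5, 0.5, 1.5, -x) / np.sqrt(Omega_M)

def a_closed_form(t):
    """Scale factor S/S0 as a function of t/t_EdS, from the Lambda-CDM-like Eq. (26)."""
    return (Omega_M / (1 - Omega_M)) ** (1 / 3) * np.sinh(np.sqrt(1 - Omega_M) * t) ** (2 / 3)

for a in (0.25, 0.5, 0.75, 1.0):
    t = t_over_tEdS(a)            # Eq. (23), hypergeometric form
    a_back = a_closed_form(t)     # Eq. (26), closed form
    print(f"S/S0 = {a:.2f} -> t/t_EdS = {t:.4f} -> S/S0 recovered = {a_back:.6f}")
```

At \(S/S_{0}=1\) the sketch also returns \(t_{0}/t_{EdS}\simeq 1.483\), consistent with Eq. (27) and Fig. 2.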
2307.06542
Quantum Image Denoising: A Framework via Boltzmann Machines, QUBO, and Quantum Annealing
We investigate a framework for binary image denoising via restricted Boltzmann machines (RBMs) that introduces a denoising objective in quadratic unconstrained binary optimization (QUBO) form and is well-suited for quantum annealing. The denoising objective is attained by balancing the distribution learned by a trained RBM with a penalty term for deviations from the noisy image. We derive the statistically optimal choice of the penalty parameter assuming the target distribution has been well-approximated, and further suggest an empirically supported modification to make the method robust to that idealistic assumption. We also show under additional assumptions that the denoised images attained by our method are, in expectation, strictly closer to the noise-free images than the noisy images are. While we frame the model as an image denoising model, it can be applied to any binary data. As the QUBO formulation is well-suited for implementation on quantum annealers, we test the model on a D-Wave Advantage machine, and also test on data too large for current quantum annealers by approximating QUBO solutions through classical heuristics.
Phillip Kerger, Ryoji Miyazaki
2023-07-13T03:11:09Z
http://arxiv.org/abs/2307.06542v3
# Quantum Image Denoising: A Framework via Boltzmann Machines, QUBO, and Quantum Annealing

###### Abstract

We investigate a framework for binary image denoising via restricted Boltzmann machines (RBMs) that introduces a denoising objective in quadratic unconstrained binary optimization (QUBO) form and is well-suited for quantum annealing. The denoising objective is attained by balancing the distribution learned by a trained RBM with a penalty term for deviations from the noisy image. We derive the statistically optimal choice of the penalty parameter assuming the target distribution has been well-approximated, and further suggest an empirically supported modification to make the method robust to that idealistic assumption. We also show under additional assumptions that the denoised images attained by our method are, in expectation, strictly closer to the noise-free images than the noisy images are. While we frame the model as an image denoising model, it can be applied to any binary data. As the QUBO formulation is well-suited for implementation on quantum annealers, we test the model on a D-Wave Advantage machine, and also test on data too large for current quantum annealers by approximating QUBO solutions through classical heuristics.

## 1 Introduction

Quantum annealing (QA) [15, 7, 2] is a promising technology for obtaining good solutions to difficult optimization problems, by making use of quantum interactions to aim to solve Ising or quadratic unconstrained binary optimization (QUBO) instances. Since Ising and QUBO instances are NP-hard, and many other combinatorial optimization problems can be reformulated as Ising or QUBO instances (see e.g. [9]), QA has the potential to become an extremely useful tool for optimization. As the capacities of commercially available quantum annealers continue to improve rapidly, it is of great interest to build models that are well-suited for this emerging technology. Furthermore, QA has promising machine learning applications surrounding Boltzmann Machines (BMs), as both QA and BMs are closely connected to the Boltzmann distribution. Boltzmann Machines are a type of generative artificial neural network that aim to learn the distribution of some training data set by fitting a Boltzmann distribution to the data, as described thoroughly in [10, §20]. On the other hand, QA aims to produce approximate minimum energy (maximum likelihood) solutions to a Boltzmann distribution via finding the ground state of the associated Hamiltonian that determines the distribution. Hence, maximum likelihood type problems on BMs are a natural candidate for applying QA in a machine learning framework. We contribute to the goal of furthering useful applications of QA in machine learning in this paper by building an image denoising model particularly well-suited for implementation via QA. The task of image denoising is a fundamental problem in image processing and machine learning. In any means of collecting images, there is always a chance of some pixels being afflicted by noise that we wish to remove; see e.g. [4] for a good overview. Accordingly, many classical and data-driven approaches to the image denoising problem have been studied in the literature [5, 25, 11, 23, 6]. This paper studies a quantum binary image denoising model using Restricted Boltzmann Machines (RBMs henceforth) [10, §20.2] that can take advantage of QA by formulating the denoising problem as a QUBO instance.
Specifically, given a trained RBM, we introduce a penalty-based denoising scheme that admits a simple QUBO form, for which we derive the statistically optimal penalty parameter as well as a practically-motivated robustness modification. The denoising step only needs to solve a QUBO admitting a bipartite graph representation, and so is well-suited for QA. As QA has also shown promise for training BMs [1, 8], our full model lends itself well to denoising images using quantum annealers, and could thus play a role in their future applications since QA can then be leveraged for _both_ the training and denoising steps. The model also shows promise in the absence of QA, and our insights presented are not limited to the QA framework, as the QUBO formulation of the denoising problem and its statistical properties we prove may be of independent interest. The paper is organized as follows. Section 2 gives a summary of background on quantum annealing and Boltzmann Machines. Section 3 describes our main contribution of the image denoising model for QAs, and Section 4 shows some practical results obtained.

**Remark 1.1**.: We frame our work as a binary image denoising method, although the framework does not depend on the data being images, and can be applied to the denoising of any binary data. This is because the framework does not use any spatial relationships between the pixels, and instead treats the image as a flattened vector whose distribution is to be learned. Hence, the denoising scheme can be applied as-is to any other binary data setting.

### Contributions and Organization

We provide a QUBO-based denoising method for binary images (applicable to general binary data) using restricted Boltzmann machines in Section 3. This is done by formulating the denoising objective in equation 3.1 by combining the energy function of the distribution learned by the RBM with a (parameterized) penalty term for deviations from a given noisy image. This objective turns out to have an equivalent QUBO formulation, which is shown in Claim 1. In Theorem 3.4, we derive the optimal choice for the penalty parameter under the assumption that the true images follow the distribution learned by the RBM, which also recovers the maximum a posteriori estimate per Corollary 3.5, though our model is more flexible, and this flexibility allows for useful practical modifications. Theorem 3.6 shows that the denoising method yields a result that is _strictly_ closer (in expectation) to the true image than the noisy image is, under some additional assumptions. Given that these idealistic assumptions won't be met in reality, we propose a robustness modification in Section 3.3 that _improves_ performance empirically. In Section 4, as the method lends itself well to quantum annealing, we then implement the method on a D-Wave Advantage 5000-qubit quantum annealer, demonstrating strong empirical performance. Since only small datasets can be tested on the D-Wave machine due to the relatively low number of qubits, we also test the method on a larger dataset, for which we use simulated annealing on a conventional computer in place of quantum annealing to find good solutions to the QUBO denoising objective. Though we highlight the method being well-suited for quantum annealers, we emphasize that it may be of independent interest to the machine learning and image processing communities at large.
### Related Work

Closely related work of [17] uses a similar model to ours for the image reconstruction task, also solving QUBO formulations via quantum annealing. In the reconstruction task, some subset of pixels is unknown (or obscured or missing), and needs to be restored, whereas our work considers denoising, where which pixels are noise-afflicted is unknown. [11] derives a maximum a posteriori (MAP) estimator for the noise-free image as a denoising method in a particular model of binary images that is less general than ours, though we would recover their estimator under a particular choice of our penalty parameter if we were to apply our framework to their model (since we recover MAP in a more general setting). Further, RBMs and quantum annealing have been studied for the classification problem, for instance in [18] and [1]. Other research in the machine learning communities has also studied handling _label noise_, such as related work in [26], which studies the problem of training models in the presence of noisy labels, whereas our approach is entirely unsupervised (the data need not have any labels to begin with).

## 2 Background

Quantum annealers make use of quantum interactions with the primary goal of finding the ground state of a Hamiltonian by initializing and then evolving a system of coupled qubits over time [14]. In particular, we may view QA as implementing the Ising spin-glass model [22] evolving over time. As the QUBO model is equivalent to the Ising model [9], and QUBO instances can be efficiently transformed to Ising instances, QA is well suited to provide good solutions to QUBO problems. A QUBO cost function, or energy function, takes the form \[f_{Q}(x):=\sum_{i,j}Q_{ij}x_{i}x_{j} \tag{2.1}\] where \(x_{i}\in\{0,1\}\), and \(Q\) is a symmetric, real-valued matrix. We will occasionally refer to \(Q_{ij}\) as the _weight_ between \(x_{i}\) and \(x_{j}\). QUBO is well-known to be NP-hard [3], and many combinatorial problems can be reformulated as QUBO instances. See [9, 20] for a thorough presentation of QUBO formulations of various problems. A Boltzmann distribution using the above QUBO as its energy function takes the form \[P_{Q}^{model}(x)=\frac{1}{z}\exp(-f(x,Q)), \tag{2.2}\] where \(z\) is a normalizing constant. Note that a parameter called inverse temperature has been fixed to unity and is not explicitly shown in the above expression. In this paper, we will focus on making use of Boltzmann Machines, a type of generative neural network that fits a Boltzmann distribution to the training data via making use of latent variables. Specifically, we consider Restricted Boltzmann Machines (RBMs), which have seen significant success and frequent use in deep probabilistic models [10]. RBMs consist of an input layer of _visible_ nodes, and a layer of latent, or _hidden_ nodes, which each have zero intra-group weights. Let \(\mathbf{v}\in\{0,1\}^{v}\) and \(\mathbf{h}\in\{0,1\}^{h}\) denote the visible and hidden nodes, respectively. It will be convenient for us to write \(x=(\mathbf{v},\mathbf{h})\in\{0,1\}^{v+h}\) as their concatenation. The probability distribution represented by a RBM is then \[P_{Q}^{model}((\mathbf{v},\mathbf{h}))=\frac{1}{z}\exp(-f((\mathbf{v},\mathbf{h}),Q)) \tag{2.3}\] with the restriction that \(Q_{ij}=Q_{ji}=0\) if \(i,j\in\{1,\dots,v\}\) or \(i,j\in\{v+1,\dots,v+h\}\).
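Before specializing to the bipartite RBM energy below, the following minimal NumPy sketch (our own illustration, not code from the paper) makes the generic QUBO cost (2.1) and the associated Boltzmann weights of (2.2) concrete for a tiny symmetric \(Q\), where brute-force normalization is still feasible.

```python
import itertools
import numpy as np

def qubo_energy(x, Q):
    """QUBO cost f_Q(x) = sum_{i,j} Q_ij x_i x_j, Eq. (2.1)."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

rng = np.random.default_rng(0)
n = 4
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2  # Q is symmetric

# Boltzmann weights exp(-f_Q(x)) over all 2^n binary vectors, Eq. (2.2)
states = list(itertools.product([0, 1], repeat=n))
energies = np.array([qubo_energy(x, Q) for x in states])
probs = np.exp(-energies) / np.exp(-energies).sum()  # brute-force normalizing constant z

best_index = int(np.argmin(energies))
print("minimum-energy (maximum-likelihood) state:", states[best_index])
print("its probability under the model:", float(probs[best_index]))
```

The same correspondence (low energy means high likelihood) is what allows a quantum annealer, which targets minimum-energy states, to act as an approximate maximum-likelihood solver.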
Hence, we have the simplified energy function \[f((\mathbf{v},\mathbf{h}),Q) =\sum_{i=1}^{v+h}\sum_{j=1}^{v+h}2Q_{ij}(\mathbf{v},\mathbf{h})_{i}(\mathbf{v},\mathbf{h})_{j}=\sum_{i=1}^{v}\sum_{j=v+1}^{v+h}Q_{ij}\mathbf{v}_{i}\mathbf{h}_{j}+\sum_{i=1}^{v}Q_{ii}\mathbf{v}_{i}^{2}+\sum_{i=v+1}^{v+h}Q_{ii}\mathbf{h}_{i}^{2}\] \[=\mathbf{h}^{T}W\mathbf{v}+b_{v}^{T}\mathbf{v}+b_{h}^{T}\mathbf{h}=:f_{W,b_{v},b_{h}}(\mathbf{v},\mathbf{h}) \tag{2.4}\] where \(W\) is the \(v\times h\) matrix consisting of the \(Q_{ij}\) weights between the visible and hidden nodes, and \(b_{v}\) and \(b_{h}\) are vectors of the diagonal entries \(Q_{ii}\), \(i\in\{1,\ldots,v\}\) corresponding to visible nodes, and \(Q_{ii},i\in\{v+1,...,v+h\}\) corresponding to hidden nodes, respectively. We will write the Boltzmann distribution with this energy function as \(P_{W,b_{v},b_{h}}\), noting that this is also \(P_{Q}^{model}\) for the appropriate \(Q\). It is well known that RBMs can universally approximate discrete distributions [10], making them a powerful model. They are also more easily trained than general Boltzmann Machines, usually through the contrastive divergence algorithm as described in [12], or variants thereof.

### Training Boltzmann Machines

We first devote some discussion to the training of RBMs. Subsection 3.1 then describes how to denoise images via QUBO given a well-trained RBM. Continuing with the notation as in equation 2.4, the probability distribution represented by a RBM is \[P_{\theta}(\mathbf{v},\mathbf{h})=\frac{1}{z_{\theta}}\exp(-f_{\theta}).\] For simplicity, denote \(\theta=(W,b_{v},b_{h})\) as the model parameters henceforth. The normalizing constant \(z_{\theta}\) above is \[z_{\theta}=\sum_{\mathbf{v}\in\{0,1\}^{v}}\sum_{\mathbf{h}\in\{0,1\}^{h}}\exp(-f_{\theta}(\mathbf{v},\mathbf{h}))\] which quickly becomes intractable even for relatively small values of \(v\) and \(h\). The common training approach aims to maximize the log-likelihood of the data. At a high level, this will be done by approximating gradients and following a stochastic gradient scheme. However, since our data consists only of the visible nodes, we need to work with the marginal distribution of the visible nodes. This is given by \[P_{\theta}(\mathbf{v})=\sum_{\mathbf{h}}P_{\theta}(\mathbf{v},\mathbf{h})=\sum_{\mathbf{h}}\frac{\exp[-f_{\theta}(\mathbf{v},\mathbf{h})]}{z_{\theta}}\] Denote our set of training data samples by \(V:=\{\mathbf{v}^{1},...,\mathbf{v}^{N}\}\). We will use superscripts to indicate training data samples, and reserve subscripts to denote entries of vectors.
Then the log-likelihood is given by \[l_{\theta}(V) =\sum_{k=1}^{N}\log P_{\theta}(\mathbf{v}^{k})=\sum_{k=1}^{N} \log\sum_{\mathbf{h}}P_{\theta}(\mathbf{v}^{k},\mathbf{h})\] \[=\left(\sum_{k}\log\sum_{\mathbf{h}}\exp\bigl{(}-f_{\theta}( \mathbf{v}^{k},\mathbf{h})\bigr{)}\right)-N\cdot\log z_{\theta}\] \[=\left(\sum_{k}\log\sum_{\mathbf{h}}\exp\bigl{(}-f_{\theta}( \mathbf{v}^{k},\mathbf{h})\bigr{)}\right)-N\cdot\log\sum_{\mathbf{v}}\sum_{ \mathbf{h}}\exp(-f_{\theta}(\mathbf{v},\mathbf{h})) \tag{2.5}\] Now we can calculate the gradient with respect to \(\theta\) as \[\nabla l_{\theta}(V) =\sum_{k=1}^{N}\frac{\sum_{\mathbf{h}}\exp\bigl{(}-f_{\theta}( \mathbf{v}^{k},\mathbf{h})\bigr{)}\nabla(-f_{\theta}(\mathbf{v}^{k},\mathbf{h }))}{\sum_{\mathbf{h}}\exp(-f_{\theta}(\mathbf{v}^{k},\mathbf{h}))}-N\cdot \frac{\sum_{\mathbf{v},\mathbf{h}}\exp(-f_{\theta}(\mathbf{v},\mathbf{h})) \nabla(-f_{\theta}(\mathbf{v},\mathbf{h}))}{\sum_{\mathbf{v},\mathbf{h}}\exp (-f_{\theta}(\mathbf{v},\mathbf{h}))}\] \[=\frac{1}{N}\sum_{k=1}^{N}\mathbb{E}_{P_{\theta}(\mathbf{h}| \mathbf{v}^{k})}\left[(\mathbf{v}^{k})^{T}\mathbf{h}+\mathbf{v}^{k}+\mathbf{ h}\right]-\mathbb{E}_{P_{\theta}(\mathbf{v},\mathbf{h})}\left[\mathbf{v}^{t} \mathbf{h}+\mathbf{v}+\mathbf{h}\right]\] The first term can be computed exactly and efficiently from the data, since the conditional \(P_{\theta}(\mathbf{h}|\mathbf{v})\) admits the simple form \(P(\mathbf{h}_{j}=1|\mathbf{v})=logistic(b_{h}+(\mathbf{v}^{T}W)_{j})\); we refer the interested reader to [8] or [10] and will focus on the second term. Due to its intractability to compute (one would have to sum over all possibilities of \(\mathbf{v}\) and \(\mathbf{h}\)), the most promising approach is to approximate it by sampling from \(P_{\theta}(\mathbf{v},\mathbf{h})\). Classically, this is done via Gibbs sampling as described in [12]. However, recent research has also investigated using quantum annealers to sample from the relevant Boltzmann distribution, as suggested in [8], which would make QAs useful in the training process since obtaining good Gibbs samples can be expensive. We note that together with our framework, QAs show promise to become useful for both the RBM training and the denoising process in the implementation of our method. ## 3 Image Denoising as Quadratic Unconstrained Binary Optimization This section is devoted to showing how one can naturally frame the image denoising problem as a QUBO instance over a learned Boltzmann Distribution fit to the data. ### Denoising via QUBO Let us assume we are given a trained Restricted Boltzmann Machine described in Sec. 2. The model prescribes to each vector \(x\in\{0,1\}^{v+h}\) the cost \(f_{Q}(x)\) and corresponding likelihood \(P_{Q}^{model}(x)\) defined in Eqs. (2.1) and (2.3), respectively. We will here make the assumption that \(P_{Q}^{model}\) describes the distribution of our data. Hence, high likelihood vectors in \(P_{Q}^{model}\) correspond to low cost vectors of \(f_{Q}\). In particular, note that finding the maximum likelihood argument in (2.2) corresponds to finding a solution to the QUBO instance in (2.1). Now, supposing this model, our goal is to reconstruct an image that has been affected by noise. The visible portion of our vector will be considered to be a flattened image with \(v\) pixels, black or white corresponding to \(0\) or \(1\), respectively, in the binary entries of the vector. #### 3.1.1 Noise Model We now describe the noise assumptions we will conduct our analysis under. 
**Definition 3.1**.: For \(x\in\{0,1\}^{v}\), we define \(x\) _afflicted by salt-and-pepper noise of level \(\sigma\)_ as the random variable \(\tilde{X}_{x,\sigma}:=(x+\epsilon)\bmod 2\), where \(\epsilon_{i}\thicksim Bern(\sigma)\), independently. In other words, a binary image afflicted by salt-and-pepper noise has each pixel independently flipped with probability \(\sigma\). In particular, we are interested in \(\tilde{X}_{X,\sigma}\), where \(X\thicksim P_{Q}^{model}\), which is the compound random variable obtained by sampling \(X\) from the learned distribution of the data and then afflicting it with salt-and-pepper noise. For notational simplicity, we will simply write \(\tilde{X}\) when the intended subscripts are clear from context. Note that salt-and-pepper noise is a natural noise model for binary data, since the only way in which pixels (or data entries, for general binary data) can be changed is by flipping the \(0\)-\(1\) value. Suppose we are given a realization \(\tilde{x}\in\{0,1\}^{v}\) of \(\tilde{X}_{X,\sigma}\). The reconstruction process aims to retrieve the original \(X\) using \(\tilde{x}\) and the trained model through \(Q\). The approach we will take begins from the intuition that \(X\) is likely to be a high-likelihood image that is close to \(\tilde{x}\). To enforce this "closeness" to \(\tilde{x}\) while searching for higher likelihood images in our model to remove noise, we add to the cost in (2.1) a penalty for deviations from \(\tilde{x}\) to formulate the following natural denoising cost function: \[f_{Q,\tilde{x},\rho}(x)=f_{Q}(x)+\rho\sum_{i}(x_{i}-\tilde{x}_{i})^{2} \tag{3.1}\] for some \(\rho>0\) that determines the penalty level. The intuition is that the minimizer of this function for a well-chosen \(\rho\) will change a restricted number of pixels to find an image that is similar to the noisy image, but has a lower cost, i.e. higher likelihood, under the model, in hopes of removing the noise. We show next that minimizing (3.1) corresponds to solving a QUBO instance. **Claim 1**.: Defining \(\tilde{Q}^{\rho,\tilde{x}}\in\mathbb{R}^{(v+h)\times(v+h)}\) by setting \(\tilde{Q}^{\rho,\tilde{x}}_{ij}=Q_{ij}\) if \(i\neq j\) and \(\tilde{Q}^{\rho,\tilde{x}}_{ij}=Q_{ii}+\rho(1-2\tilde{x}_{i})\) if \(i=j\), we have \[argmin_{x}f_{Q,\tilde{x},\rho}(x)=argmin_{x}f_{\tilde{Q}^{\rho,\tilde{x}}}(x). \tag{3.2}\] Proof.: \[f_{Q,\tilde{x},\rho}(x) =f_{Q}(x)+\rho\sum_{i}(x_{i}-\tilde{x}_{i})^{2}=\sum_{i,j}Q_{ij}x_{i}x_{j}+\rho\sum_{i}\left(x_{i}^{2}-2x_{i}\tilde{x}_{i}+\tilde{x}_{i}^{2}\right)\\ =\sum_{i\neq j}Q_{ij}x_{i}x_{j}+\sum_{i}Q_{ii}x_{i}^{2}+\rho(x_{i}^{2}-2x_{i}^{2}\tilde{x}_{i}+\tilde{x}_{i}^{2})\\ =\sum_{i\neq j}Q_{ij}x_{i}x_{j}+\sum_{i}(Q_{ii}+\rho(1-2\tilde{x}_{i}))x_{i}^{2}+\rho\tilde{x}_{i}^{2}=f_{\tilde{Q}^{\rho,\tilde{x}}}(x)+\sum_{i}\rho\tilde{x}_{i}\] Note that we used \(x_{i}=x_{i}^{2}\) in the above derivation, since the entries lie in \(\{0,1\}\). Since the \(\tilde{x}_{i}\) terms do not depend on \(x\), the claim follows. Hence, solving the QUBO on the right-hand side of equation 3.2 gives us the solution to 3.1. Claim 1 thus tells us that we simply need to modify the diagonal of the original matrix \(Q\) of our model by adding \(\rho\cdot diag(1-2\tilde{x}_{1},...,1-2\tilde{x}_{v})\) and then solve the resulting QUBO to get the denoised image. We can then make use of quantum annealing to solve the resulting QUBO of 3.2, or use classical methods and heuristics like simulated annealing instead. 
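As an illustration of Claim 1 and of the procedure formalized as QUBO_Denoise below, the following sketch builds \(\tilde{Q}^{\rho,\tilde{x}}\) from \(Q\), \(\tilde{x}\) and \(\rho\); the exhaustive solver is only a toy stand-in (an assumption of this sketch) for the quantum or simulated annealer that would be used in practice.

```python
import numpy as np
from itertools import product

def denoising_qubo(Q, x_tilde, rho):
    # Claim 1: only the diagonal changes, by rho*(1 - 2*x_tilde_i) on the
    # visible entries; the hidden diagonal entries are left untouched.
    Q_tilde = Q.copy().astype(float)
    v = len(x_tilde)
    Q_tilde[np.arange(v), np.arange(v)] += rho * (1 - 2 * np.asarray(x_tilde))
    return Q_tilde

def brute_force_qubo(Q_tilde):
    # Toy stand-in for an annealer: exhaustive search, viable only for tiny v+h.
    n = Q_tilde.shape[0]
    best_x, best_val = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q_tilde @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x
```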
We formally spell out the denoising procedure in algorithm QUBO_Denoise. QUBO_Denoise Input: A matrix \(Q\), a noisy image \(\tilde{x}\) sampled from the distribution of \(\tilde{X}_{X,\sigma}\) with \(X\thicksim P_{Q}^{model}\), and a penalty parameter \(\rho>0\). Output: A denoised image \(X^{*}_{\rho,\tilde{x},Q}\). 1. Set \(\tilde{Q}^{\rho,\tilde{x}}_{ij}=Q_{ij}\) if \(i\neq j\) and \(\tilde{Q}^{\rho,\tilde{x}}_{ij}=Q_{ii}+\rho(1-2\tilde{x}_{i})\) if \(i=j\). 2. Set \(X^{*}_{\rho,\tilde{x},Q}:=argmin_{x}f_{\tilde{Q}^{\rho,\tilde{x}}}(x)\). For the remainder of the paper, \(X^{*}_{\rho,\tilde{x},Q}\) will denote the denoised image obtained by applying QUBO_Denoise with noisy image \(\tilde{x}\), penalty parameter \(\rho\), and the distribution-defining matrix \(Q\). **Remark 3.2**.: Considering the entire process of sampling a noisy image and then denoising it, the measurability of \(X^{*}_{\rho,\tilde{X}_{X,\sigma},Q}\) is inherited from the measurability of \(\tilde{X}_{X,\sigma}\), which in turn inherits its measurability as a compound random variable of the measurable noise and original image \(X\thicksim P_{Q}^{model}\). ### Optimal Choice of penalty parameter \(\rho\) The choice of the parameter \(\rho\) for the proposed image denoising model is clearly crucial to its success, since different choices will result in different solutions. If \(\rho\) is chosen to be too small, there is very little cost to flipping a pixel, and then many pixels may be flipped and the solution may not resemble the noisy image at all anymore. If \(\rho\) is too large, we may be too heavily penalizing flipping pixels, and thus may not be able to get rid of noise effectively. Hence, we now turn towards finding the optimal choice for \(\rho\). We will evaluate the choice of \(\rho\) via _expected overlap_: **Definition 3.3**.: The _expected overlap_ between two distributions \(P\) and \(P^{\prime}\) is defined by \[d(P,P^{\prime}):=\mathbb{E}_{P}\mathbb{E}_{P^{\prime}}\left[n-\left\|X-X^{\prime}\right\|_{1}\right],\] where \(X\thicksim P,X^{\prime}\thicksim P^{\prime}\). We will consider \(X\thicksim P_{Q}^{model}\), and \(X^{\prime}=X^{*}_{\rho,\tilde{X}_{X,\sigma},Q}\), the corresponding denoised image, and will also call \(d(P,P^{\prime})\) the _expected overlap between \(X\) and \(X^{\prime}\)_. To keep notation simple, for the remainder of this section we will write \(\tilde{X}\) in place of \(\tilde{X}_{X,\sigma}\), with \(X\) and \(\sigma\) being clear from context. Our main positive result concerning the choice of \(\rho\) is summarized in the following theorem: **Theorem 3.4**.: Let \(X\thicksim P_{Q}^{model}\) as in 2.2 and \(\tilde{X}\) be the noisy image. Then choosing \(\rho=\log\dfrac{1-\sigma}{\sigma}\) to obtain \(X^{*}_{\rho,\tilde{X},Q}\) is optimal with respect to maximizing the expected overlap between \(X\) and \(X^{*}_{\rho,\tilde{X},Q}\). Proof.: Let \(X\thicksim P_{Q}^{model}\), and \(\tilde{X}\) be \(X\) afflicted by salt-and-pepper noise of level \(\sigma\). Then since \(\tilde{X}_{X,\sigma}\) is obtained by flipping pixels with probability \(\sigma\), we have the conditional probability \[\begin{split} P_{\sigma}(\tilde{X}=\tilde{x}|X=x)=&\prod_{i=1}^{v}\left\{\sigma(\tilde{x}_{i}-x_{i})^{2}+(1-\sigma)[1-(\tilde{x}_{i}-x_{i})^{2}]\right\}\\ =&\dfrac{\exp\left[-\beta_{\sigma}\sum_{i=1}^{v}(\tilde{x}_{i}-x_{i})^{2}\right]}{(1+e^{-\beta_{\sigma}})^{v}},\end{split} \tag{3.3}\] where \(\beta_{\sigma}:=\log\frac{1-\sigma}{\sigma}\). 
In order to infer the original image \(X\) from the noisy one \(\tilde{X}\), we utilize the Bayes formula and calculate the conditional probability \(P_{\beta_{\sigma},Q}^{\text{post}}(X=x|\tilde{X}=\tilde{x})\). \[\begin{split} P_{\beta_{\sigma},Q}^{\text{post}}(x|\tilde{x})=& \dfrac{P_{\sigma}(\tilde{X}=\tilde{x}|X=x)P_{Q}^{model}(x)}{\sum_{\{x\}}P_{ \sigma}(\tilde{x}|x)P_{Q}^{model}(x)}\\ =&\dfrac{\exp\left[-\beta_{\sigma}\sum_{i=1}^{v}( \tilde{x}_{i}-x_{i})^{2}-\sum_{i,j=1}^{v+h}Q_{ij}x_{i}x_{j}\right]}{\sum_{\{x \}}\exp\left[-\beta_{\sigma}\sum_{i=1}^{v}(\tilde{x}_{i}-x_{i})^{2}-\sum_{i,j= 1}^{v+h}Q_{ij}x_{i}x_{j}\right]}.\end{split} \tag{3.4}\] Note that \(x\) includes pixels for hidden nodes, which is fine here. Our approach finds the state which is most likely under this distribution, which is realized by annealing for the above QUBO with the \(\beta_{\sigma}\) term. The overlap of two vectors \(x^{*}\) and \(x\) is given by \[m(x,x^{*}):=\dfrac{1}{v+h}\sum_{i=1}^{v+h}(2x_{i}-1)(2x_{i}^{*}-1), \tag{3.5}\] the proportion of shared entries. We consider the average (over the noise) of solutions, \(\bar{X}_{\rho,\tilde{x},Q}\) with \[(\bar{X}_{\rho,\tilde{x},Q})_{i}=\theta\left(\sum_{\{x\}}P_{\tilde{Q}}^{model} (x)x_{i}-\dfrac{1}{2}\right), \tag{3.6}\] where \(\theta(x)=1\) if \(x>0\), otherwise \(0\), noting that the right hand side represents the inferred pixel value based on the expectation from \(P_{\tilde{Q}}^{model}\). We have formally distinguished \(P_{\tilde{Q}}^{model}(x)\) from \(P_{\rho,Q}^{\text{post}}(x|\tilde{x})\), but in fact they are the same. Note that \[2(\bar{X}_{\rho,\tilde{x},Q})_{i}-1=\text{sign}\left(\sum_{\{x\}}P_{\tilde{Q}}^ {model}(x)(2x_{i}-1)\right), \tag{3.7}\] where \(\text{sign}(x)\) is the sign of \(x\). Let \(\alpha_{\sigma,Q}:=-\beta_{\sigma}\sum_{i}(\bar{x}_{i}-x_{i})^{2}-\sum_{i,j}Q_{ij} x_{i}x_{j}\) for conciseness. 
In order to evaluate the statistical performance of our method with penalty coefficient \(\rho\), we calculate the averaged overlap as \[\begin{split} M_{\beta_{\sigma},Q}(\rho):=&\sum_{\{\tilde{x}\},\{x\}}P_{\sigma}(\tilde{x}|x)P_{Q}^{model}(x)m(\bar{X}_{\rho,\tilde{x},Q},x)\\ =&\frac{1}{(1+e^{-\beta_{\sigma}})^{v}}\frac{1}{z}\frac{1}{v+h}\sum_{i}\sum_{\{\tilde{x}\},\{x\}}e^{\alpha_{\sigma,Q}}[2(\bar{X}_{\rho,\tilde{x},Q})_{i}-1](2x_{i}-1).\end{split} \tag{3.8}\] The sum over \(\{x\}\) on the right-hand side of the above equation satisfies \[\begin{split}&\sum_{\{x\}}e^{\alpha_{\sigma,Q}}[2\mathbb{E}(X^{*}_{\rho,\tilde{x},Q})_{i}-1](2x_{i}-1)\leq\left|\sum_{\{x\}}e^{\alpha_{\sigma,Q}}[2\mathbb{E}(X^{*}_{\rho,\tilde{x},Q})_{i}-1](2x_{i}-1)\right|\\ \leq&\left|\sum_{\{x\}}e^{\alpha_{\sigma,Q}}(2x_{i}-1)\right|=\sum_{\{x\}}e^{\alpha_{\sigma,Q}}(2x_{i}-1)\frac{\sum_{\{x^{\prime}\}}e^{-\beta_{\sigma}\sum_{i}(\bar{x}_{i}-x_{i}^{\prime})^{2}-\sum_{i,j}Q_{ij}x_{i}^{\prime}x_{j}^{\prime}}(2x_{i}^{\prime}-1)}{\left|\sum_{\{x^{\prime}\}}e^{-\beta_{\sigma}\sum_{i}(\bar{x}_{i}-x_{i}^{\prime})^{2}-\sum_{i,j}Q_{ij}x_{i}^{\prime}x_{j}^{\prime}}(2x_{i}^{\prime}-1)\right|}\\ =&\sum_{\{x\}}e^{\alpha_{\sigma,Q}}(2x_{i}-1)\text{sign}\left(\sum_{\{x^{\prime}\}}P_{Q}^{model}(x^{\prime})(2x_{i}^{\prime}-1)\right)=\sum_{\{x\}}e^{\alpha_{\sigma,Q}}[2(\bar{X}_{\rho,\tilde{x},Q})_{i}-1](2x_{i}-1).\end{split} \tag{3.9}\] Hence, the averaged overlap satisfies \[\begin{split} M_{\beta_{\sigma},Q}(\rho)\leq&\frac{1}{(1+e^{-\beta_{\sigma}})^{v}}\frac{1}{z}\frac{1}{v+h}\sum_{i}\sum_{\{\tilde{x}\},\{x\}}e^{-\beta_{\sigma}\sum_{i}(\bar{x}_{i}-x_{i})^{2}-\sum_{i,j}Q_{ij}x_{i}x_{j}}[2(\bar{X}_{\rho,\tilde{x},Q})_{i}-1](2x_{i}-1)\\ =& M_{\beta_{\sigma},Q}(\beta_{\sigma}).\end{split} \tag{3.10}\] This inequality means that the averaged overlap is maximized when \(\rho=\beta_{\sigma}=\log\frac{1-\sigma}{\sigma}\). This theorem is based on a known fact in statistical physics of information processing [21] and translates it into the setting of our problem. Notably, the optimal choice of \(\rho\) does _not_ depend on the distribution of the data, but only on the noise level, for which in many real world cases one may have good estimates. The proof of the theorem also reveals the following corollary: **Corollary 3.5**.: Under the same assumptions as Theorem 3.4, setting \(\rho:=\log\frac{1-\sigma}{\sigma}\) makes \(X^{*}_{\rho,\tilde{X},Q}\) the maximum a posteriori estimator for the original noise-free image \(X\). The corollary follows from observing that the energy function in the numerator of the posterior distribution (3.4) is exactly (3.1) with \(\rho:=\log\frac{1-\sigma}{\sigma}\), noting that minimizing (3.1) is equivalent to maximizing (3.4). However, this framework allows for additional flexibility in choosing the \(\rho\) parameter that is absent in standard MAP estimation. In fact, in Sections 3.3 and 4.1 we go on to demonstrate that in practice, choosing a larger \(\rho\) may be beneficial for robustness of the method. Though Theorem 3.4 derives the optimal choice of \(\rho\), it does not give any guarantees that the method will yield an improvement in expected overlap, even under its assumptions. Next, we prove a theorem to show that in the case of visible units being independent of one another, our image denoising method produces in expectation _strict_ denoising improvements with respect to the expected overlap. 
For \(c>0\) and a model distribution \(P_{Q}^{model}\) as in 2.2, let \(\mathcal{I}_{c}\) be the set of indices \(i\) such that \(|Q_{ii}|>c\). These indices correspond to components of \(X\) that are either \(0\) or \(1\) with probability at least \(\frac{1}{1+e^{-c}}\), depending on whether \(Q_{ii}\) is positive or negative, respectively. **Theorem 3.6**.: Suppose that \(Q\) is diagonal, \(X\thicksim P_{Q}^{model}\), and that \(\tilde{X}\) is \(X\) afflicted by salt-and-pepper noise of level \(\sigma\). With \(\mathcal{I}_{c}\) as defined above for \(c>0\), setting \(\rho\geq\log\bigl{(}\frac{1-\sigma}{\sigma}\bigr{)}\), and assuming that \(\mathcal{I}_{\rho}\neq\emptyset\), the expected overlap of the denoised image and the true image is strictly larger than the expected overlap of the noisy image and the true image, i.e. \[\mathbb{E}\left[\sum\mathbb{I}((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})\right]>\mathbb{E}\left[\sum\mathbb{I}(\tilde{X}_{i}=X_{i})\right]. \tag{3.11}\] Proof.: Let \(\mathcal{I}^{0}_{c}:=\{i\in\mathcal{I}_{c}:Q_{ii}>0\},\mathcal{I}^{1}_{c}:=\{i\in\mathcal{I}_{c}:Q_{ii}<0\}\). Intuitively, these are the indices which are likely to be zero or one, respectively. Further, letting \(x^{\dagger i}\) denote the vector obtained by flipping entry \(i\) of \(x\), we have that \(|f_{Q}(x)-f_{Q}(x^{\dagger i})|=|Q_{ii}|>c\) if and only if \(i\in\mathcal{I}_{c}\). Hence, \(x^{*}\) solves (3.1) by setting \(x^{*}_{i}=1\;\;\forall i\in\mathcal{I}^{1}_{\rho},x^{*}_{i}=0\;\;\forall i\in\mathcal{I}^{0}_{\rho},\;\text{and}\;x^{*}_{i}=\tilde{x}_{i}\) otherwise, since each such forced flip reduces the value of \(f_{Q}\) in (2.1) by more than \(\rho\), so that the overall penalized objective (3.1) improves despite the \(\rho\) penalty accrued by the pixel flips. Now, let \(X\thicksim P_{Q}^{model}\). Let us compute \(P((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})\). The cases where this happens are: \(i\in\mathcal{I}^{0}_{\rho}\) and \(X_{i}=0\), \(i\in\mathcal{I}^{1}_{\rho}\) and \(X_{i}=1\), or \(i\notin\mathcal{I}_{\rho}\) and pixel \(i\) was not flipped by the noise. We know that if \(i\in\mathcal{I}^{b}_{\rho},P(X_{i}=b)\geq\frac{1}{1+e^{-\rho}}\), for \(b\in\{0,1\}\), so \(P((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})\geq\frac{1}{1+e^{-\rho}}\) for these. For \(i\notin\mathcal{I}_{\rho},P((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})=1-\sigma\), where \(\sigma\) is the probability that the pixel was flipped by the noise. On the other hand, \(P(\tilde{X}_{i}=X_{i})=1-\sigma\;\;\forall i.\) We want to show that \[\mathbb{E}\left[\sum\mathbb{I}((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})\right] >\mathbb{E}\left[\sum\mathbb{I}(\tilde{X}_{i}=X_{i})\right] \tag{3.12}\] or, equivalently, \[\sum P((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i}) >\sum P(\tilde{X}_{i}=X_{i})=n\cdot(1-\sigma) \tag{3.13}\] For the left-hand side, assuming \(\mathcal{I}_{\rho}\neq\emptyset\), we have \[\sum P((X^{*}_{\rho,\tilde{X},Q})_{i}=X_{i})>\sum_{i\in\mathcal{I}_{\rho}}\frac{1}{1+e^{-\rho}}+\sum_{i\notin\mathcal{I}_{\rho}}(1-\sigma)=|\mathcal{I}_{\rho}|\cdot\frac{1}{1+e^{-\rho}}+(n-|\mathcal{I}_{\rho}|)(1-\sigma)\] so that (3.12) holds when \[|\mathcal{I}_{\rho}|\cdot\frac{1}{1+e^{-\rho}}+(n-|\mathcal{I}_{\rho}|)(1-\sigma)\geq n(1-\sigma) \tag{3.14}\] \[\iff|\mathcal{I}_{\rho}|\neq 0\text{ and }\frac{1}{1+e^{-\rho}}\geq 1-\sigma\iff\rho\geq\log\bigl(\frac{1-\sigma}{\sigma}\bigr)\text{ and }\mathcal{I}_{\rho}\neq\emptyset, \tag{3.15}\] and the theorem is proven. The assumption that matrix \(Q\) is diagonal is equivalent to the components of \(X\) being independent, which is not realistic with real data. 
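The independent case is also easy to probe numerically. The following is a minimal Monte Carlo sketch of the setting of Theorem 3.6 with a diagonal \(Q\) (all parameter values are illustrative assumptions, not taken from the paper); it uses the closed-form minimizer from the proof and compares the denoised and noisy overlaps.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 200, 0.15, 2000
Q_diag = rng.normal(scale=3.0, size=n)      # diagonal entries Q_ii (toy values)
rho = np.log((1 - sigma) / sigma)

hits_denoised, hits_noisy = 0, 0
for _ in range(trials):
    # For diagonal Q, P(X_i = 1) = 1 / (1 + exp(Q_ii)), independently.
    x = (rng.random(n) < 1.0 / (1.0 + np.exp(Q_diag))).astype(int)
    x_noisy = x ^ (rng.random(n) < sigma)    # salt-and-pepper noise of level sigma
    # Closed-form minimizer from the proof: force pixels in I^1_rho / I^0_rho, copy the rest.
    x_star = np.where(Q_diag < -rho, 1, np.where(Q_diag > rho, 0, x_noisy))
    hits_denoised += np.sum(x_star == x)
    hits_noisy += np.sum(x_noisy == x)

print(hits_denoised / (n * trials), hits_noisy / (n * trials))
```

In such runs the denoised overlap exceeds the noisy one (roughly \(1-\sigma\)) whenever \(\mathcal{I}_{\rho}\) is nonempty, in line with the theorem; this of course says nothing about the dependencies present in real data.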
However, since in the RBM model the visible units are independent conditioned on the hidden units, we still consider this independent case to be informative to the denoising method. In fact, if the hidden states were fixed (or known, or recovered correctly), Theorem 3.6 would apply. We leave it as a tantalizing open question to generalize this result beyond the independent case. The assumption of nonemptiness of \(\mathcal{I}_{\rho}\) is a natural one for the denoising task; indeed, when \(\mathcal{I}_{\rho}\) is empty, no entries of \(Q\) are large in magnitude, which is equivalent to the entries of \(X\) being close to uniformly distributed. In that case, intuitively, it should of course not be possible to guarantee that we can denoise an image well if it looks like noise to begin with. ### Robust Choice of \(\rho\) The optimal choice of \(\rho\) as derived in Theorem 3.4 relies on the assumption that the observed data comes from the learned distribution, or equivalently that the distribution generating our data has been perfectly learned by the RBM. However, in practice we will always only approximately learn the data distribution. Hence, we do not want to rely too heavily on the exact distribution we have learned when we denoise the images. One may hope to have a more robust method by only changing the value of a pixel when there is some confidence in the model that the pixel should be flipped. We may thus want to penalize flipping pixels slightly more than we should under the idealistic setting of Theorem 3.4, which corresponds to choosing a larger \(\rho\) value than \(\log\frac{1-\sigma}{\sigma}\), or equivalently using a smaller \(\sigma^{\prime}<\sigma\) value when setting \(\rho:=\log\frac{1-\sigma^{\prime}}{\sigma^{\prime}}\). We opt for the latter as a means of intentionally biasing \(\rho\) to make the approach more robust for application. Figures 2 and 3 in Section 4 show the effect this proposed robustness modification has, demonstrating indeed that choosing a larger \(\rho\) via intentionally using a smaller \(\sigma\) yields positive results. If the true noise level is \(\sigma\), our experiments demonstrate that setting roughly \(\rho:=\log\frac{1-0.75\sigma}{0.75\sigma}\) has a positive effect on performance. ## 4 Empirical Results This section contains results from implementing the previously described method and comparing it against other denoising approaches. Datasets and code are available on the first author's GitHub page for the purpose of easy reproducibility. ### Results with Quantum Annealing In this subsection, we present empirical results obtained by implementing our model on a quantum annealer, D-Wave's Advantage_system4.1, which has 5000 qubits and enables embedding of a complete bipartite graph of size \(172\times 172\). Hence, we use \(12\times 12\) pixel images here so that the visible layer is of size 144. We test the method on two different datasets with very differently structured data. The first dataset is a \(12\times 12\) version of the well-known MNIST dataset [19], created by downsizing the original dataset with nearest-neighbor image downscaling and binarizing pixels. The second dataset we use is a \(12\times 12\) pixel Bars-and-Stripes (BAS) dataset, as has been used in closely related work [17, 8], where an \(8\times 8\) version was used to accommodate the smaller 2000-qubit machine, D-Wave 2000Q, used there. Each image consists of binary pixels with either each row or each column sharing the same values, so that each image consists of either "bars" or "stripes". 
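For concreteness, a BAS image and its noisy counterpart can be generated along the following lines; this is only a sketch of the construction described above (the uniform sampling of row/column patterns is an assumption, not necessarily the authors' exact generator).

```python
import numpy as np

def bas_image(rng, size=12):
    # Either all rows or all columns share the same binary values ("bars" vs "stripes").
    pattern = rng.integers(0, 2, size=size)
    img = np.tile(pattern[:, None], (1, size))   # constant rows
    if rng.random() < 0.5:
        img = img.T                              # constant columns instead
    return img

def add_salt_and_pepper(img, sigma, rng):
    # Flip each pixel independently with probability sigma (Definition 3.1).
    flips = rng.random(img.shape) < sigma
    return img ^ flips

rng = np.random.default_rng(42)
clean = bas_image(rng)
noisy = add_salt_and_pepper(clean, sigma=0.1, rng=rng)
```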
For both datasets we train the RBM by using the classical Contrastive Divergence algorithm first presented in [12]. The number of hidden units was set to 50 and 64 for BAS and MNIST, respectively. For the BAS data, 4000 images were generated as training data, and 1000 as test data, while for MNIST, we simply used the full MNIST provided training set of 60,000 images and test set of 10,000 images. Noisy images were generated by adding salt-and-pepper noise of level \(\sigma\) to images from the test dataset. Given a noisy image, we are then able to embed the resulting denoising QUBO of 3.2 onto a D-Wave quantum annealer, Advantage_system4.1, and solve it there. A function of D-Wave's Ocean software, find_embedding, is utilized to find appropriate mappings from variables in a QUBO to physical qubits on D-Wave's Pegasus graph. A variable in a QUBO is often mapped to multiple physical qubits, called a chain, that are strongly connected to each other so as to behave like a single variable. A mapping can be reused for every noisy image within a dataset, since their QUBOs have the same graph structure. Figure 1: Examples of the denoising process using our method showing the true, noisy, and denoised images across different noise levels. We have prepared in advance 50 sets of different mappings for each dataset and choose a mapping from the pool at random to embed the QUBO of each image. This random selection is done to avoid possible artificial effects on the denoising performance from using only a particular mapping. Parameters for embedding and annealing, i.e., chain_strength and annealing_time, are tuned to maximize the performance. In particular, we set chain_strength as the product of a coefficient \(c_{0}\) and the maximum absolute value among the elements of each QUBO matrix, where we tune \(c_{0}\). The adopted values of the parameters are different between MNIST and BAS, but the same values are used over the whole range of \(\sigma\). We set (\(c_{0}\), annealing_time) = (0.6, 50 \(\mu\)s), (0.5, 40 \(\mu\)s) for BAS and MNIST, respectively. The number of annealing reads, num_reads, is 100 for each noisy image. We calculate the average solution of each pixel over the reads to approximate Eq. (3.6) and use it to evaluate the overlap, that is, the proportion of pixels in the denoised images that match the original image. We denoise 200 noisy images for each \(\sigma\), randomly selected from the pool of test images. Note also that for each value of \(\sigma\), the different methods compared use the same set of (randomly selected) noisy test images. Figures 2 and 3 first investigate the robust choice of \(\rho\) as discussed in Section 3.3. This is done by using a biased value of \(\sigma\) when setting \(\rho=\log\frac{1-\sigma}{\sigma}\), instead setting \(\rho:=\log\frac{1-b\sigma}{b\sigma}\) for some bias factor \(b\). The denoising performance for \(b\in\{1.25,1,0.75,0.5\}\) is shown, with 95% confidence intervals obtained by bootstrapping. Note that using a bias factor \(b=1\) means using the true value of \(\sigma\) for determining \(\rho\). Based on the empirical performance, using a bias factor of around 0.75 seems to give an improved performance compared to using a bias factor of 1 in both datasets. A bias factor of 0.5 seems to perform quite well across most noise regimes as well, with confidence regions largely overlapping those of the 0.75 setting, though in the low-noise setting for the BAS dataset we observe an adverse effect. 
The authors thus suggest a setting of 0.75 for the bias factor. Next, in Figures 4 and 5, we compare our method to other popular denoising methods for binary images on the \(12\times 12\) MNIST and bars-and-stripes datasets, respectively, across different noise levels. When comparing to other methods, a crucial factor is that we choose \(\rho\) based on \(\sigma\), but in practice \(\sigma\) may be unknown. In light of this, we include two versions of our method in these comparisons. First, we use our method with \(\rho:=\log\frac{1-\sigma}{\sigma}\), using the true value of \(\sigma\) without introducing the recommended bias factor. Secondly, we simulate the situation in which the true \(\sigma\) is unknown, and instead we only have a guess for \(\sigma\). To simulate having an approximate guess for \(\sigma\), for each image afflicted by noise of level \(\sigma\), we sample \(\sigma^{\prime}\) uniformly from an interval of size \(\sigma/2\) centered at \(\sigma\). We then set \(\rho:=\log\frac{1-0.75\sigma^{\prime}}{0.75\sigma^{\prime}}\), using a bias factor of 0.75 with this "guessed" value of \(\sigma\). This is a significantly more realistic way of testing our method, since it gives an idea of how well the method may perform when the true noise level present in the noisy images is unknown and must be guessed. Our implementation here only assumes that the practitioner roughly knows the magnitude of the noise. For example, if the true noise is \(\sigma=0.2\), here we sample \(\sigma^{\prime}\) uniformly from \([0.15,0.25]\) to simulate the guess. We compare our method to Gibbs denoising with an RBM [25, section 3.2], median filtering [13], Gaussian filtering [24, chapter 5], and a graph-cut method [11] for denoising. For the Gibbs denoising, we use the same well-trained RBM as for our QUBO-based method, and the parameters of the method were carefully tuned for best performance: we use 20 Gibbs iterations and then construct the denoised image as the exponentially weighted average of the samples with decay factor 0.8. For the graph-cut method, the parameter setting recommended in the reference, \(\beta=0.5\), is used. Overall, the QUBO-based method performs quite strongly. Across all noise regimes in the MNIST data, and in most noise regimes in the bars-and-stripes dataset, the method outperforms the others. In particular, for the MNIST data the 95% confidence region for the QUBO method entirely dominates the others. Indeed, we see the good performance that our analysis from Section 3 suggests, even when the true \(\sigma\) is unknown and instead guessed. Using a guessed \(\sigma\) and the robustness modification of Section 3.3 makes the method perform as well as (if not slightly better than) knowing the true \(\sigma\) without the robustness modification. Only in the noise regime of \(\sigma\geq 0.2\) in the BAS data does Gibbs denoising outperform our method. In Figure 1, we also provide examples of applying our denoising method to noisy images across different noise levels. ### Testing on Larger Images Though we see the straightforward implementability of our method on quantum annealers as a strong positive, a current drawback of using QAs is the limited data size that can be handled, owing to their still small qubit capacities. Of course we can still instead test our method on larger datasets by obtaining solutions to the denoising QUBO 3.1 using other means. 
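One such means is classical simulated annealing directly on the QUBO objective. The sketch below is a minimal single-bit-flip Metropolis version (the cooling schedule and all parameter values are illustrative assumptions, not the settings used for the reported experiments).

```python
import numpy as np

def simulated_annealing_qubo(Q_tilde, n_steps=20000, T0=2.0, T1=0.01, rng=None):
    # Minimize x^T Q_tilde x over x in {0,1}^n with single-bit-flip Metropolis
    # moves and a geometric cooling schedule (a classical stand-in for the QA).
    if rng is None:
        rng = np.random.default_rng()
    n = Q_tilde.shape[0]
    x = rng.integers(0, 2, size=n)
    Qs = Q_tilde + Q_tilde.T                 # symmetrized couplings
    for step in range(n_steps):
        T = T0 * (T1 / T0) ** (step / n_steps)
        i = rng.integers(n)
        s = 1 - 2 * x[i]                     # +1 flips 0->1, -1 flips 1->0
        # Exact cost change of flipping bit i.
        delta = s * (Qs[i] @ x - Qs[i, i] * x[i] + Q_tilde[i, i])
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            x[i] += s
    return x
```

Using such a solver in place of the annealer only changes step 2 of QUBO_Denoise; the construction of \(\tilde{Q}^{\rho,\tilde{x}}\) is unaffected.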
In Figure 6, we implement our method on a binarized version of the popular MNIST dataset [19] by using simulated annealing [16] to find solutions to (3.1). We particularly choose to test on the full-size MNIST dataset since we could only use a downscaled version on the QA due to size limitations on the input data, so this experiment serves to test our method without this downscaling. All methods are implemented as described in 4.1, and again for our method we use a guessed \(\sigma\) to simulate the unknown \(\sigma\) case and bias the guess for robustness. ## 5 Conclusion and Future Work We investigated an image denoising framework via a penalty-based QUBO denoising objective that shows promise both theoretically through its statistical properties and practically through its empirical performance together with the proposed robustness modification. The method is well-suited for implementability on a quantum annealer, providing an important application of QAs within machine learning through the fundamental image denoising task. Good results are still obtained on larger datasets when the QUBO is only classically approximated by simulated annealing instead, revealing the approach to be promising even in the absence of QAs. As RBMs form a core building block of many deep generative models such as deep Boltzmann machines or deep belief networks [10], a natural next step is to attempt to incorporate this approach into these more complex models, though current hardware limitations on existing quantum annealers are restrictive. Further, since our method takes advantage of QAs for the denoising step, further research into making use of QAs for the training process of RBMs would yield a full image denoising model where both the model training and image denoising make use of QA. ## Funding PK was supported in part by g-RIPS Sendai, Cyberscience Center at Tohoku Univ., and NEC Japan, in early stages of the work. PK is grateful to the USRA Feynman Academy internship program, support from the NASA Academic Mission Services (contract NNA16BD14C), and funding from DARPA under DARPA-NASA agreement SAA2-403688. ## Acknowledgments The early stage of this work is based on the work in the g-RIPS Sendai 2021 program. The authors thank Y. Araki, E. Escobar, T. Mihara, V. Q. H. Huynh, H. Kodani, A. T. Lin, M. Shirane, Y. Susa, and H. Suito for collaboration in the program. The authors also acknowledge H. Kobayashi and M. Sato for the use of the computing environment in the program. P.K. thanks Y. Sukurdeep for helpful feedback and discussions. Figure 6: Proportion of pixels in denoised images that were correctly denoised, for different denoising methods on the MNIST dataset, with 95% confidence intervals shaded.
2310.14589
Real eigenvector distributions of random tensors with backgrounds and random deviations
As in random matrix theories, eigenvector/value distributions are important quantities of random tensors in their applications. Recently, real eigenvector/value distributions of Gaussian random tensors have been explicitly computed by expressing them as partition functions of quantum field theories with quartic interactions. This procedure to compute distributions in random tensors is general, powerful and intuitive, because one can take advantage of well-developed techniques and knowledge of quantum field theories. In this paper we extend the procedure to the cases that random tensors have mean backgrounds and eigenvector equations have random deviations. In particular, we study in detail the case that the background is a rank-one tensor, namely, the case of a spiked tensor. We discuss the condition under which the background rank-one tensor has a visible peak in the eigenvector distribution. We obtain a threshold value, which agrees with a previous result in the literature.
Naoki Sasakura
2023-10-23T05:54:37Z
http://arxiv.org/abs/2310.14589v3
# Real eigenvector distributions of random tensors with backgrounds and random deviations ###### Abstract As in random matrix theories, eigenvector/value distributions are important quantities of random tensors in their applications. Recently, real eigenvector/value distributions of Gaussian random tensors have been explicitly computed by expressing them as partition functions of quantum field theories with quartic interactions. This procedure to compute distributions in random tensors is general, powerful and intuitive, because one can take advantage of well-developed techniques and knowledge of quantum field theories. In this paper we extend the procedure to the cases that random tensors have mean backgrounds and eigenvector equations have random deviations. In particular, we study in detail the case that the background is a rank-one tensor, namely, the case of a spiked tensor. We discuss the condition under which the background rank-one tensor has a visible peak in the eigenvector distribution. We obtain a threshold value, which agrees with a previous result in the literature. pacs: A13, A45, B83, B86 Preprint number: YITP-23-131 ## 1 Introduction Eigenvalue distributions are important quantities in random matrix models. The most well-known is the Wigner semi-circle law of the eigenvalue distribution, which models energy spectra of strongly interacting many-body systems [1]. Eigenvalue distributions are also used as an important technique in solving matrix models [2; 3]. Topological changes of eigenvalue distributions provide insights into the QCD dynamics [4; 5]. It would be natural to ask how such knowledge about random matrices can be generalized to random tensors. Random tensor models [6; 7; 8; 9] were originally introduced to extend random matrix models, which are successful as models of two-dimensional quantum gravity, to higher dimensional quantum gravity. Recently, random tensor models have also played interesting roles in various other subjects (see for instance [10]). While physically interesting matrices, such as Hermitian ones, can be mapped one-to-one to sets of eigenvalues by symmetry transformations, this cannot be done in general for tensors. However, we sometimes encounter what we may call tensor eigenvectors/values [11; 12; 13; 14] in various studies. A well-known example is the distribution of the energy spectra of the spherical \(p\)-spin model [15; 16] for spin glasses, which was comprehensively analyzed in [17]. In fact this is the same problem as obtaining the real eigenvalue1 distribution of a real symmetric random tensor. Tensor eigenvector/value problems also appear in other contexts, such as AdS/CFT [18], classical gravitational systems [19], and applied mathematics for technologies [14]. Footnote 1: More precisely, they are Z-eigenvalues in the terminology of [11; 14]. Considering their wide appearance, it is worth the effort to systematically understand the properties of tensor eigenvectors/values. Our focus is on their distributions for Gaussian random tensors. Some interesting results have already been obtained in the literature. In [20; 21] the expectation values of numbers of real eigenvalues of random tensors were computed. In [22] the maximum eigenvalues of random tensors were estimated in the large-\(N\) limit2. In [23], the Wigner semi-circle law was extended to a form for random tensors. In [24; 25; 26] the present author computed real eigenvalue distributions of random tensors by quantum field theoretical methods. 
Footnote 2: Throughout this paper, \(N\) denotes the range of indices of tensors, namely, an index takes values, \(1,2,\cdots,N\). In the last works above by the present author, the procedure is to first rewrite the eigenvector problems to partition functions of quantum field theories with quartic interactions, and then compute the partition functions. There are some merits in this procedure; it is general, powerful, and intuitive. As far as tensors have Gaussian distributions, one can in principle extend the procedure to obtain quantum field theories of wide range of other tensor problems, such as complex eigenvalue/vector distributions, tensor rank decompositions, etc. Then, once such quantum field theories have been obtained, one can use various well-developed quantum field theoretical techniques, such as Schwinger-Dyson equations as in [25], etc. Moreover, it is generally more intuitive to compute partition functions than to directly treat systems of eigenvector/value polynomial equations. For instance, in the large-\(N\) analysis of [25], there exists a phase transition point between perturbative and non-perturbative regimes of the quantum field theory, and this point corresponds to the edge of the eigenvalue distribution. The purpose of the present paper is to apply this quantum field theoretical procedure to a slightly different setup than the previous works [24; 25; 26]. We assume the random tensors have mean values, namely, backgrounds. This is a useful setup in the research of data analysis, in which backgrounds are signals and deviations around them are noises [27]. It is an important question under what conditions signals can be recovered from data contaminated by noises [27; 28; 29]. We also introduce random deviations to eigenvector equations3. This simulates solving approximately eigenvector equations, for instance, by the Monte Carlo method or simulated annealing. As we will see, also in this generalized setup, the distributions can be rewritten as partition functions of quantum field theories with quartic interactions, and the partition functions can be computed explicitly, even exactly in some cases. Footnote 3: This particular case will also be analyzed in detail in [30]. This paper is organized as follows. In Section 2, we introduce a real eigenvector equation with a tensor mean background and deviations to the equation, and obtain an integral expression of the eigenvector distribution. In Section 3, we derive the quantum field theory expressing a "signed" distribution of the eigenvectors. This distribution is not authentic but is weighted with an extra sign associated to each eigenvector. This distribution is easier to compute, because the quantum field theory contains only a pair of fermions. In particular, when the background is taken to be a rank-one tensor (a spiked tensor), we obtain an exact expression of the distribution in terms of hypergeometric functions. In Section 4 we derive the quantum field theory expression of the (authentic) distribution of the eigenvectors. In particular we explicitly derive the distribution for the spiked tensor case by using an approximation taking advantage of the quantum field theoretical expression. In Section 5, we compare the expressions of the distributions obtained in the previous sections with Monte Carlo simulations. We obtain very good agreement, including for the case treated by the approximation. 
In Section 6, we consider the large-\(N\) limit, especially paying attention to whether the rank-one tensor background has a visible peak in the distributions. We derive the scaling and the range of parameters in which this happens. The threshold value is shown to agree with that of [29]. The last section is devoted to summary and future prospects. ## 2 Real tensor eigenvector equation with backgrounds and deviations In this paper we restrict ourselves to order-three tensors4 for simplicity. We consider the following eigenvector equation [11; 12; 13; 14] with a background tensor \(Q\) and a deviation vector \(\eta\), Footnote 4: Namely, tensors have three indices. \[(Q_{abc}+C_{abc})v_{b}v_{c}=v_{a}+\eta_{a}. \tag{1}\] Here the indices take \(a,b,c=1,2,\ldots,N\), and repeated indices are assumed be summed over unless otherwise stated throughout this paper. We assume that \(Q,C\) are real symmetric order-three tensors and \(v,\eta\) are real vectors: \[\begin{split}& Q_{abc}=Q_{bac}=Q_{bca}\in\mathbb{R},\\ & C_{abc}=C_{bac}=C_{bca}\in\mathbb{R},\\ & v_{a},\ \eta_{a}\in\mathbb{R}.\end{split} \tag{2}\] While \(Q\) is an externally given background tensor, \(C_{abc}\) is a random tensor with Gaussian distribution of a zero mean value. The vector \(\eta\) describes a deviation of the eigenvector equation, and is a random real vector with Gaussian distribution of a zero mean value. We will compute the distributions of \(v\), namely the distributions of the real "eigenvector" solutions to (1). Note that, if we ignore the background \(Q\) and the deviation \(\eta\), the setup goes back to the cases previously studied in [24; 25; 26]. For given \(Q,C,\eta\), the distribution of \(v\) is given by \[\begin{split}\rho(v,Q,C,\eta)&=\sum_{i=1}^{\# \mathrm{sol}(Q,C,\eta)}\prod_{a=1}^{N}\delta(v_{a}-v_{a}^{i})\\ &=|\det M(v,Q,C)|\prod_{a=1}^{N}\delta\left(v_{a}+\eta_{a}-(Q_{abc }+C_{abc})v_{b}v_{c}\right)\end{split} \tag{3}\] where \(v^{i}\) (\(i=1,2,\ldots,\#\mathrm{sol}(Q,C,\eta)\)) are all the real solutions to (1), and \(|\det M(v,Q,C)|\) is the absolute value of the determinant of the matrix, \[M(v,Q,C)_{ab}=\frac{\partial}{\partial v_{a}}\left(v_{b}+\eta_{b}-(Q_{bcd}+C_ {bcd})v_{c}v_{d}\right)=\delta_{ab}-2(Q_{abc}+C_{abc})v_{c}, \tag{4}\] which is the Jacobian factor associated to the change of the variables of the delta functions in (3). When \(C,\eta\) have Gaussian distributions with zero mean values, the eigenvector distributions are computed by taking the average over \(C,\eta\): \[\begin{split}\rho(v,Q,\beta)&=\left\langle\rho(v,Q,C, \eta)\right\rangle_{C,\eta}\\ &=\frac{1}{AA^{\prime}}\int_{\mathbb{R}\#C}dC\int_{\mathbb{R}^{N} }d\eta\,e^{-\alpha C^{2}-\frac{1}{4\beta}\eta^{2}}\left|\det M(v,Q,C)\right| \prod_{a=1}^{N}\delta\left(v_{a}+\eta_{a}-(Q_{abc}+C_{abc})v_{b}v_{c}\right), \end{split} \tag{5}\] where \(\alpha,\beta>0\), \(\#C\) is the number5 of the independent components of \(C\), \(C^{2}=C_{abc}C_{abc}\), \(\eta^{2}=\eta_{a}\eta_{a}\), \(A=\int_{\mathbb{R}^{\#C}}dC\,e^{-\alpha C^{2}}\), and \(A^{\prime}=\int_{\mathbb{R}^{N}}d\eta\,e^{-\frac{1}{4\beta}\eta^{2}}\). Here a slightly complicated introduction of \(\beta\) is for later convenience. By using the well known formula, \(\frac{1}{2\pi}\int_{\mathbb{R}}d\lambda\,e^{i\lambda x}=\delta(x)\), the integration of the delta functions over \(\eta\) in (5) can be rewritten as Footnote 5: Explicitly, \(\#C=N(N+1)(N+2)/6\). 
\[\frac{1}{A^{\prime}}\int_{\mathbb{R}^{N}}d\eta\,e^{-\frac{1}{4\beta}\eta^{2}} \prod_{a=1}^{N}\delta\left(v_{a}+\eta_{a}-(Q_{abc}+C_{abc})v_{b}v_{c}\right)= \frac{1}{(2\pi)^{N}}\int_{\mathbb{R}^{N}}d\lambda\,e^{-\beta\lambda^{2}+i \lambda_{a}(v_{a}-(Q_{abc}+C_{abc})v_{b}v_{c})}. \tag{6}\] Therefore, by putting this into (5), we obtain \[\rho(v,Q,\beta)=\frac{1}{(2\pi)^{N}A}\int_{\mathbb{R}^{\#C}}dC\int_{\mathbb{R }^{N}}d\lambda\,\,\left|\det M(v,Q,C)\right|e^{-\alpha C^{2}-\beta\lambda^{2} +i\lambda_{a}(v_{a}-(Q_{abc}+C_{abc})v_{b}v_{c})}. \tag{7}\] The part \(\left|\det M(v,Q,C)\right|\) in (7) needs a special care, because taking an absolute value is not an analytic function. In Section 3, we will consider the case that we ignore taking the absolute value. This makes the problem easier and treatable by introducing only a pair of fermions, but is still non-trivial and interesting. In Section 4, we will fully treat (7) by introducing both bosons and fermions. ## 3 Signed distributions ### Quantum field theory expression The quantity we will compute in this section is defined by ignoring taking the absolute value in (7): \[\rho^{\text{signed}}(v,Q,\beta)=\frac{1}{(2\pi)^{N}A}\int_{\mathbb{R}^{\#C }}dC\int_{\mathbb{R}^{N}}d\lambda\det M(v,Q,C)\,e^{-\alpha C^{2}-\beta\lambda ^{2}+i\lambda_{a}(v_{a}-(Q_{abc}+C_{abc})v_{b}v_{c})}. \tag{8}\] Following backward the derivation in Section 2, the distribution corresponds to a "signed" distribution, \[\rho^{\text{signed}}(v,Q,C,\eta)=\sum_{i=1}^{\#\text{sol}(Q,C,\eta)}\text{sign} \left(\text{det}M(v^{i},Q,C)\right)\prod_{a=1}^{N}\delta(v_{a}-v_{a}^{i}), \tag{9}\] which has an extra sign of \(\text{det}M(v^{i},Q,C)\) dependent on each solution \(v^{i}\), compared with (3). Note that the quantity (8) is a generalization of the signed distribution computed in [24] to the case with backgrounds and deviations. Though the quantity has no clear connections to (7), it provides a simpler playground, and we will obtain an exact final expression with the confluent hypergeometric functions of the second kind (or hermite polynomials). The determinant factor in (8) can easily be rewritten in a quantum field theoretical form by introducing a fermion pair, \(\bar{\psi}_{a},\psi_{a}\) (\(a=1,2,\cdots,N\)); \(\text{det}\,M=\int d\bar{\psi}d\psi\,e^{\bar{\psi}M\psi}\)[31]. This technique to incorporate determinants in quantum field theories is common in treating disordered systems in statistical physics6. Then (8) can be rewritten as Footnote 6: See for instance [32] and references therein. \[\rho^{\text{signed}}(v,Q,\beta)=\frac{1}{(2\pi)^{N}A}\int_{\mathbb{R}^{\#C}} dC\int_{\mathbb{R}^{N}}d\lambda\int d\bar{\psi}d\psi\,e^{S^{\text{signed}}}, \tag{10}\] where \[S^{\text{signed}}_{\text{bare}}=-\alpha C^{2}-\beta\lambda^{2}+i\lambda_{a} (v_{a}-(Q_{abc}+C_{abc})v_{b}v_{c})+\bar{\psi}_{a}\left(\delta_{ab}-2(Q_{abc}+C _{abc})v_{c}\right)\psi_{b}. \tag{11}\] Since \(C\) and \(\lambda\) appear at most quadratically in (11), they can be integrated out by Gaussian integrations. We will first integrate over \(C\) and then over \(\lambda\). Though the integrations are straightforward, the actual computation is a little cumbersome, because of the anti-commuting nature of the fermions and the necessity of symmetrization for the indices of \(C_{abc}\). However, we can take a shortcut by taking some results from [24], where there are no \(Q\) or \(\eta\). 
Now, new terms in \(S^{\text{signed}}_{\text{bare}}\) compared to [24] are those depending on \(Q\) and \(\beta\), and are explicitly given by \[S^{\text{signed}}_{\text{new}}=-\beta\lambda^{2}-i\lambda_{a}Q_{abc}v_{b}v_{ c}-2Q_{abc}\bar{\psi}_{a}\psi_{b}v_{c}. \tag{12}\] Since the new terms do not contain \(C\), the integration over \(C\) proceeds in the same way as in [24]. This integration cancels the overall factor \(A^{-1}\) in (10), and also generates various terms being added to the action. Collecting the terms depending on \(\lambda\) among the generated ones, \(i\lambda_{a}v_{a}\) in (11), and the terms depending on \(\lambda\) in (12), we obtain the \(\lambda\)-dependent part of the action as \[S_{\lambda}^{\text{signed}}=-\frac{v^{4}}{12\alpha}B_{ab}\lambda_{a}\lambda_{b} +i\lambda_{a}(v_{a}+D_{a}^{\text{signed}}-D_{a}^{Q}), \tag{13}\] where \(D_{a}^{Q}=Q_{abc}v_{b}v_{c}\), and \(D^{\text{signed}}\) can be taken from [24]7, Footnote 7: \(v_{a}+D_{a}^{\text{signed}}\) corresponds to \(D_{a}\) of [24]. \[D_{a}^{\text{signed}}=\frac{1}{3\alpha}\left(\bar{\psi}_{a}\,\psi\cdot v\,v^{ 2}+\bar{\psi}\cdot v\,\psi_{a}\,v^{2}+\bar{\psi}\cdot v\,\psi\cdot v\,v_{a} \right). \tag{14}\] Here we very frequently use an abusive notation \(v^{p}:=|v|^{p}\) for simplicity throughout this paper, since whether \(v\) means vector or scalar quantities are always obvious from contexts. The matrix \(B\) is given by \[B=3\left(1+\frac{4\alpha\beta}{v^{4}}\right)I_{\parallel}+\left(1+\frac{12 \alpha\beta}{v^{4}}\right)I_{\perp}, \tag{15}\] where \(I_{\parallel}\) and \(I_{\perp}\) are the projection matrices to the parallel and the transverse subspaces against \(v\colon I_{\parallel ab}=v_{a}v_{b}/v^{2},\ I_{\perp ab}=\delta_{ab}-v_{a}v_{b }/v^{2}.\) Then the integration over \(\lambda\) with the action (13) generates an action, \[\begin{split}\delta S_{\lambda}^{\text{signed}}=&- \frac{N}{2}\log\frac{v^{4}}{12\pi\alpha}-\frac{1}{2}\log\det B\\ &-\frac{3\alpha}{v^{4}}\left((v_{a}+D_{a}^{\text{signed}})B_{ab }^{-1}(v_{b}+D_{b}^{\text{signed}})-2(v_{a}+D_{a}^{\text{signed}})B_{ab}^{ -1}D_{b}^{Q}+D_{a}^{Q}B_{ab}^{-1}D_{b}^{Q}\right),\end{split} \tag{16}\] where the inverse of \(B\) is given by \[B^{-1}=\frac{b_{\parallel}}{3}I_{\parallel}+b_{\perp}I_{\perp} \tag{17}\] with \[\begin{split}& b_{\parallel}=\frac{v^{4}}{v^{4}+4\alpha\beta},\\ & b_{\perp}=\frac{v^{4}}{v^{4}+12\alpha\beta}.\end{split} \tag{18}\] When we consider the case with \(Q=\beta=0\), the distribution (10) should agree with the previous result of [24]. Therefore it is enough for us to compute the additional part which appears only when \(Q\neq 0\) or \(\beta\neq 0\). By subtracting \(\delta S_{\lambda}^{\text{signed}}\) for \(Q=\beta=0\) in (16) and using (17), we obtain \[\begin{split}&\delta S_{\lambda}^{\text{signed}}-\delta S_{ \lambda}^{\text{signed}}(Q=\beta=0)=\frac{1}{2}\log b_{\parallel}+\frac{N-1} {2}\log b_{\perp}-\frac{3\alpha}{v^{4}}\bigg{[}\frac{b_{\parallel}-1}{3}(v+D_ {\parallel}^{\text{signed}})^{2}\\ &+(b_{\perp}-1)D_{\perp}^{\text{signed}}\cdot D_{\perp}^{\text {signed}}-\frac{2b_{\parallel}}{3}(v+D_{\parallel}^{\text{signed}})D_{ \parallel}^{Q}-2b_{\perp}D_{\perp}^{\text{signed}}\cdot D_{\perp}^{Q}+\frac{ b_{\parallel}}{3}(D_{\parallel}^{Q})^{2}+b_{\perp}D_{\perp}^{Q}\cdot D_{\perp}^{Q} \bigg{]},\end{split} \tag{19}\] where \(D_{\parallel}^{\text{signed}}=v\cdot D^{\text{signed}}/|v|,\ D_{\perp}^{ \text{signed}}=I_{\perp}D^{\text{signed}},\ D_{\parallel}^{Q}=v\cdot D^{Q}/ |v|,\ D_{\perp}^{Q}=I_{\perp}D^{Q}\). 
The previous result in [24] is given by \[\rho^{\text{signed}}(v,Q=0,\beta=0)=3^{\frac{N-1}{2}}\pi^{-\frac{N}{2}}\alpha ^{\frac{N}{2}}\int d\bar{\psi}d\psi\,e^{S_{\bar{\psi}\psi}}, \tag{20}\] where \[S_{\bar{\psi}\psi}=-\frac{\alpha}{v^{2}}-2N\log v+\bar{\psi}_{\perp}\cdot\psi_ {\perp}-\bar{\psi}_{\parallel}\psi_{\parallel}-\frac{v^{2}}{6\alpha}\left( \bar{\psi}_{\perp}\cdot\psi_{\perp}\right)^{2} \tag{21}\] with \(\psi_{\parallel}=v\cdot\psi/|v|,\ \psi_{\perp}=I_{\perp}\psi\), etc. Adding (19) and the last term in (12) to (21) and doing some straightforward computations, we finally obtain \[\begin{split}\rho^{\text{signed}}(v,Q,\beta)=& 3^{\frac{N-1}{2}}\pi^{-\frac{N}{2}}\alpha^{\frac{N}{2}}(v^{4}+4 \alpha\beta)^{-\frac{1}{2}}(v^{4}+12\alpha\beta)^{-\frac{N-1}{2}}\exp\left[- \frac{\alpha v^{2}}{v^{4}+4\alpha\beta}\right]\\ &\cdot\exp\left[\frac{2\alpha b_{\parallel}vD_{\parallel}^{Q}- \alpha b_{\parallel}(D_{\parallel}^{Q})^{2}-3\alpha b_{\perp}D_{\perp}^{Q} \cdot D_{\perp}^{Q}}{v^{4}}\right]\int d\bar{\psi}d\psi\,e^{S^{\text{signed}} },\end{split} \tag{22}\] where \[\begin{split} S^{\text{signed}}&=\left(-2b_{ \parallel}+1+\frac{2b_{\parallel}D_{\parallel}^{Q}}{v}\right)\bar{\psi}_{ \parallel}\psi_{\parallel}+\frac{2b_{\perp}}{v}D_{\perp}^{Q}\cdot\left(\bar{ \psi}_{\perp}\psi_{\parallel}+\bar{\psi}_{\parallel}\psi_{\perp}\right)+\bar{ \psi}_{\perp}\cdot\psi_{\perp}\\ &\quad-2Q_{abc}\bar{\psi}_{a}\psi_{b}v_{c}+\frac{2v^{2}(b_{\perp}- 1)}{3\alpha}\bar{\psi}_{\parallel}\psi_{\parallel}\bar{\psi}_{\perp}\cdot\psi_ {\perp}-\frac{v^{2}}{6\alpha}\left(\bar{\psi}_{\perp}\cdot\psi_{\perp}\right) ^{2}.\end{split} \tag{23}\] Some details of the derivation are explained in Appendix A. ### Rank-one \(Q\) To study the formula (22) with (23) more explicitly, let us consider the case that \(Q\) is a rank-one tensor, \[Q_{abc}=q\,n_{a}n_{b}n_{c}, \tag{24}\] where \(q\) is real and \(n\) is a normalized real vector \(\left(|n|=1\right)\). This is a setup called a spiked tensor [27]. In the general situation, the vector \(n\) is a linear combination of \(v\) and another vector \(n_{1}\), which is a normalized vector transverse to \(v\) (namely, \(v\cdot n_{1}=0,\,|n_{1}|=1\)). Then the transverse subspace to \(v\) can further be divided into the subspace parallel to \(n_{1}\) and the \(N-2\)-dimensional subspace transverse to both \(v\) and \(n_{1}\). We denote the projector to the latter by \(I_{\perp_{2}}\). Then the transverse fermions, \(\bar{\psi}_{\perp},\psi_{\perp}\), can further be decomposed into \(\bar{\psi}_{\perp_{1}}=n_{1}\cdot\bar{\psi}\) and \(\bar{\psi}_{\perp_{2}}=I_{\perp_{2}}\bar{\psi}\) and similarly for \(\psi_{\perp}\). Note that \(\bar{\psi}_{\perp}\cdot\psi_{\perp}=\bar{\psi}_{\perp_{1}}\psi_{\perp_{1}}+ \bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}\), etc. For (24), \(D_{\parallel}^{Q}=qv^{2}n_{\parallel}^{3},\,\,\,D_{\perp}^{Q}=qv^{2}n_{ \parallel}^{2}n_{\perp}n_{1}\), where \(n_{\parallel}=v\cdot n/|v|,\,\,\,n_{\perp}=n_{1}\cdot n\). We also notice \[Q_{abc}v_{c}\bar{\psi}_{a}\psi_{b}=qn_{a}n_{b}n_{c}v_{a}\psi_{b}\psi_{c}=qvn_{ \parallel}^{3}\bar{\psi}_{\parallel}\psi_{\parallel}+qvn_{\parallel}^{2}n_{ \perp}(\bar{\psi}_{\parallel}\psi_{\perp_{1}}+\bar{\psi}_{\perp_{1}}\psi_{ \parallel})+qvn_{\parallel}n_{\perp}^{2}\bar{\psi}_{\perp_{1}}\psi_{\perp_{1}}. 
\tag{25}\] Putting these into (22) and (23), we obtain \[\begin{split}\rho_{\text{spiked}}^{\text{signed}}(v,n,q,\beta)=& 3^{\frac{N-1}{2}}\pi^{-\frac{N}{2}}\alpha^{\frac{N}{2}}(v^{4}+4 \alpha\beta)^{-\frac{1}{2}}(v^{4}+12\alpha\beta)^{-\frac{N-1}{2}}\\ &\cdot\exp\left[\frac{-\alpha v^{2}+2\alpha qv^{3}n_{\parallel}^ {3}-\alpha q^{2}v^{4}n_{\parallel}^{6}}{v^{4}+4\alpha\beta}-\frac{3\alpha q^ {2}v^{4}n_{\parallel}^{4}n_{\perp}^{2}}{v^{4}+12\alpha\beta}\right]\int d\bar {\psi}d\psi\,e^{S_{\text{spiked}}^{\text{signed}}},\end{split} \tag{26}\] where \[\begin{split} S_{\text{spiked}}^{\text{signed}}&=- \left(\frac{v^{4}-4\alpha\beta}{v^{4}+4\alpha\beta}+\frac{8\alpha\beta qvn_{ \parallel}^{3}}{v^{4}+4\alpha\beta}\right)\bar{\psi}_{\parallel}\psi_{ \parallel}-\frac{24\alpha\beta qvn_{\parallel}^{2}n_{\perp}}{v^{4}+12\alpha \beta}\left(\bar{\psi}_{\parallel}\psi_{\perp_{1}}+\bar{\psi}_{\perp_{1}}\psi _{\parallel}\right)+(1-2qvn_{\parallel}n_{\perp}^{2})\bar{\psi}_{\perp_{1}} \psi_{\perp_{1}}\\ &+\bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}-\frac{8\beta v^{2}} {v^{4}+12\alpha\beta}\bar{\psi}_{\parallel}\psi_{\parallel}\left(\bar{\psi}_ {\perp_{1}}\psi_{\perp_{1}}+\bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}\right) -\frac{v^{2}}{6\alpha}\left(\bar{\psi}_{\perp_{1}}\psi_{\perp_{1}}+\bar{\psi}_ {\perp_{2}}\cdot\psi_{\perp_{2}}\right)^{2}.\end{split} \tag{27}\] It is not difficult to explicitly compute the fermion integration in (26). As is shown in Appendix B, we obtain \[\begin{split}\int d\bar{\psi}d\psi\,e^{S_{\text{spiked}}^{\text {signed}}}=2^{N-6}(-d_{2})^{\frac{N-5}{2}}\bigg{[}&-8d_{2}(-b_{2} ^{2}+d_{1}+b_{3}(b_{1}+d_{1})+2b_{1}d_{2})\,U\left(\frac{3-N}{2},\frac{3}{2},- \frac{1}{4d_{2}}\right)\\ &+2(N-3)(b_{3}d_{1}+2b_{1}d_{2}+6d_{1}d_{2})\,U\left(\frac{5-N}{2},\frac{5}{2},-\frac{1}{4d_{2}}\right)\\ &-d_{1}(N-3)(N-5)\,U\left(\frac{7-N}{2},\frac{7}{2},-\frac{1}{4d _{2}}\right)\bigg{]},\end{split} \tag{28}\] where \(U\) denotes the confluent hypergeometric function of the second kind, and \(b_{i},d_{i}\) are the coefficients of the terms in (27): \[\begin{split} b_{1}&=-\left(\frac{v^{4}-4\alpha\beta }{v^{4}+4\alpha\beta}+\frac{8\alpha\beta qvn_{\parallel}^{3}}{v^{4}+4\alpha \beta}\right),\ b_{2}=-\frac{24\alpha\beta qvn_{\parallel}^{2}n_{\perp}}{v^{4}+ 12\alpha\beta},\ b_{3}=1-2qvn_{\parallel}n_{\perp}^{2},\\ d_{1}&=-\frac{8\beta v^{2}}{v^{4}+12\alpha\beta}, \ d_{2}=-\frac{v^{2}}{6\alpha}.\end{split} \tag{29}\] The result (26) with (28) gives the exact expression of the signed distribution. ## 4 Distributions ### Quantum field theory expression In this subsection we compute the (authentic) distribution by considering the determinant factor \(|\det M|\) as it is. We take the same procedure as was employed in [26]. We first introduce bosons and fermions to rewrite \(|\det M|\): \[\begin{split}|\det M|&=\lim_{\epsilon\to+0}\frac{ \det(M^{2}+\epsilon I)}{\sqrt{\det(M^{2}+\epsilon I)}}\\ &=(-\pi)^{-N}\int d\bar{\psi}d\psi d\bar{\varphi}d\varphi d\phi d \sigma\,e^{-\sigma^{2}-2i\sigma M\phi-\epsilon\phi^{2}-\bar{\varphi}\varphi- \bar{\psi}M\varphi-\bar{\varphi}M\psi+\epsilon\bar{\psi}\psi},\end{split} \tag{30}\] where \(I\) is an identity matrix of \(N\)-by-\(N\), \(\phi_{a},\sigma_{a}\) are real bosons, \(\bar{\psi}_{a},\psi_{a},\bar{\varphi}_{a},\varphi_{a}\) are fermions, and \(\bar{\psi}\psi=\bar{\psi}_{a}\psi_{a}\), etc. Here we have introduced a positive infinitesimal parameter \(\epsilon\) to regularize the expression, since \(M\) may have zero eigenvalues. 
As in the second line, writing the limit is suppressed to simplify the notation hereafter, assuming implicitly taking this limit at ends of computations. In fact the limit turns out to be straightforward in all the computations of this paper. We have introduced two sets of bosons and fermions to make the exponent linear in \(C\) (\(M\) contains \(C\) linearly) for later convenience of the integration over \(C\). By performing similar processes as in Section 3, we obtain \[\rho(v,Q,\beta)=\frac{(-1)^{N}}{2^{N}\pi^{2N}A}\int dCd\lambda d\bar{\psi}d \psi d\bar{\varphi}d\varphi d\phi d\sigma\,e^{S_{\rm bare}}, \tag{31}\] where \[\begin{split} S_{\rm bare}=&-\alpha C^{2}-\beta \lambda^{2}+i\lambda_{a}(v_{a}-(C_{abc}+Q_{abc})v_{b}v_{c})\\ &-\sigma^{2}-2i\sigma_{a}\left(\delta_{ab}-2(Q_{abc}+C_{abc})v_{ c}\right)\phi_{b}-\epsilon\phi^{2}\\ &-\bar{\varphi}\varphi-\bar{\psi}_{a}\left(\delta_{ab}-2(Q_{abc}+ C_{abc})v_{c}\right)\varphi_{b}-\bar{\varphi}_{a}\left(\delta_{ab}-2(Q_{abc}+C_{ abc})v_{c}\right)\psi_{b}+\epsilon\bar{\psi}\psi.\end{split} \tag{32}\] As in Section 3, there are no new terms depending on \(C\) compared with the previous case for \(Q=\beta=0\) in [26], and therefore the integration over \(C\) can be performed as in the previous computation there. Then we obtain a similar form of the action for \(\lambda\) as in Section 3: \[S_{\lambda}=-\frac{v^{4}}{12\alpha}\lambda_{a}B_{ab}\lambda_{b}+i\lambda_{a}(v_{a }-D_{a}-D_{a}^{Q}), \tag{33}\] where \(B,D^{Q}\) are already defined in (15) and below (13), respectively. Here \(D\) can be taken from [26]8: Footnote 8: Here \(D\) is the sum \(D+\tilde{D}\) of [26]. \[D_{a}=\frac{v^{3}}{3\alpha}\Big{[}(\bar{\psi}_{\parallel}\varphi_{\parallel}+ \bar{\varphi}_{\parallel}\psi_{\parallel})\hat{v}_{a}+\bar{\psi}_{a}\varphi_{ \parallel}+\bar{\psi}_{\parallel}\varphi_{a}+\bar{\varphi}_{a}\psi_{\parallel}+ \bar{\varphi}_{\parallel}\psi_{a}+2i\left(\hat{v}_{a}\sigma_{\parallel}\phi_{ \parallel}+\sigma_{a}\phi_{\parallel}+\sigma_{\parallel}\phi_{a}\right)\Big{]}, \tag{34}\] where \(\hat{v}_{a}=v_{a}/|v|\). Comparing (33) with (13), the change is to replace \(D^{\text{signed}}\) with \(-D\). By using (19) with this replacement and adding the \(Q\)-dependent but \(\lambda\)-independent terms in (32), we obtain \[\begin{split}\rho(v,Q,\beta)=& 3^{\frac{N-1}{2}}\pi^{- \frac{3N}{2}}\alpha^{\frac{N}{2}}(v^{4}+4\alpha\beta)^{-\frac{1}{2}}(v^{4}+12 \alpha\beta)^{-\frac{N-1}{2}}\exp\left[-\frac{\alpha v^{2}}{v^{4}+4\alpha\beta }\right]\\ &\cdot\exp\left[\frac{2\alpha b_{\parallel}vD_{\parallel}^{Q}- \alpha b_{\parallel}(D_{\parallel}^{Q})^{2}-3\alpha b_{\perp}D_{\perp}^{Q} \cdot D_{\perp}^{Q}}{v^{4}}\right]Z,\end{split} \tag{35}\] where \(Z\) is a partition function of a quantum field theory, \[Z=(-1)^{N}\int d\bar{\psi}\cdots d\sigma\,e^{S_{0}+S_{Q,\beta}}. \tag{36}\] Here \(S_{0}\) is the former result in [26] corresponding to \(Q=\beta=0\), which is explicitly given in Appendix C, and \[\begin{split} S_{Q,\beta}=&\frac{2\alpha(b_{ \parallel}-1)v-2\alpha b_{\parallel}D_{\parallel}^{Q}}{v^{4}}D_{\parallel}- \frac{6\alpha b_{\perp}}{v^{4}}D_{\perp}\cdot D_{\perp}^{Q}+2Q_{abc}v_{c}\left( \bar{\psi}_{a}\varphi_{b}+\bar{\varphi}_{a}\psi_{b}+2i\sigma_{a}\phi_{b}\right) \\ &-\frac{\alpha(b_{\parallel}-1)}{v^{4}}D_{\parallel}^{2}-\frac{3 \alpha(b_{\perp}-1)}{v^{4}}D_{\perp}\cdot D_{\perp},\end{split} \tag{37}\] where \(D_{\parallel}=v\cdot D/|v|,\ D_{\perp}=I_{\perp}D\). 
Note that the first three terms are some corrections to the kinetic terms, and the latter to the four-interaction terms. As for \(D_{\parallel}\) and \(D_{\perp}\), we have more explicit expressions from (34), \[\begin{split}& D_{\parallel}=\frac{v^{3}}{\alpha}\left(\bar{\psi}_{ \parallel}\varphi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\parallel}+2i \sigma_{\parallel}\phi_{\parallel}\right),\\ & D_{\perp}=\frac{v^{3}}{3\alpha}\left(\bar{\psi}_{\perp}\varphi _{\parallel}+\bar{\psi}_{\parallel}\varphi_{\perp}+\bar{\varphi}_{\perp}\psi_ {\parallel}+\bar{\varphi}_{\parallel}\psi_{\perp}+2i(\sigma_{\parallel}\phi_{ \perp}+\sigma_{\perp}\phi_{\parallel})\right).\end{split} \tag{38}\] The four-interaction terms in (37) have the form of self-products. One can make it quadratic by using the formula \(\frac{1}{\sqrt{\pi}}\int_{\mathbb{R}}dg\,e^{-g^{2}+2Ag}=e^{A^{2}}\). The result is \[Z=(-1)^{N}\pi^{-\frac{N}{2}}\int dg_{\parallel}dg_{\perp}d\bar{\psi}\cdots d \sigma\,e^{S_{0}+S_{Q,\beta,g}}, \tag{39}\] where \(g_{\parallel}\) is one dimensional, \(g_{\perp}\) is \(N-1\) dimensional, and9 Footnote 9: Note that \(b_{\parallel},b_{\perp}<1\). \[\begin{split} S_{Q,\beta,g}=&-g_{\parallel}^{2}-g_ {\perp}^{2}+\left(\frac{2\alpha(b_{\parallel}-1)v-2\alpha b_{\parallel}D_{ \parallel}^{Q}}{v^{4}}+\frac{2\sqrt{\alpha(1-b_{\parallel})}}{v^{2}}g_{ \parallel}\right)D_{\parallel}-\frac{6\alpha b_{\perp}}{v^{4}}D_{\perp}\cdot D _{\perp}^{Q}\\ &+\frac{2\sqrt{3\alpha(1-b_{\perp})}}{v^{2}}D_{\perp}\cdot g_{ \perp}+2Q_{abc}v_{c}\left(\bar{\psi}_{a}\varphi_{b}+\bar{\varphi}_{a}\psi_{b}+ 2i\sigma_{a}\phi_{b}\right),\end{split} \tag{40}\] which contains only quadratic terms of the fields. ### Rank-one \(Q\) In this subsection we consider the rank-one tensor \(Q\) in (24) to explicitly perform the integration over the fields in (35). #### 4.2.1 A general formula By putting (24) into (35), one obtains \[\begin{split}\rho(v,Q,\beta)=& 3^{\frac{N-1}{2}}\pi^{- \frac{3N}{2}}\alpha^{\frac{N}{2}}(v^{4}+4\alpha\beta)^{-\frac{1}{2}}(v^{4}+12 \alpha\beta)^{-\frac{N-1}{2}}\exp\left[-\frac{\alpha v^{2}}{v^{4}+4\alpha\beta }\right]\\ &\cdot\exp\left[\frac{2\alpha qv^{3}n_{\parallel}^{3}-\alpha q^{2 }v^{4}n_{\parallel}^{6}}{v^{4}+4\alpha\beta}-\frac{3\alpha q^{2}v^{4}n_{ \parallel}^{4}n_{\perp}^{2}}{v^{4}+12\alpha\beta}\right]Z,\end{split} \tag{41}\] where the partition function \(Z\) can be computed either by (36) with (37) or by (39) with (40). Let us first put (24) into (37). 
After a lengthy but straightforward computation using the same decomposition as in Section 3.2, we get \[\begin{split} S_{q,n,\beta}:=& S_{Q=qnnn,\beta}\\ =& 2(qvn_{\parallel}^{3}-1)(1-b_{\parallel})\left( \bar{\psi}_{\parallel}\varphi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{ \parallel}+2i\sigma_{\parallel}\phi_{\parallel}\right)\\ &+2qvn_{\parallel}^{2}n_{\perp}(1-b_{\perp})\left(\bar{\psi}_{ \perp_{1}}\varphi_{\parallel}+\bar{\psi}_{\parallel}\varphi_{\perp_{1}}+\bar{ \varphi}_{\perp_{1}}\psi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\perp_{1}} +2i(\sigma_{\parallel}\phi_{\perp_{1}}+\sigma_{\perp_{1}}\phi_{\parallel}) \right)\\ &+2qvn_{\parallel}n_{\perp}^{2}\left(\bar{\psi}_{\perp_{1}} \varphi_{\perp_{1}}+\bar{\varphi}_{\perp_{1}}\psi_{\perp_{1}}+2i\sigma_{\perp _{1}}\phi_{\perp_{1}}\right)\\ &+\frac{8\beta v^{2}}{v^{4}+4\alpha\beta}\left(-\bar{\psi}_{ \parallel}\psi_{\parallel}\bar{\varphi}_{\parallel}\varphi_{\parallel}+2i(\bar{ \psi}_{\parallel}\varphi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\parallel} )\sigma_{\parallel}\phi_{\parallel}-2\sigma_{\parallel}^{2}\phi_{\parallel} ^{2}\right)\\ &+\frac{8\beta v^{2}}{v^{4}+12\alpha\beta}\bigg{(}\bar{\psi}_{ \perp}\cdot\bar{\varphi}_{\perp}\psi_{\parallel}\varphi_{\parallel}+\psi_{ \perp}\cdot\varphi_{\perp}\bar{\psi}_{\parallel}\bar{\varphi}_{\parallel}- \bar{\psi}_{\perp}\cdot\psi_{\perp}\bar{\varphi}_{\parallel}\varphi_{ \parallel}-\bar{\psi}_{\perp}\cdot\varphi_{\perp}\bar{\psi}_{\parallel} \varphi_{\parallel}\\ &\qquad\qquad\qquad\qquad\qquad-\bar{\varphi}_{\perp}\cdot \varphi_{\perp}\bar{\psi}_{\parallel}\psi_{\parallel}-\bar{\varphi}_{\perp} \cdot\psi_{\perp}\bar{\varphi}_{\parallel}\psi_{\parallel}\\ &\qquad\qquad\qquad\qquad\qquad+2i\left(\bar{\psi}_{\perp} \varphi_{\parallel}+\bar{\psi}_{\parallel}\varphi_{\perp}+\bar{\varphi}_{ \perp}\psi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\perp}\right)\cdot( \sigma_{\parallel}\phi_{\perp}+\sigma_{\perp}\phi_{\parallel}\right)\\ &\qquad\qquad\qquad\qquad-2\left(\sigma_{\parallel}^{2}\phi_{ \perp}\cdot\phi_{\perp}+\phi_{\parallel}^{2}\sigma_{\perp}\cdot\sigma_{\perp}+ 2\sigma_{\parallel}\phi_{\parallel}\phi_{\perp}\cdot\sigma_{\perp}\right) \bigg{)}.\end{split} \tag{42}\] As for (40), we obtain \[\begin{split} S_{q,n,\beta,g}:=& S_{Q=qnnn,\beta,g}\\ =&-g_{\parallel}^{2}-g_{\perp}^{2}+2\left((qvn_{ \parallel}^{3}-1)(1-b_{\parallel})+v\sqrt{\frac{1-b_{\parallel}}{\alpha}}g_{ \parallel}\right)\left(\bar{\psi}_{\parallel}\varphi_{\parallel}+\bar{\varphi}_ {\parallel}\psi_{\parallel}+2i\sigma_{\parallel}\phi_{\parallel}\right)\\ &+\left(2qvn_{\parallel}^{2}n_{\perp}(1-b_{\perp})\right)\left( \bar{\psi}_{\perp_{1}}\varphi_{\parallel}+\bar{\psi}_{\parallel}\varphi_{\perp _{1}}+\bar{\varphi}_{\perp_{1}}\psi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{ \perp_{1}}+2i(\sigma_{\parallel}\phi_{\perp_{1}}+\sigma_{\perp_{1}}\phi_{ \parallel})\right)\\ &+2qvn_{\parallel}n_{\perp}^{2}\left(\bar{\psi}_{\perp_{1}} \varphi_{\perp_{1}}+\bar{\varphi}_{\perp_{1}}\psi_{\perp_{1}}+2i\sigma_{\perp _{1}}\phi_{\perp_{1}}\right)\\ &+2v\sqrt{\frac{1-b_{\perp}}{3\alpha}}g_{\perp}\cdot\left(\bar{ \psi}_{\perp}\varphi_{\parallel}+\bar{\psi}_{\parallel}\varphi_{\perp}+\bar{ \varphi}_{\perp}\psi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\perp}+2i( \sigma_{\parallel}\phi_{\perp}+\sigma_{\perp}\phi_{\parallel})\right).\end{split} \tag{43}\] In the following subsections, we will consider \(N=1\), \(N=2\) and large-\(N\) cases. 
#### 4.2.2 \(N=1\) In this case we ignore all the transverse components, and also set \(n_{\parallel}=1\). By putting these to (35), (39), (43) and (C2), and doing some straightforward computations, we obtain \[\rho(v,q,\beta)=\pi^{-1}\alpha^{\frac{1}{2}}(v^{4}+4\alpha\beta)^{-\frac{1}{2}} \exp\left[\frac{-\alpha v^{2}+2\alpha qv^{3}-\alpha q^{2}v^{4}}{v^{4}+4\alpha \beta}\right]\left(\sqrt{\pi}a\operatorname{Erf}\left(\frac{a}{b}\right)+b\,e^{ -\frac{a^{2}}{b^{2}}}\right), \tag{44}\] where \[\begin{split} a&=1+2(qv-1)(1-b_{\parallel}),\\ b&=2v\sqrt{\frac{(1-b_{\parallel})}{\alpha}}.\end{split} \tag{45}\] The details of the derivation are given in Appendix D. #### 4.2.3 \(N=2\) In this case the transverse direction is exhausted by one-dimension, namely, \(\bot\)=\(\bot_{1}\) and \(\bot_{2}\) is null. A special fact about this case is that the four-interaction terms in (C2) have a form of a square: \[V_{F}+V_{B}+V_{BF}=\frac{v^{2}}{3\alpha}\left(\bar{\psi}_{\bot_{1}}\varphi_{ \bot_{1}}+\bar{\varphi}_{\bot_{1}}\psi_{\bot_{1}}+2i\sigma_{\bot_{1}}\phi_{ \bot_{1}}\right)^{2}. \tag{46}\] Therefore we can rewrite this part of the action as \[e^{V_{F}+V_{B}+V_{BF}}=\frac{1}{\sqrt{\pi}}\int dg\,e^{-g^{2}+2vg \left(\bar{\psi}_{\bot_{1}}\varphi_{\bot_{1}}+\bar{\varphi}_{\bot_{1}}\psi_{ \bot_{1}}+2i\sigma_{\bot_{1}}\phi_{\bot_{1}}\right)/\sqrt{3\alpha}}, \tag{47}\] whose exponent contains only quadratic terms of the fields. Using this for (39), (43) and (C2), we obtain \[Z_{N=2}=\pi^{-\frac{3}{2}}\int dg_{1}dg_{2}dg_{3}\int d\bar{ \psi}\cdots d\sigma\,e^{-g_{1}^{2}-g_{2}^{2}-g_{3}^{2}+K_{\parallel\bot_{1}}}, \tag{48}\] where \[\begin{split} K_{\parallel\bot_{1}}=&-\bar{\varphi }_{\parallel}\varphi_{\parallel}+\epsilon\bar{\psi}_{\parallel}\psi_{ \parallel}-{\sigma_{\parallel}}^{2}-\epsilon{\phi_{\parallel}}^{2}-\bar{ \varphi}_{\bot_{1}}\varphi_{\bot_{1}}+\epsilon\bar{\psi}_{\bot_{1}}\psi_{ \bot_{1}}-{\sigma_{\bot_{1}}}^{2}-\epsilon{\phi_{\bot_{1}}}^{2}\\ &+a_{1}\left(\bar{\psi}_{\parallel}\varphi_{\parallel}+\bar{ \varphi}_{\parallel}\psi_{\parallel}+2i\sigma_{\parallel}\phi_{\parallel}\right) \\ &+a_{2}\left(\bar{\psi}_{\parallel}\varphi_{\bot_{1}}+\bar{\psi}_{ \bot_{1}}\varphi_{\parallel}+\bar{\varphi}_{\parallel}\psi_{\bot_{1}}+\bar{ \varphi}_{\bot_{1}}\psi_{\parallel}+2i\left(\sigma_{\parallel}\phi_{\bot_{1} }+\sigma_{\bot_{1}}\phi_{\parallel}\right)\right)\\ &+a_{3}\left(\bar{\psi}_{\bot_{1}}\varphi_{\bot_{1}}+\bar{ \varphi}_{\bot_{1}}\psi_{\bot_{1}}+2i\sigma_{\bot_{1}}\phi_{\bot_{1}}\right) \end{split} \tag{49}\] with \[\begin{split} a_{1}&=2b_{\parallel}-1+2qv(1-b_{ \parallel})n_{\parallel}^{3}+2v\sqrt{\frac{1-b_{\parallel}}{\alpha}}g_{1},\\ a_{2}&=2qv(1-b_{\bot})n_{\parallel}^{2}n_{\bot}+ 2v\sqrt{\frac{1-b_{\bot}}{3\alpha}}g_{2},\\ a_{3}&=-1+2qvn_{\parallel}n_{\bot}^{2}+2v\sqrt{ \frac{1}{3\alpha}}g_{3}.\end{split} \tag{50}\] Then the integration (48) over the fields generates a square root of a determinant, and we obtain \[Z_{N=2}=\sqrt{\pi}\int dg_{1}dg_{2}dg_{3}\,e^{-g_{1}^{2}-g_{2}^{2}-g_{3}^{2}} \left|a_{2}^{2}-a_{1}a_{3}\right|. \tag{51}\] #### 4.2.4 Large \(N\) For \(N>2\) we will not obtain exact expressions of the distributions. We will rather obtain an expression which is a good approximation for large \(N\). For large \(N\) the degrees of freedom carried by the \(\perp_{2}\) fields will dominate over those of the \(\|\perp_{1}\) fields, since the former is \((N-2)\)-dimensional, while the latter is 2-dimensional. 
Therefore the dynamics of the \(\perp_{2}\) fields can well be determined by themselves with little effects from the \(\|\perp_{1}\) fields, which may be ignored in the large-\(N\) limit. Then the dynamics of the \(\|\perp_{1}\) fields may be computed in the backgrounds of the \(\perp_{2}\) fields, which can well be approximated by their classical values because of their large number of degrees of freedom for large \(N\). More precisely, our approximation is given by \[Z=Z_{\perp_{2}}\,Z_{\|\perp_{1}}(R). \tag{52}\] Here \(Z_{\perp_{2}}\) is the partition function determined solely by the \(\perp_{2}\) fields, \[Z_{\perp_{2}}=(-1)^{N-2}\int d\bar{\psi}_{\perp_{2}}\cdots d\sigma_{\perp_{2} }\,e^{S_{\perp_{2}}}, \tag{53}\] where \(S_{\perp_{2}}\) is the collection of the terms which contain only the \(\perp_{2}\) fields in (C1) with (C2)10. The computation of the partition function \(Z_{\perp_{2}}\) is the same as that in the previous paper [26], because \(S_{\perp_{2}}\) has the same form as the action of the transverse directions there11. Footnote 10: For instance, we include \(\bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}\bar{\varphi}_{\perp_{2}}\cdot \varphi_{\perp_{2}}\) but ignore \(\bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}\bar{\varphi}_{\perp_{1}}\varphi_{ \perp_{1}}\), \(\bar{\psi}_{\perp_{2}}\cdot\psi_{\perp_{2}}\bar{\varphi}_{\parallel}\varphi_{ \parallel}\), etc., because of the reason mentioned in the first paragraph. The ignored terms will be considered in \(Z_{\|\perp_{1}}\). Footnote 11: But note the difference of the dimensions of \(\perp_{2}\) here and \(\perp\) in [26], where the former is \(N-2\), while the latter is \(N-1\). Therefore when we take a result from [26], we have to deduct \(N\) by one. \(Z_{\|\perp_{1}}(R)\) is the partition function of the \(\|\perp_{1}\) fields in the background of the \(\perp_{2}\) fields, \[Z_{\|\perp_{1}}(R)=\int d\bar{\psi}_{\|}\cdots d\sigma_{\perp_{1}}e^{S_{\| \perp_{1}}(R)}, \tag{54}\] where \(R\) denotes the classical backgrounds of the \(\perp_{2}\) fields, as will be explained below in more detail. Here the action \(S_{\|\perp_{1}}(R)\) is composed of all the terms which contain the \(\|\perp_{1}\) fields in (42) and (C1). Part of the terms in \(S_{\|\perp_{1}}(R)\) contain the \(\perp_{2}\) fields as well. For large \(N\) these \(\perp_{2}\) fields may well be approximated by their classical values because of the large degrees of freedom of the \(\perp_{2}\) fields. For instance, we perform replacements, \[\bar{\psi}_{\perp_{2}}\cdot\varphi_{\perp_{2}}\bar{\psi}_{\|}\varphi_{\|} \rightarrow\langle\bar{\psi}_{\perp_{2}}\cdot\varphi_{\perp_{2}}\rangle\bar{ \psi}_{\|}\varphi_{\|}, \tag{55}\] where \(\langle\cdot\rangle\) denotes an expectation value. By doing such replacements we obtain \(S_{\|\perp_{1}}(R)\), whose dynamical fields are only the \(\|\perp_{1}\) fields. Obtaining the explicit form of \(S_{\parallel\perp_{1}}(R)\) proceeds as follows. The quadratic and quartic terms of the \(\parallel\perp_{1}\) fields can be processed in the same manner as are performed for \(N=2\) in Section 4.2.3, and we obtain \(K_{\parallel\perp_{1}}\) in (49) with (50). Then the four-interaction terms between the \(\parallel\perp_{1}\) fields and the \(\perp_{2}\) fields, where the latter are replaced by their expectation values like in (55), generate some quadratic terms of the former, which are explicitly given in (E7) of Appendix E. 
Thus we have \[S_{\parallel\perp_{1}}(R)=K_{\parallel\perp_{1}}+V_{\parallel\perp_{1},\perp_ {2}}(R), \tag{56}\] whose terms are all quadratic in the \(\parallel\perp_{1}\) fields. Then the computation of the partition function (54) is just a computation of a determinant, and we obtain \[Z_{\parallel\perp_{1}}(R)=\sqrt{\pi}\int dg_{1}dg_{2}dg_{3}\,e^{-g_{1}^{2}-g_ {2}^{2}-g_{3}^{2}}\sqrt{\det H}, \tag{57}\] where \(H\) is given by \[H=\left(\begin{array}{cccc}\epsilon-A_{1}R_{22}&a_{1}-A_{1}R_{12}&0&a_{2}\\ a_{1}-A_{1}R_{12}&-1-A_{1}R_{11}&a_{2}&0\\ 0&a_{2}&\epsilon-A_{2}R_{22}&a_{3}-A_{2}R_{12}\\ a_{2}&0&a_{3}-A_{2}R_{12}&-1-A_{2}R_{11}\end{array}\right), \tag{58}\] where \(a_{i}\) are given in (50), \(R_{ij}\) are the values of the two point of correlation functions of the \(\perp_{2}\) fields explicitly given in (E3) and (E4), and \[A_{1}=\frac{8\beta v^{2}(N-2)}{v^{4}+12\alpha\beta},\ A_{2}=\frac{v^{2}(N-2)}{ 3\alpha}. \tag{59}\] The derivation of \(H\) is given in Appendix E. ## 5 Comparison with numerical simulations In this section we compare the distributions obtained for the spiked tensor in Sections 3 and 4 with Monte Carlo (MC) simulations. The method is basically the same as that taken in the previous works of the author [24, 25, 26]. Throughout this section we put \(\alpha=1/2\) without loss of generality. In the MC simulations, all the solutions to the eigenvector equation (1) must be computed for any randomly sampled \(C\) and \(\eta\). Since this requires a reliable polynomial equation solver, we used Mathematica 13 for the MC simulations. It computes the solutions to (1), which are generally complex, among which we take only the real ones. To check whether all the solutions are covered, we checked whether the number of the generally complex solutions to (1) agreed with the number \(2^{N}-1\) of the generally complex eigenvectors proven in [13], every time the solutions were computed. In fact, when \(N\) is large, we encountered some cases that a few solutions were missing. However, the missing rates were too small to statistically be relevant for this study. For example the missing rate was \(\lesssim 10^{-4}\) in the \(N=9\) data we use in this paper. We used a workstation which had a Xeon W2295 (3.0GHz, 18 cores), 128GB DDR4 memory, and Ubuntu 20 as OS. The Monte Carlo simulations were performed by the following procedure. * Randomly sample \(C\) and \(\eta\). Each \(\eta_{a}\) is randomly sampled by the normal distribution with the mean value zero and the standard deviation \(\sqrt{2\beta}\). Each \(C_{abc}\) is randomly sampled by the normal distribution with the mean value zero and the standard deviation \(1/\sqrt{d_{abc}}\), corresponding to \(\alpha=1/2\), where \(d_{abc}\) is the degeneracy factor defined by12 Footnote 12: This degeneracy factor is because the Gaussian term in (5) is \(C_{abc}C_{abc}=\sum_{a\leq b\leq c=1}^{N}d_{abc}C_{abc}^{2}\) in terms of the independent components of the symmetric tensor \(C\). \[d_{abc}=\left\{\begin{array}{ll}1&\mbox{for $a=b=c$,}\\ 3&\mbox{for $a\neq b=c$ or $b\neq c=a$ or $c\neq a=b$,}\\ 6&\mbox{for $a\neq b\neq c\neq a$.}\end{array}\right.\] (60) * As explained above, compute all the complex solutions to the eigenvector equation (1), and pick up only the real ones \(v^{i}\) (\(i=1,2,\cdots,\#\mbox{sol}(C,\eta)\)). * Store \(\left(|v^{i}|,v^{i}\cdot n/|v^{i}|,\mbox{sign}\left(\det M(v^{i},Q,C)\right) \right)\) for \(i=1,2,\cdots,\#\mbox{sol}(C,\eta)\). * Repeat the above processes. 
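As an illustration of the sampling step in the procedure above, the sketch below draws \(\eta\) and the independent components of \(C\) with the standard deviations stated in the text (\(\sqrt{2\beta}\) and \(1/\sqrt{d_{abc}}\), the latter corresponding to \(\alpha=1/2\)) and then symmetrizes \(C\). Solving the eigenvector equation (1) itself requires a reliable polynomial-system solver (Mathematica in this work) and is not reproduced here.

```python
import numpy as np
from itertools import permutations

def sample_C_eta(N, beta, rng=None):
    """Draw a symmetric Gaussian tensor C (each independent component with std 1/sqrt(d_abc),
    i.e. alpha = 1/2) and a Gaussian deviation vector eta (std sqrt(2*beta))."""
    rng = rng or np.random.default_rng()
    C = np.zeros((N, N, N))
    for a in range(N):
        for b in range(a, N):
            for c in range(b, N):
                if a == b == c:
                    d = 1
                elif a == b or b == c or a == c:
                    d = 3
                else:
                    d = 6
                val = rng.normal(0.0, 1.0 / np.sqrt(d))
                for p in set(permutations((a, b, c))):  # symmetrize over index permutations
                    C[p] = val
    eta = rng.normal(0.0, np.sqrt(2.0 * beta), size=N)
    return C, eta
```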
By this sampling procedure, we obtain a series of data, \(\left(|v^{h}|,v^{h}\cdot n/|v^{h}|,\mbox{sign}\left(\det M(v^{h},Q,C)\right)\right)\) for \(h=1,2,\cdots,L\), where \(L\) denotes the total number of real solutions obtained13. Footnote 13: Note that \(L\) is generally different from \(N_{MC}\) below. To plot the distributions, we classify the data into equally spaced bins in \(v\) and angle \(\theta\) as \[v-\delta_{v}/2<v^{h}\leq v+\delta_{v}/2,\] \[\cos(\theta-\delta_{\theta}/2)<v^{h}\cdot n/|v^{h}|\leq\cos( \theta+\delta_{\theta}/2), \tag{61}\] \[\mbox{sign}\left(\det M(v^{h},Q,C)\right)=1\mbox{ or sign}\left( \det M(v^{h},Q,C)\right)=-1,\] where \(v,\theta\) are the center values of a bin, and \(\delta_{v},\delta_{\theta}\) are the sizes of a bin. We denote the total number of data satisfying (61) as \(\mathcal{N}_{\delta_{v},\delta_{\theta},+}(v,\theta)\) and \(\mathcal{N}_{\delta_{v},\delta_{\theta},-}(v,\theta)\) for \(\mbox{sign}\left(\det M(v^{h},Q,C)\right)=1\) and \(\mbox{sign}\left(\det M(v^{h},Q,C)\right)=-1\), respectively. Then the distribution of the real eigenvectors from a data is given by \[\rho_{MC}(v,\theta;q,\beta)=\frac{1}{N_{MC}\delta_{v}\delta_{\theta}}\left({ \cal N}_{\delta_{v},\delta_{\theta},+}(v,\theta)+{\cal N}_{\delta_{v},\delta_{ \theta},-}(v,\theta)\pm\sqrt{{\cal N}_{\delta_{v},\delta_{\theta},+}(v,\theta )+{\cal N}_{\delta_{v},\delta_{\theta},-}(v,\theta)}\right), \tag{62}\] where \(N_{MC}\) denotes the total number of sampling processes in obtaining the data and the \(\pm\) part represents error estimates. The signed distribution is given by \[\rho_{MC}^{\rm signed}(v,\theta;q,\beta)=\frac{1}{N_{MC}\delta_{v}\delta_{ \theta}}\left({\cal N}_{\delta_{v},\delta_{\theta},+}(v,\theta)-{\cal N}_{ \delta_{v},\delta_{\theta},-}(v,\theta)\pm\sqrt{{\cal N}_{\delta_{v},\delta_{ \theta},+}(v,\theta)+{\cal N}_{\delta_{v},\delta_{\theta},-}(v,\theta)}\right). \tag{63}\] As for the analytical side, since we take only the size \(|v|\) and the relative angle \(\theta\) as data, the above MC distributions should be compared with \[\rho_{\rm analy}(v,\theta;q,\beta)dvd\theta= \int_{|v^{\prime}|=v,\,v^{\prime}\cdot n/|v^{\prime}|=\cos(\theta )}d^{N}v^{\prime}\rho(v^{\prime},q,n,\beta) \tag{64}\] \[= S_{N-2}v^{N-1}\sin^{N-2}(\theta)\,\rho(v,q,n,\beta)dvd\theta,\] where \(S_{N-2}=2\pi^{(N-1)/2}/\Gamma[(N-1)/2]\) is the surface volume of a unit sphere in the \(N-1\)-dimensional flat space. Here \(\rho(v,q,n,\beta)\) is one of the expressions obtained in Sections 3 and 4, and \(v\) in the argument of \(\rho\) on the righthand side abusively denotes an arbitrary vector \(v^{\prime}\) which satisfies \(|v^{\prime}|=v,\,v^{\prime}\cdot n/|v^{\prime}|=\cos(\theta)\). In the following we will compare the Monte Carlo and the analytical results. Figure 1: The MC signed distribution (63) is plotted for a data with \(N=9,\beta=10^{-4},q=10\) and total sampling number \(N_{MC}=4\cdot 10^{4}\). Let us first consider the signed distribution. The analytical result is obtained by putting (26) with (28) into (64). Since the analytical result is an exact result, it should agree with the MC result within errors. In Figure 1, we plot the MC result (63) for \(N=9,\beta=10^{-4},q=10\) with \(N_{MC}=4\cdot 10^{4}\). As examples, the analytical and MC results are compared at two slices, one at \(\left|v\right|=0.105\) and the other at \(\theta=\pi/2\) in the two slots of Figure 2. 
They agree quite well within error estimates, supporting the validity of both the analytical and the MC computations. As in Figure 1 and the left slot of Figure 2, an evident negative peak can be observed around \(\left|v\right|\sim 0.1\) and \(\theta\sim 0.5\). This peak approximately corresponds to an eigenvector \(q^{-1}n_{a}\) of the background tensor \(Q_{abc}=q\,n_{a}n_{b}n_{c}\). In fact, the location satisfies \(\left|v\right|\sim q^{-1}\), while the angle is not strictly \(\theta=0\). The reason is that the volume factor in (64) contains \(\sin^{N-2}(\theta)\), which pushes the peak away from \(\theta=0\). For the same reason, the other major structures are concentrated around \(\theta=\pi/2\) in Figure 1. A large-\(N\) limit in which this volume effect is effectively removed will be discussed in Section 6. In Figure 1 and the right slot of Figure 2 one can also see a peak around \(\left|v\right|\sim 0.04,\theta\sim\pi/2\). This peak corresponds to the trivial eigenvector \(v=0\). Because \(\beta>0\), the distribution broadens around \(\left|v\right|\sim 0\), and the volume factor \(v^{N-1}\) in (64) pushes the peak away from \(\left|v\right|=0\). In Figure 3 the MC distribution (62) is shown for the same data. Except for the signs, the characteristics of the distribution are more or less similar to the signed case. On the other hand, the analytic result for this case differs in that the partition function \(Z\) in (35) is computed by the approximation (52), while it was exact for the signed case. The exact expression of \(Z_{\perp_{2}}\) can be taken from the previous result in [26], which is explicitly given in Appendix F. As for \(Z_{\parallel\perp_{1}}\), by numerically integrating (57) on a grid of points in \(|v|\) and \(\theta\), an interpolation function of \(Z_{\parallel\perp_{1}}\) is computed and used. In Figure 4 the analytic and the MC results are compared. The agreement is fairly satisfactory except for some slight systematic deviations around a peak.

Figure 2: The comparison between the analytical and the MC results with the same data as of Figure 1. The analytical result is drawn by the solid lines and the MC results are plotted with error bars. The comparisons are shown for two example slices in \(\left|v\right|\) and \(\theta\); the left is at \(\left|v\right|=0.105\) and the right is at \(\theta=\pi/2\).

Figure 3: The MC distribution (62) is plotted for the same data as used in Figure 1 for \(N=9,\beta=10^{-4},q=10\) and \(N_{MC}=4\cdot 10^{4}\).

Figure 4: The MC results are plotted with error bars for the same data as in Figure 3. The analytic result is drawn by the solid lines. The left slot is of the slice at \(|v|=0.105\), and the right at \(\theta=\pi/2\).

## 6 Large-\(N\) limit

In this section we will take large-\(N\) limits of the distribution obtained in Section 4.2 for a spiked tensor. We will particularly pay attention to the parameter region where the peak corresponding to the background \(Q\) can be seen in the eigenvector distribution. We will consider two large-\(N\) limits. In one large-\(N\) limit, we will derive the result that a peak can be well identified with \(Q\) for the parameter region, \(\alpha q^{2}/N\gtrsim 0.6,\beta q^{2}N\lesssim 0.1\). In particular for \(\beta q^{2}=0\), we will find the threshold value to be \(0.66<(\alpha q^{2}/N)_{c}<0.67\), which agrees with Proposition 2 of [29].
However this peak is always smaller than the other peak(s) at \(n_{\parallel}=0\) and therefore relatively vanishes in the strict large-\(N\) limit. In the other scaling limit, \(\alpha q^{2}\sim N^{\gamma},\beta q^{2}\sim N^{-\gamma}\) with \(\gamma>1\), the peak remains in the strict large-\(N\) limit. We want to consider large-\(N\) limits which keep both the parameters \(Q\) and \(\beta\) relevant. As was discussed in Section 5, the volume factor \(\sin^{N-2}\theta\) in (64) suppresses the peak of the eigenvector \(q^{-1}n\) of the background tensor \(Q\), and this suppression becomes stronger as \(N\) becomes larger. Therefore, to obtain an interesting large-\(N\) limit, the parameters must be scaled so as to compete with \(\sin^{N-2}\theta\sim e^{N\log(\sin\theta)}\). A large-\(N\) scaling which makes the exponential factor in (41) in this order is given by \[\alpha=\frac{N\tilde{\alpha}}{q^{2}},\ \beta=\frac{\tilde{\beta}}{Nq^{2}},\ v= \frac{\tilde{v}}{q}, \tag{65}\] where \(\tilde{\alpha},\tilde{\beta}\) are kept finite. Here the factors of \(q\) are to absorb the dependence on \(q\) from the formulas below. Let us discuss the large-\(N\) limit of \(Z=Z_{\perp_{2}}Z_{\parallel\perp_{1}}\) in Section 4.2.4. The large-\(N\) limit of \(Z_{\perp_{2}}\) was determined in [25], and it is given by \[Z_{\perp_{2}}^{N=\infty}\sim\mbox{const.}\,e^{NS_{\perp_{2}}^{\infty}}, \tag{66}\] where14 Footnote 14: For simplicity, \(S_{\perp_{2}}^{\infty}\) is shifted by an irrelevant constant from the corresponding expression with \(R=1/2\) in [25]. \[S_{\perp_{2}}^{\infty}(x)=\left\{\begin{array}{ll}\log 2+\log(x)+\frac{1- \sqrt{1-4x}}{4x}-\log\left(1-\sqrt{1-4x}\right)&\mbox{ for }0<x<\frac{1}{4},\\ \frac{1}{4x}+\frac{1}{2}\log(x)&\mbox{ for }\frac{1}{4}<x,\end{array}\right. \tag{67}\] with15\(x=(N-2)v^{2}/(3\alpha)\sim\tilde{v}^{2}/(3\tilde{\alpha})\). As for \(Z_{\parallel\perp_{1}}\), one can easily see that the limit of (57) is just given by dropping the terms dependent on \(g_{i}\) in (50), while the \(N\)-dependencies of \(A_{i}\) in (59) and \(R_{ij}\) in (E3) and (E4) drop out. Therefore \(H\) does not depend on \(g_{i}\) and we get Footnote 15: \(N\) must be deducted by one, when we take a result from [25]. See a footnote below (E2). \[Z_{\parallel\perp_{1}}^{N=\infty}=\pi^{2}\left.\sqrt{\det H}\right|_{g_{i}=0}, \tag{68}\] which has no relevant effects to the formula below for the large \(N\) limit. By collecting the results above and using (64) and (41), we obtain \[S_{\infty}(\tilde{v},\theta)= \lim_{N\rightarrow\infty}\frac{1}{N}\log\rho_{\text{analy}} \tag{69}\] \[= \text{const.}+S_{\perp_{2}}^{\infty}+\log\tilde{v}+\log(n_{\perp}) -\frac{1}{2}\log(\tilde{v}^{4}+12\tilde{\alpha}\tilde{\beta})\] \[+\frac{-\tilde{\alpha}\tilde{v}^{2}+2\tilde{\alpha}\tilde{v}^{3} n_{\parallel}^{3}-\tilde{\alpha}\tilde{v}^{4}n_{\parallel}^{6}}{\tilde{v}^{4}+4 \tilde{\alpha}\tilde{\beta}}-\frac{3\tilde{\alpha}\tilde{v}^{4}n_{\parallel} ^{4}n_{\perp}{}^{2}}{\tilde{v}^{4}+12\tilde{\alpha}\tilde{\beta}},\] where \(n_{\parallel}=\cos\theta\) (\(n_{\perp}=\sin\theta\)), and const. is the part not dependent on \(\tilde{v}\) or \(\theta\). It is interesting to study the profile of \(S_{\infty}(\tilde{v},\theta)\) in the \(\tilde{v}\) and \(\theta\) plane for various values of \(\tilde{\alpha},\tilde{\beta}\). We have numerically studied it for the parameter region \(10^{-3}\leq\tilde{\alpha}\leq 10^{3},10^{-3}\leq\tilde{\beta}\leq 10^{3}\). 
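A numerical scan of this kind is straightforward to reproduce; the sketch below is our own re-implementation of (67) and (69), dropping the \((\tilde{v},\theta)\)-independent constant and using \(x=\tilde{v}^{2}/(3\tilde{\alpha})\). The final grid search simply looks for a maximum restricted to \(n_{\parallel}>0\), i.e. the candidate peak associated with \(Q\).

```python
import numpy as np

def S_perp2_inf(x):
    """The piecewise large-N free energy (67)."""
    if x < 0.25:
        s = np.sqrt(1.0 - 4.0 * x)
        return np.log(2.0) + np.log(x) + (1.0 - s) / (4.0 * x) - np.log(1.0 - s)
    return 1.0 / (4.0 * x) + 0.5 * np.log(x)

def S_inf(vt, theta, at, bt):
    """S_infinity(v~, theta) of (69), up to the constant."""
    npar, nperp = np.cos(theta), np.sin(theta)
    return (S_perp2_inf(vt**2 / (3.0 * at)) + np.log(vt) + np.log(nperp)
            - 0.5 * np.log(vt**4 + 12.0 * at * bt)
            + (-at * vt**2 + 2.0 * at * vt**3 * npar**3 - at * vt**4 * npar**6) / (vt**4 + 4.0 * at * bt)
            - 3.0 * at * vt**4 * npar**4 * nperp**2 / (vt**4 + 12.0 * at * bt))

# example: look for a maximum with n_par > 0 (the peak associated with Q)
at, bt = 1.0, 0.01
grid = [(vt, th) for vt in np.linspace(0.05, 2.0, 200)
                 for th in np.linspace(0.05, np.pi / 2, 200)]
best = max(((S_inf(vt, th, at, bt), vt, np.cos(th)) for vt, th in grid
            if np.cos(th) > 0.3), key=lambda t: t[0])
print("S, v~, n_par at the candidate peak:", best)
```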
In the unshaded region of Figure 5, the peak(s) exist only along \(n_{\parallel}=0\), as is shown in the left slot of Figure 6 as an example. In the shaded region, in addition to the peak(s) at \(n_{\parallel}=0\), there exists also a peak which has non-zero \(n_{\parallel}\). This peak corresponds to the eigenvector \(q^{-1}n\) of the background tensor \(Q\), as is shown in the right slot of Figure 6 as an example. In Figure 7, the values of \(n_{\parallel}\) and \(\tilde{v}\) are plotted for the latter peak. The location can be well identified with \(q^{-1}n\), if the values take \(n_{\parallel}\sim 1\) and \(\tilde{v}\sim 1\). As can be seen in the plots, this occurs in the region, \(\log_{10}\tilde{\alpha}\gtrsim-0.2\) and \(\log_{10}\tilde{\beta}\lesssim-1\). This is the parameter region in which the background tensor \(Q\) can be detected well. Figure 5: In the shaded region of the parameters, the eigenvector distribution has a peak of \(S_{\infty}\) corresponding to the eigenvector of \(Q\). It is interesting to compare this detectable region with a result of [29]. As can be seen in Figure 5, the shaded region has an edge around \(\log_{10}\tilde{\alpha}\sim-0.2\), namely, \(\tilde{\alpha}\sim 0.63\), independent of \(\tilde{\beta}\) for \(\log_{10}\tilde{\beta}\lesssim-1\). To see the threshold value more precisely for \(\tilde{\beta}=0\), we plot \(n_{\parallel}\) and \(\tilde{v}\) of the peak with \(n_{\parallel}>0\) in Figure 8. We find that the peak does not exist at \(\tilde{\alpha}\leq 0.66\), but exists at \(\tilde{\alpha}\geq 0.67\) with \(n_{\parallel}\gtrsim 0.7\). On the other hand, as explained in Appendix G, Proposition 2 of [29] states that the threshold value is \(\tilde{\alpha}=2/3\), which indeed agrees with our value. We numerically observed that a peak at \(n_{\parallel}=0\) always take the largest value of \(S_{\infty}\) at least in the parameter region of \(\tilde{\alpha},\tilde{\beta}\) we have studied above. This means that, because \(\rho\sim e^{NS_{\infty}}\), the peak corresponding to \(Q\) will effectively be invisible compared to the peak(s) at \(n_{\parallel}=0\) in the strict large-\(N\) limit. Therefore in the strict large-\(N\) limit, \(Q\), namely a "signal", cannot be detected by solving the eigenvector equation (1). The main reason for the above difficulty of detection comes from the strong effect of the volume factor \(\sin^{N-2}\theta\) in (64), which enhances the region \(n_{\parallel}\sim 0\) so strongly. Therefore an obvious way to solve this difficulty is to consider another scaling limit which overwhelms the volume factor. An example is given by \[\alpha=\frac{N^{\gamma}\tilde{\alpha}}{q^{2}},\ \beta=\frac{\tilde{\beta}}{N^{ \gamma}q^{2}},\ v=\frac{\tilde{v}}{q},\ \gamma>1. \tag{70}\] In this limit, \(x=(N-2)v^{2}/(3\alpha)\sim N^{-\gamma+1}\to 0\) in the large-\(N\) limit, so therefore (67) becomes a constant, meaning that \(Z_{\perp_{2}}\) is a free theory independent of \(v\). As for \(Z_{\parallel\perp_{1}}\), \(A_{i}\to 0\) and \(R_{ij}\) approaches finite values, so \(Z_{\parallel\perp_{1}}\) is again a finite quantity. 
Therefore from (41), the major contribution comes only from the exponent, and we obtain \[\begin{split} S_{\infty}^{\gamma}(\tilde{v},\theta)=& \lim_{N\rightarrow\infty}\frac{1}{N^{\gamma}}\log\rho_{\rm analy}\\ =&\frac{-\tilde{\alpha}\tilde{v}^{2}+2\tilde{ \alpha}\tilde{v}^{3}{n_{\parallel}}^{3}-\tilde{\alpha}\tilde{v}^{4}{n_{ \parallel}}^{6}}{\tilde{v}^{4}+4\tilde{\alpha}\tilde{\beta}}-\frac{3\tilde{ \alpha}\tilde{v}^{4}{n_{\parallel}}^{4}{n_{\perp}}^{2}}{\tilde{v}^{4}+12 \tilde{\alpha}\tilde{\beta}}.\end{split} \tag{71}\] As is shown in Appendix H, it is straightforward to prove that the maximum value of \(S_{\infty}^{\gamma}\) is 0, and this occurs only at three locations: (i) \(\tilde{v}=0\), (ii) \(\tilde{v}\rightarrow\infty,n_{\parallel}=0\), (iii) \(\tilde{v}=1,n_{\parallel}=1\). The last location corresponds to the background \(Q\). An example of \(S_{\infty}^{\gamma}\) is shown in Figure 9. Since the eigenvector distribution is given by \(\rho\sim e^{N^{\gamma}S_{\infty}^{\gamma}}\), there remains only the three locations above in the strict large-\(N\) limit. This means that, in the limit, a finite eigenvector (\(v\neq 0,\infty\)) is surely that of the background \(Q\). ## 7 Summary and future prospects In this paper we have studied the real eigenvector distributions of real symmetric order-three Gaussian random tensors in the case that the random tensors have non-zero mean value backgrounds and the eigenvector equations have Gaussian random deviations. This is an extension of the previous studies [24, 25, 26], which have no such mean values or deviations. We have derived the quantum field theories with quartic interactions whose partition functions give the distributions. For the background tensor being rank-one (a spiked tensor case) in particular, we have explicitly derived the distributions by computing the partition functions exactly or approximately. We have obtained good agreement between the analytical results and Monte Carlo simulations. We have derived the scaling and range of parameters for the background tensor to be detectable in the distributions in the large-\(N\) limit. Our threshold value has agreed with that of [29]. The quantum field theories we have derived in this paper are much more complicated than those in the previous studies [24, 25, 26] due to the presence of the backgrounds and the deviations. Nonetheless, we have obtained some exact expressions for the signed distributions, and have also derived some approximate expressions of the (authentic) distributions, which agree very well with the Monte Carlo results. This success can be ascribed to the quantum field theoretical expressions, to which we can apply various well-developed techniques and knowledge of quantum field theories. The results of this paper strengthen our belief that the quantum field theoretical procedure for computing distributions of quantities in random tensors is general, powerful, and intuitive. As far as random tensors are Gaussian, it is in principle straightforward to extend the quantum field theoretical procedure to some other problems in random tensors; distributions of complex eigenvectors/values, tensor rank decompositions, correlations among eigenvectors, etc. Although derived quantum field theories with quartic interactions may become quite complicated, it will always be possible to find ways to, exactly or approximately, compute the partition functions by quantum field theoretical techniques, knowledge and intuition. 
These studies will enrich fundamental knowledge about random tensors, which will eventually be applied to various subjects in future studies. Tensor models emerged from discrete approaches to quantum gravity [6, 7, 8, 9], and are also playing an active part in more recent approaches, such as the AdS/CFT correspondence [33]. A question of particular interest to the author is whether tensor models exhibit a phenomenon analogous to the Gross-Witten-Wadia transition [4, 5]. In fact there are some indications that similar transitions exist in the context of a discrete model of quantum gravity [34, 35]. We hope that the knowledge about random tensors enriched along the lines of our studies will give some insights into quantum gravity in the future. ## Acknowledgment The author is supported in part by JSPS KAKENHI Grant No.19K03825. He would like to thank M. Akyazi, N. Delporte, O. Evnin, R. Gurau, L. Lionni, Z. Mirzaiyan, and R. Toriumi for some stimulating discussions.
2306.16478
Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering
This paper studies a category of visual question answering tasks, in which accessing external knowledge is necessary for answering the questions. This category is called outside-knowledge visual question answering (OK-VQA). A major step in developing OK-VQA systems is to retrieve relevant documents for the given multi-modal query. Current state-of-the-art asymmetric dense retrieval model for this task uses an architecture with a multi-modal query encoder and a uni-modal document encoder. Such an architecture requires a large amount of training data for effective performance. We propose an automatic data generation pipeline for pre-training passage retrieval models for OK-VQA tasks. The proposed approach leads to 26.9% Precision@5 improvements compared to the current state-of-the-art asymmetric architecture. Additionally, the proposed pre-training approach exhibits a good ability in zero-shot retrieval scenarios.
Alireza Salemi, Mahta Rafiee, Hamed Zamani
2023-06-28T18:06:40Z
http://arxiv.org/abs/2306.16478v1
# Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering ###### Abstract. This paper studies a category of visual question answering tasks, in which accessing external knowledge is necessary for answering the questions. This category is called outside-knowledge visual question answering (OK-VQA). A major step in developing OK-VQA systems is to retrieve relevant documents for the given multi-modal query. Current state-of-the-art asymmetric dense retrieval model for this task uses an architecture with a multi-modal query encoder and a uni-modal document encoder. Such an architecture requires a large amount of training data for effective performance. We propose an automatic data generation pipeline for pre-training passage retrieval models for OK-VQA tasks. The proposed approach leads to 26.9% Precision@5 improvements compared to the current state-of-the-art asymmetric architecture. Additionally, the proposed pre-training approach exhibits a good ability in zero-shot retrieval scenarios. Dense Retrieval; Visual Question Answering; Multi-Modal Retrieval; Pre-training; Data Generation + Footnote †: journal: Computer vision + Footnote †: journal: Computer vision on asymmetric architecture, unveiling a novel methodology for enhancing the training of superior asymmetric retrievers. Importantly, this approach effectively curtails the necessity of caption generation solely to the training phase, sparing it from being required during inference time. Qu et al. (2019) demonstrates that supervised asymmetric dense retrieval models with multi-modal query encoder and uni-modal document encoder lead to state-of-the-art passage retrieval performance for OK-VQA tasks. However, it requires large-scale manually labeled training data which is expensive and time consuming to obtain. Inspired by the prior research on text retrieval based on weak supervision (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2019) and Inverse Cloze Task (ICT) pre-training (Dai et al., 2019), this paper introduces a novel pipeline for automatic generation of training data for OK-VQA tasks. This data generation pipeline requires no manually labeled OK-VQA data. It first obtains an image corpus (e.g., MS COCO (Liu et al., 2019)) and generates captions for the images. Each caption is then used as a query to retrieve text passages from Wikipedia. We then select some noun phrases from each passage as potential answers and generate a question for each of them using a fine-tuned language model. To reduce the noise introduced into the pre-training data, we design a question-answering model and filter out questions for which the model cannot produce a close enough answer. This process leads to a large-scale dataset with about 4.6 million question-image pairs for OK-VQA tasks. The generated data can then be used for pre-training dense retrieval models for OK-VQA tasks. To the best of out knowledge, this is the first attempt to automatic generation of data for OK-VQA tasks. Our experiments on the OK-VQA passage retrieval dataset (Zhou et al., 2017; Zhang et al., 2019) demonstrate that training dense retrieval models using the proposed data generation pipeline leads to 40.2% Precision@5 improvements in a zero-shot setting compared to competitive baselines. We also show that pre-training state-of-the-art supervised dense retrieval models improves state-of-the-art performance by 26.9% in terms of Precision@5. The obtained improvements are statistically significant in all cases. 
Further analysis suggests that the proposed pre-trained model that is fine-tuned only on 25% of the OK-VQA supervised data outperforms the model that is trained on 100% of the supervised data without pre-training. Moreover, the performance of the pre-trained model becomes relatively stable after observing 50% of the supervised training data. Therefore, the proposed pre-training procedure reduces the need to large-scale manually labeled training sets. In summary, the major contributions of this work include: 1. Introducing the first automatic data generation pipeline for outside-knowledge visual question answering tasks. 2. Improving the current state-of-the-art asymmetric passage retrieval models in both zero-shot and supervised settings. 3. Providing extensive result analysis to better understand the impact of pre-training on OK-VQA performance. To foster research in this area, we release our generated dataset, our data creation pipeline, and our learned model parameters.1 Footnote 1: The data and code are available at [https://github.com/alirezasalemi7/pretraining-multimodal-dense-retriever-for-okvqa](https://github.com/alirezasalemi7/pretraining-multimodal-dense-retriever-for-okvqa) ## 2. Related Work _Multi-Modal Dense Passage Retrieval_. Multi-modal dense retrieval can be defined in different categories based on where the multi-modality takes place. The multi-modality can be in the queries, with a corpus of uni-modal documents, which enables the underlying information need to be expressed through a multi-modal representation (Zhou et al., 2017). Our work fits into this category with queries comprised of images with corresponding questions, and uni-modal textual passages in the corpus. Another line of work has been focusing on multi-modal documents in the corpus, such as a mix of textual, tabular, or visual information, while the query is expressed in one modality (Liu et al., 2019; Liu et al., 2019; Zhang et al., 2019). In another setting, both queries and documents can be multi-modal, for example where the answer to a query about an image contains multiple modalities (Zhang et al., 2019). Cross-modal retrieval is also partly related to multi-modal retrieval, where both queries and documents are uni-modal but they come from different modalities (Liu et al., 2019; Zhang et al., 2019). _Outside-Knowledge Visual Question Answering_. In standard visual question answering (VQA) (Dai et al., 2019), the answer lies in the image; however, in outside-knowledge visual question answering (OK-VQA) (Zhou et al., 2017), the image and question are jointly used to find the answer to the question from an external knowledge source (Zhou et al., 2019). That being said, retrieving relevant passages to a query, which consists of an image and a question about it, plays an essential role in this task (Zhou et al., 2019). Previous work (Liu et al., 2019; Liu et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019) mostly utilizes knowledge graphs as a source of external information; however, the lack of a complete and easily updatable knowledge source is challenging (Dai et al., 2019; Zhang et al., 2019). Therefore, following Qu et al. (Zhou et al., 2019) and Salemi et al. (Salemi et al., 2019), we focus on retrieving passages from Wikipedia as the knowledge source. 
Previous work mostly evaluates OK-VQA based on the answer generation quality (Zhou et al., 2017; Liu et al., 2019; Liu et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019); however, following Qu et al. (Zhou et al., 2019), we only investigate the retrieval performance on this task. In contrast with Salemi et al. (Salemi et al., 2019), which focuses on designing a symmetric architecture for OK-VQA retrieval and answer generation, we investigate data generation and augmentation methods to train the asymmetric retriever architecture proposed by Qu et al. (Zhou et al., 2019) with no labeled training data. _Pre-Training Dense Passage Retrievers_. In recent years, pre-training transformers (Zhu et al., 2019) using semi- and self-supervised tasks has become a standard approach for achieving strong performance in natural language and vision tasks (Zhou et al., 2017; Zhang et al., 2019; Zhang et al., 2019). Moreover, retrieval-specific pre-training tasks, such as the Inverse Cloze Task (ICT) (Dai et al., 2019), have been shown to be effective for uni-modal retrieval. Recently, a multi-modal variant of ICT has been proposed by Lerner et al. (Lerner et al., 2019), in which queries are question-image pairs, and documents are passage-image pairs. However, our work focuses on the case in which passages are only textual, while queries consist of question-image pairs. The research by Changpinyo et al. (Changpinyo et al., 2019) is perhaps the closest work to ours, in which the authors focus on pre-training models for VQA tasks, which is by nature different from OK-VQA. Changpinyo et al. (Changpinyo et al., 2019) only generates questions from the image captions due to the nature of VQA, in which the answers lie in the image. In contrast, we use captions to retrieve a passage relevant to the image and generate questions from that passage to ensure that answering them requires external knowledge. ## 3. Problem Statement While multi-modal retrieval can be defined in different ways as mentioned in Section 2, this paper only focuses on multi-modal scenarios where the query \((Q,I)\) consists of the question \(Q\) about the image \(I\), and the corpus \(C\) from which relevant passages should be selected is only textual. Suppose \(T=\{(Q_{1},I_{1},A_{1},R_{1}),...,(Q_{N},I_{N},A_{N},R_{N})\}\) represents the training set for multi-modal retrieval in this paper. Each training sample in \(T\) consists of a question \(Q_{i}\) written in natural language, an image \(I_{i}\), a set of answers \(A_{i}\) to the question \(Q_{i}\), and a set of relevant passages \(R_{i}\) that contain the answer to \(Q_{i}\). In more detail, the answer set \(A_{i}\) might contain more than one answer to the question; these answers are syntactically different but semantically the same (\(|A_{i}|\geq 1\)). Additionally, each question and image might have more than one related passage (\(|R_{i}|\geq 1\) and \(R_{i}\subseteq C\)). The main task in this paper is to use the training set \(T\) to train a dense retriever that takes the query \((Q,I)\) as input and retrieves \(K\) passages that are relevant to the query from the corpus \(C\) (\(|C|\gg K\)). In this paper, we introduce a pipeline for generating weakly supervised data that matches this problem definition, in order to first pre-train the model on the weakly supervised generated data and then fine-tune the pre-trained model on the task's data. The following sections explain our proposed pipeline for this purpose. ## 4.
The Proposed Pre-Training Pipeline Automatic data generation for (pre-)training neural models for text retrieval and question answering has proven to be effective. For instance, Dehghani et al. (2017) introduced weak supervision in information retrieval by utilizing an existing unsupervised retrieval model as a weak labeler. Zamani and Croft (2018) provided theoretical justification on when and why weak supervision lead to strong and robust improvements. Wang et al. (2019) used a similar approach for adapting well-trained retrieval models to an unseen target domain. More recently, Chang et al. (2020) used Inverse Cloze Task for pre-training text retrieval models and Bonifacio et al. (2020) used large-scale language models, such as GPT (Zamani and Croft, 2020), for data generation. All these approaches are developed for text retrieval tasks. For multi-modal tasks, Changpinyo et al. (2020) focused on pre-training models for visual question answering (VQA) tasks, which is fundamentally different from OK-VQA. Changpinyo et al. (2020) only generates questions from the image's caption due to the nature of VQA, in which the answers lie in the image (e.g., asking about the color of an object in the image). VQA is not an information-seeking task; thus, this approach cannot be applied to OK-VQA. This section introduces our data generation pipeline for pre-training dense passage retrieval models for OK-VQA tasks. ### Automatic Data Generation for Pre-training Figure 2 depicts an overview of our automatic data generation pipeline for pre-training multi-modal dense passage retrieval models. We start with an image and use an automatic image captioning model to produce a textual description of the image. We then retrieve \(M\) passages from a large collection, such as Wikipedia, given the image caption as the query. We then extract a set of potential short answers from the retrieved passages. For each potential answer, we generate a question using a sequence-to-sequence model. We later filter out low quality questions. A negative selection component is also developed to produce data for optimizing retrieval models. The outcome of our pipeline is a set of data instances, each represented as \((Q_{i},I_{i},A_{i},R_{i},N_{i})\), where \(Q_{i}\) is a question about the image \(I_{i}\), \(A_{i}\) is the answer to the question \(Q_{i}\), \(R_{i}\) is a relevant passage to the question, and \(N_{i}\) is a hard negative passage for the question \(Q_{i}\). In the following sections, we explain the procedure of generating each component in detail. Matching Images and Passages using CaptionsFor each image in the MS COCO (Liu et al., 2019) training set (\(82\)K images), we aim at retrieving \(M\) passages. Therefore, designing retriever \(R_{img2text}\), which takes an image as input and retrieves a set of related passages from corpus \(C\), is required. We use a Wikipedia dump with \(11\)M passages as the corpus \(C\).2 We retrieve \(M=5\) passages for each image. Footnote 2: This Wikipedia dump is available at: [https://ciirc.cs.umass.edu/downloads/ORConvQA/all](https://ciirc.cs.umass.edu/downloads/ORConvQA/all) blocks.txt.gz While some models, such as CLIP (Zamani and Croft, 2020) and ALIGN (Liu et al., 2019), are designed to act as \(R_{\text{img2text}}\), for simplicity and without losing generality, we use BM25 (Liu et al., 2019) as \(R_{\text{img2text}}\), in which we use a textual description of the image to retrieve a set of passages. 
To calculate the similarity score between the image \(I\) and the passage \(P\), we use the following formula: \(S_{R}(I,P)=S_{BM25}(\phi_{I\to T}(I),P)\), where \(\phi_{I\to T}\) is a modality converting module that takes an image and generates a textual description for it. Generating a description of an image can happen in several ways. For instance, the textual label of objects in an image can be used to describe the image using text. This approach suffers from two issues: 1) object labels are limited to a pre-defined set, and 2) labeling objects in images in large scale is costly. Conversely, using captions as the image description resolves the mentioned issues by generating an open-ended textual description of an image and using the large-scale available image-caption data on the web. That being said, we use ViT-GPT (Zamani and Croft, 2018), a transformer-based (Liu et al., 2019) image-to-text model, to generate a caption for each image. ViT-GPT is trained on the images and captions provided by MS COCO (Liu et al., 2019) dataset using a cross-entropy loss function. Once the model is trained, we freeze the model's parameters and use it in inference mode. Selecting Potential Answer Phrases from Retrieved PassagesInvestigating the OK-VQA dataset shows that approximately 80% of the answers in this dataset are noun phrases. Following this observation, we use noun phrases in the retrieved passages as potential answers. This approach has been previously used by Lee et al. (2019). In more detail, we use \(\text{spacCy}^{3}\) to extract noun phrases from passages. We consider all noun phrases as potential answers, except those that have a pronoun or determiner (e.g., "a", "an", and "the") in their subtrees. This is because pronouns and determiners usually refer to a specific word in the passage (i.e., co-references), and we would like to select "standalone" answer phrases. Question Generation and FilteringThe next step in the pre-training data generation pipeline is generating a question for each selected answer phrase. Suppose \(M_{\text{QG}}(A,P)\) is a question generator that takes passage \(P\) and the answer phrase \(A\) as input and generates a question \(Q\) whose answer is \(A\). To implement \(M_{QG}\), following Ushio et al. (Ushio et al., 2018), we feed the passage \(P\) and the potential answer phrase \(A\) to T5-large (Vaswani et al., 2017) and instruct the model to generate a question. To this aim, we utilize SQuAD v1.1 (Zhu et al., 2018) dataset for fine-tuning this question generation model. For each training sample in the SQuAD v1.1 dataset, the answer is surrounded with <hl> token, and the passage with the surrounded answer is fed to T5. The cross-entropy loss is used for training the model: \[L_{QG}=-\sum_{i}^{|Q|}\log P(y_{i}|y_{k<i};P^{\prime}) \tag{1}\] where \(y_{i}\) is the \(i^{\text{th}}\) token in the question \(Q\), and \(P^{\prime}\) is the passage \(P\) with <hl> surrounding tokens4. Footnote 4: The checkpoint for this question-generation model is available at: [https://huggingface.co/long/15-large-squad-qa5](https://huggingface.co/long/15-large-squad-qa5) As a reference on the quality of the question generation model, we evaluate it on the test set of SQuAD (Zhu et al., 2018) and it achieves a BLEU-4 (Vaswani et al., 2017) score of 27.21 and rouge-L (Vaswani et al., 2017) of 54.13. To further reduce the amount of noise in the generated pre-training data, we filter out the questions that a question-answering model cannot answer. 
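Before turning to the filtering step, the answer-phrase selection and the highlight-based question generation just described can be sketched as follows. The spaCy pipeline name, the question-generation checkpoint path, and the "generate question:" input prefix are placeholders/assumptions on our part, while the `<hl>` highlighting follows the fine-tuning format described above.

```python
import spacy
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

nlp = spacy.load("en_core_web_sm")

def candidate_answers(passage):
    """Keep noun phrases that have no pronoun or determiner in their subtrees (Section 4.1)."""
    doc = nlp(passage)
    return [np_.text for np_ in doc.noun_chunks
            if not any(tok.pos_ in ("PRON", "DET") for tok in np_.root.subtree)]

# placeholder path: any T5 model fine-tuned for <hl>-style question generation on SQuAD
qg_name = "path/to/t5-large-squad-question-generation"
qg_tok = AutoTokenizer.from_pretrained(qg_name)
qg_model = AutoModelForSeq2SeqLM.from_pretrained(qg_name)

def generate_question(passage, answer):
    """Surround the answer with <hl> tokens and let the seq2seq model generate a question."""
    highlighted = passage.replace(answer, f"<hl> {answer} <hl>", 1)
    ids = qg_tok("generate question: " + highlighted, return_tensors="pt").input_ids
    out = qg_model.generate(ids, max_new_tokens=32)
    return qg_tok.decode(out[0], skip_special_tokens=True)
```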
Suppose \(M_{QA}(Q,P)\) is a question-answering model, which takes the question \(Q\) and the passage \(P\) as inputs and generates or selects an answer phrase. Finally, we only select the generated questions that satisfy the following condition: Rouge-1\((A,M_{QA}(M_{QG}(A,P),P))>T\), where Rouge-1 is the rouge-1 score (Vaswani et al., 2017), \(A\) is the potential answer from the passage \(P\), and \(T\) is a threshold for the similarity of the potential answer and the answer selected by the question-answering model. We use \(T=0.5\) in our experiments. To implement \(M_{QA}\), we use a RoBERTa-base (Vaswani et al., 2017) model that is fine-tuned for answer span selection on the SQuAD dataset. The model is trained based on the log-likelihood of predicting the correct start and end tokens. For selecting the answer span, the span with the highest \(P(S_{i}|P;Q)+P(E_{j}|P;Q)\) is selected, where \(P(S_{i}|P;Q)\) is the probability of the \(i^{\text{th}}\) token being the start of the span and \(P(E_{j}|P;Q)\) is the probability of the \(j^{\text{th}}\) token being the end of the span. As a reference, this question-answering model achieves an F1 score of 82.91% and an exact match of 79.87% on the test set of the SQuAD v2 dataset (Zhu et al., 2018).5 Footnote 5: The checkpoint for this question-answering model is available at: [https://huggingface.co/deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) _Negative Passage Sampling_. Using hard negatives and their quality plays an essential role in the final performance of dense passage retrieval models (Kang et al., 2018). For each generated question, we retrieve passages using BM25. We choose the highest-scored passage that does not contain the answer \(A\) as the negative passage. _Summary_. The proposed pipeline leads to 4,621,973 question-image pairs from 82,783 unique images of MS COCO (Vaswani et al., 2017). The average question, passage, and answer length in the created dataset are \(9.6\pm 3.0\), \(187.2\pm 105.7\) and \(2.3\pm 1.2\) words, respectively. ### Dense Retrieval Model The nature of the multi-modal retrieval task that we attempt to solve in this paper requires the bi-encoder dense passage retriever to encode queries in a multi-modal semantic space and to encode passages in a textual semantic space. We use the asymmetric state-of-the-art dense passage retriever for OK-VQA tasks proposed by Qu et al. (Qu et al., 2018). It uses an asymmetric dense passage retriever with the multi-modal query encoder \(E_{MM}\) and the textual passage encoder \(E_{T}\). Then, the relevance score is calculated as follows: \(S((Q,I),P)=E_{MM}(Q,I)\cdot E_{T}(P)\), where \(\cdot\) denotes the inner product. Following Qu et al. (Qu et al., 2018), we implement \(E_{T}\) using the representation of the [CLS] token provided by a BERT-base (Devlin et al., 2017) model. Similarly, to implement \(E_{MM}\), we utilize the representation of the [CLS] token generated by LXMERT (Li et al., 2018), a vision-language model pre-trained with various vision-language tasks.
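A compact PyTorch sketch of the asymmetric scoring \(S((Q,I),P)=E_{MM}(Q,I)\cdot E_{T}(P)\), together with an in-batch-negative version of the contrastive objective used for training (eq. (2) below), is given here. The BERT and LXMERT checkpoints are standard public ones used only for illustration, and the extraction of the region features that LXMERT expects is omitted.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer, LxmertModel, LxmertTokenizer

bert = BertModel.from_pretrained("bert-base-uncased")                 # E_T
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
lxmert = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")   # E_MM
lxmert_tok = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")

def encode_passages(passages):
    enc = bert_tok(passages, padding=True, truncation=True, max_length=384, return_tensors="pt")
    return bert(**enc).last_hidden_state[:, 0]        # [CLS] representation of each passage

def encode_queries(questions, visual_feats, visual_pos):
    # visual_feats / visual_pos: pre-extracted region features and boxes (e.g. from a detector)
    enc = lxmert_tok(questions, padding=True, truncation=True, max_length=20, return_tensors="pt")
    out = lxmert(**enc, visual_feats=visual_feats, visual_pos=visual_pos)
    return out.language_output[:, 0]                  # [CLS] token from the language stream

def in_batch_contrastive_loss(q_emb, p_emb):
    """q_emb, p_emb: (B, d) query and positive-passage embeddings from the same batch.
    Each query treats the other queries' positives as negatives; the selected hard
    negatives of Section 4.1 could be appended as extra rows of p_emb."""
    scores = q_emb @ p_emb.t()                        # (B, B) inner-product scores
    labels = torch.arange(q_emb.size(0))
    return F.cross_entropy(scores, labels)
```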
To train the retriever, we use a contrastive loss as follows: \[L_{DR}=-\log\frac{e^{S((Q,I),P_{pos})}}{e^{S((Q,I),P_{pos})}+\sum_{P^{\prime} \in\mathbf{P_{neg}}}e^{S((Q,I),P^{\prime})}} \tag{2}\] where \(P_{pos}\) is a positive (relevant) passage and \(\mathbf{P_{neg}}\) is a set of negative passages for the question-image pair \((Q,I)\). In addition to the selected negative passages, we use in-batch negatives, in which all the positive and negative passages of other queries in the same training batch are considered as negative passages to the query. We use the Faiss library (Kang et al., 2018) for indexing and efficient dense retrieval. ## 5. Experiments This section discusses the datasets, experiments, and results obtained in this paper. Figure 2. The proposed data generation pipeline for pre-training OK-VQA models. ### Experimental Setup DatasetIn our experiments, we use the OK-VQA passage retrieval dataset (Zhou et al., 2019), an extension to the OK-VQA dataset (Zhou et al., 2019). This dataset aims at evaluating passage retrieval tasks for outside-knowledge visual question answering tasks. This dataset contains 9009 questions for training, 2523 questions for validation, and 2523 for testing. As the retrieval collection, it uses the same Wikipedia dump that we use during pre-training (11M passages). Pre-training and Fine-tuning setupsIn order to pre-train the multi-modal dense passage retriever, we use a batch size of 32 on four RTX8000 GPUs, each with 49GB of GPU memory and a total of 256GB of RAM, which results in an effective batch size of 128. We utilize the Adam optimizer (Kingmae and Ba, 2015) with a learning rate of \(10^{-5}\). A linear learning rate scheduler with 10% of total training steps as warmup steps is used for pre-training. Additionally, gradient clipping with a clipping value of 1.0 is used in the training procedure. The maximum length of passages and queries for each encoder is 384 and 20 tokens, respectively. We only train the model for one epoch on the pre-training data to avoid overfitting. For fine-tuning on the OK-VQA training set, we follow the same training setup, but we use two epochs and a batch size of 4 on each GPU for a fair comparison with previous work (Zhou et al., 2019), which results in an effective batch size of 16. Baselines and Terms of ComparisonWe compare our models with the following baselines. (1) **BM25**: a baseline that only uses the question as the query and retrieves passages using BM25. (2) **DenseBERT**: a dense retrieval baselines similar to DPR (Krizhevsky et al., 2014) that uses questions as queries and is trained using the same training objective as ours. (3) **BERT-LXMERT**: a state-of-the-art asymmetric dense retrieval model (Zhou et al., 2019) that uses the exact same architecture as we introduced in Section 4.2. This baseline is basically our model but without being pre-trained using the generated data. EvaluationFollowing Qu et al. (Zhou et al., 2019), we use mean reciprocal rank (MRR) and precision with ranking cut-off of 5 as evaluation metrics. We use the two-tailed paired t-test with Bonferroni correction as the statistical significance test (\(p<0.05\)). Since the OK-VQA dataset does not provide relevant judgment for passages, we assume a passage is identified to be positive if it contains an exact match (case insensitive) of a ground truth answer (Zhou et al., 2019). 
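Under this protocol the per-query metrics reduce to a few lines; the sketch below reflects our reading of the evaluation described above (exact, case-insensitive answer matching and a ranking cut-off of 5).

```python
def is_positive(passage, answers):
    """A passage is relevant if it contains an exact (case-insensitive) match of any gold answer."""
    p = passage.lower()
    return any(a.lower() in p for a in answers)

def mrr_and_precision_at_5(ranked_passages, answers):
    """MRR@5 and P@5 for one query; averaging over queries gives the reported numbers."""
    rels = [is_positive(p, answers) for p in ranked_passages[:5]]
    mrr = next((1.0 / (i + 1) for i, r in enumerate(rels) if r), 0.0)
    precision = sum(rels) / 5.0
    return mrr, precision
```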
### Results

This section presents our experimental results and analyzes the model performance to better demonstrate the impact of pre-training on OK-VQA performance.

_Zero-Shot Performance_. In the first set of experiments, we evaluate the zero-shot capabilities of the models. In this setting, the BM25 baseline uses the default parameters (\(k_{1}=1.2,b=0.75\)), and the baselines with BERT and LXMERT use the parameters learned through their (vision-) language model pre-training. The 'Pre-trained BERT-LXMERT' model is trained on the data that we automatically generated. The results are reported in Table 1. Among the baselines, BM25 demonstrates the strongest zero-shot performance. This suggests that the initialized parameters of BERT and LXMERT are not suitable for retrieval tasks, which is in line with findings by previous work on text retrieval (Krizhevsky et al., 2014; Li et al., 2019). The proposed pre-training pipeline significantly outperforms all the baselines and leads to 33% and 40.2% improvements in MRR@5 and P@5 compared to BM25, respectively.

_Supervised Performance_. In the second set of experiments, we fine-tune the same models on the OK-VQA training set. All neural models use the same training procedure. The BM25 parameters are tuned through exhaustive grid search, where \(k_{1}\in[0.5,1.5]\) and \(b\in[0.2,0.8]\) with a step size of 0.2; the model with the best MRR@5 on the validation set is selected. The selected parameters are \(k_{1}=1.1,b=0.4\). In Table 1, we observe that, as expected, all neural models largely benefit from fine-tuning on the OK-VQA training set and substantially outperform BM25. Fine-tuning BERT-LXMERT that is pre-trained using the proposed data generation pipeline leads to 21.2% MRR@5 and 26.9% P@5 improvements compared to BERT-LXMERT without pre-training (i.e., the current SOTA model on passage retrieval for OK-VQA (Zhu et al., 2020)).

\begin{table}
\begin{tabular}{l|cc|cc|cc|cc}
 & \multicolumn{4}{c|}{Zero-Shot Performance} & \multicolumn{4}{c}{Supervised Performance} \\
**Model** & \multicolumn{2}{c|}{**Validation**} & \multicolumn{2}{c|}{**Test**} & \multicolumn{2}{c|}{**Validation**} & \multicolumn{2}{c}{**Test**} \\
 & **MRR@5** & **P@5** & **MRR@5** & **P@5** & **MRR@5** & **P@5** & **MRR@5** & **P@5** \\ \hline
BM25 & 0.2450 & 0.1668 & 0.2528 & 0.1642 & 0.2565 & 0.1772 & 0.2637 & 0.1755 \\
Dense-BERT & 0.0709 & 0.0382 & 0.0726 & 0.0375 & 0.4555 & 0.3155 & 0.4325 & 0.3058 \\
BERT-LXMERT & 0.0744 & 0.0376 & 0.0665 & 0.0345 & 0.4704 & 0.3364 & 0.4526 & 0.3329 \\ \hline
Pre-trained BERT-LXMERT & \(\mathbf{0.3716^{*}}\) & \(\mathbf{0.2629^{*}}\) & \(\mathbf{0.3364^{*}}\) & \(\mathbf{0.2303^{*}}\) & \(\mathbf{0.5557^{*}}\) & \(\mathbf{0.4195^{*}}\) & \(\mathbf{0.5603^{*}}\) & \(\mathbf{0.4274^{*}}\) \\
\% rel. imp. w.r.t. the best baseline & 51.6\% \(\uparrow\) & 57.6\% \(\uparrow\) & 33.0\% \(\uparrow\) & 40.2\% \(\uparrow\) & 17.5\% \(\uparrow\) & 20.4\% \(\uparrow\) & 21.2\% \(\uparrow\) & 26.9\% \(\uparrow\) \\
\end{tabular}
\end{table} Table 1. Passage retrieval performance on the OK-VQA dataset (Zhou et al., 2019). The superscript \({}^{*}\) denotes statistically significant improvement compared to all baselines based on a two-tailed paired t-test with Bonferroni correction (\(p<0.05\)).

Figure 3. Learning curve for ‘Pre-trained BERT-LXMERT’ on the OK-VQA test set. The orange line shows the performance of the BERT-LXMERT model without pre-training that is fine-tuned on 100% of the supervised OK-VQA training data.

_Learning Curve_.
We hypothesize that the proposed pre-training pipeline reduces the need for large-scale supervised training data, which is often difficult or expensive to obtain. To validate this hypothesis, we fine-tuned our pre-trained model using 25%, 50%, 75%, and 100% of the supervised data randomly sampled from the OK-VQA training set. The results are plotted in Figure 3. For the sake of space, only the MRR@5 performance on the OK-VQA test set is reported; the other curves follow a similar behavior. The dashed orange line in the figure shows the performance of the BERT-LXMERT model without pre-training that is trained on 100% of the OK-VQA training set. The curve demonstrates that our pre-trained model outperforms the model without pre-training by only observing 25% of the supervised training data. Moreover, the performance of the pre-trained model becomes relatively stable after observing 50% of the supervised data, which shows that pre-training retrieval models for OK-VQA reduces the need for supervised data.

_Result Analysis_. To gain a deeper understanding of the impact of the proposed pre-training on OK-VQA tasks, Figure 4 presents the MRR@5 obtained by the fine-tuned BERT-LXMERT model with and without pre-training for each question category. The categories are borrowed from the OK-VQA (Zhu et al., 2020) dataset.6 We observe that pre-training improves the OK-VQA performance on all question categories; however, the improvements are not uniform across categories. It can be seen that the highest improvement is achieved for the "sports & recreation" category (56.6%), while the lowest improvement is observed for the "weather & climate" category (4.86%). The reason is that we use the MS COCO dataset (Zhu et al., 2020) as the image collection for the automatic creation of our pre-training data, and MS COCO does not include any category related to "Weather & Climate," "Science & Tech," or "Geography & History & Language & Culture." As a result, the extent of improvement is smaller for these categories in the OK-VQA dataset. On the other hand, a considerable proportion of images in the MS COCO dataset are related to categories such as "Sport & Recreation," "People & Everyday Life," and "Plants & Animals," which observe the highest improvements. This analysis demonstrates that the nature of the data included in the automatic data creation pipeline directly impacts the downstream OK-VQA performance, and that including images from underrepresented categories is likely to further improve the performance.

Footnote 6: The categories include "Plants & Animals (PA)," "Science & Tech (ST)," "Sport & Recreation (SR)," "Geography & History & Language & Culture (GHLC)," "Brands & Companies & Products (BCP)," "Vehicles & Transportation (VT)," "Cooking & Food (CT)," "Weather & Climate (WC)," "People & Everyday Life (PEL)," "Objects & Material & Clothing (OMAC)," and "Other (O)."

## 6. Conclusions and Future Work

This paper introduced a pipeline for pre-training dense retrievers for OK-VQA tasks. The proposed pipeline started from an image collection and paired each image with a passage from a knowledge source. Then, a question generation model was used to generate a question for each potential answer extracted from the passage. Finally, low-quality questions were filtered out, and negative samples were selected for the remaining questions. Our experiments suggest statistically significant improvements compared to state-of-the-art asymmetric dense retrieval performance for OK-VQA tasks.
Even though our results show consistent improvements on the OK-VQA dataset, there might be other kinds of knowledge-intensive VQA datasets, such as FVQA (Zhu et al., 2020), for which this pre-training approach would need to be revised. In the future, we intend to extend our data generation pipeline to other knowledge-intensive vision-language tasks. This paper also limits multi-modality to multi-modal queries and textual passages; removing these limitations can be investigated in future work.

## Acknowledgment

This work was supported in part by the Center for Intelligent Information Retrieval, in part by Lowes, and in part by NSF grant #2106282. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2307.07884
Preconditioning techniques for generalized Sylvester matrix equations
Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, as they now arise in an increasingly large number of applications. In this work, we consider algebraic parameter-free preconditioning techniques for the iterative solution of generalized multiterm Sylvester equations. They consist in constructing low Kronecker rank approximations of either the operator itself or its inverse. While the former requires solving standard Sylvester equations in each iteration, the latter only requires matrix-matrix multiplications, which are highly optimized on modern computer architectures. Moreover, low Kronecker rank approximate inverses can be easily combined with sparse approximate inverse techniques, thereby enhancing their performance with little or no damage to their effectiveness.
Yannis Voet
2023-07-15T21:07:42Z
http://arxiv.org/abs/2307.07884v2
# Preconditioning techniques for generalized Sylvester matrix equations ###### Abstract Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, as they recently arose in stochastic Galerkin finite element discretizations and isogeometric analysis. In this work, we consider preconditioning techniques for the iterative solution of generalized Sylvester equations. They consist in constructing low Kronecker rank approximations of either the operator itself or its inverse. In the first case, applying the preconditioning operator requires solving standard Sylvester equations, for which very efficient solution methods have already been proposed. In the second case, applying the preconditioning operator only requires computing matrix-matrix multiplications, which are also highly optimized on modern computer architectures. Moreover, low Kronecker rank approximate inverses can be easily combined with sparse approximate inverse techniques, thereby further speeding up their application with little or no damage to their preconditioning capability. **Keywords**: Generalized Sylvester equations, Low Kronecker rank, Nearest Kronecker product, Alternating least squares, Sparse approximate inverse, Isogeometric analysis. **2020 MSC**: 65F08, 65F45, 65F50. ## 1 Introduction We consider the numerical solution of generalized Sylvester matrix equations \[\sum_{k=1}^{r}B_{k}XA_{k}^{T}=C \tag{1.1}\] where \(A_{k}\in\mathbb{R}^{n\times n}\), \(B_{k}\in\mathbb{R}^{m\times m}\) for all \(k=1,\ldots,r\) and \(X,C\in\mathbb{R}^{m\times n}\). Generalized Sylvester equations are at the forefront of many applications in scientific computing. Some important special cases have attracted considerable interest and are listed below. * The generalized Sylvester equation (for \(r=1\)) \[B_{1}XA_{1}^{T}=C\] (1.2) is its simplest instance. It appears in explicit time integration schemes for tensorized finite element discretizations of certain time dependent partial differential equations (PDEs) in two spatial dimensions [1, 2, 3]. If \(A_{1}\) and \(B_{1}\) are invertible, the solution of (1.2) is particularly simple since \(X=B_{1}^{-1}CA_{1}^{-T}\) only requires solving \(n\) linear systems with \(B_{1}\) and \(m\) linear systems with \(A_{1}\). If \(A_{1}=I_{n}\), (1.2) reduces to a standard linear system with multiple right-hand sides. * The generalized Sylvester equation (for \(r=2\)) \[B_{1}XA_{1}^{T}+B_{2}XA_{2}^{T}=C\] (1.3) stems, for instance, from tensorized finite element discretizations of certain differential operators on structured domains in two spatial dimensions [4, 5] and generalized eigenproblems [6]. Some special cases of (1.3) are: The (standard) Sylvester equation (obtained for \(A_{1}=I_{n}\) and \(B_{2}=I_{m}\)), which appears in various applications, including block-diagonalization of block triangular matrices [7, 8], finite difference discretizations of certain PDEs [9, 10], and eigenvalue problems [8, 11]. Sylvester equations are also the main building block for iteratively solving more complicated nonlinear matrix equations, as they arise for computing invariant subspaces [8]. Although (1.3) may sometimes be transformed to a standard Sylvester equation, this transformation is neither always possible nor desirable if the coefficient matrices are ill-conditioned. 
* The Lyapunov equation (obtained for \(A_{2}=B_{1}\) and \(A_{1}=B_{2}=I_{n}\)), which is itself a particular case of the (standard) Sylvester equation and arises, for instance, in the stability analysis of dynamical systems [12] and control theory [13]. * The Stein equation (obtained for \(A_{1}=I_{n}\) and \(B_{1}=I_{m}\)), also known as the discrete-time Sylvester equation, appears in the analysis of dynamical systems [14] and in the stage equations of implicit Runge-Kutta methods [15]. Solution techniques for generalized Sylvester equations most critically depend on the number of terms \(r\) of the equation. While the case \(r=1\) is straightforward, the case \(r=2\) is already significantly more challenging and an impressive collection of methods have been proposed. Efficient solvers exploit the structure of the equation and its solution, including the sparsity and relative size of the coefficient matrices [16]. Since the pioneering work of Bartels and Stewart [17], many different solution techniques have emerged including alternating direction implicit (ADI) iteration [18], recursive blocked algorithms [19], low-rank and sparse data formats [20] and tensorized Krylov subspace methods [10] to name just a few. An exhaustive list of methods is beyond the scope of this article and we instead refer to [16] and the references therein for an overview. While the special matrix equations listed above account for a vast number of publications, the more general equation (1.1) has received much less attention. The practical utility of these equations may simply explain the difference: while numerous applications have driven the development of solution techniques for (standard) Sylvester and Lyapunov equations, the generalized Sylvester equation (1.1) was far less common and has mainly been considered of theoretical interest [16, 7]. However, the gap is quickly being filled as recent developments in stochastic Galerkin finite element methods [21, 22] and isogeometric analysis [23] now lead to solving generalized Sylvester equations with \(r>2\). Unfortunately, the vast majority of the solution techniques proposed for \(r=2\) are not applicable to \(r>2\). The main reason is that solution techniques for \(r=2\) rely on results for joint diagonalization (or triangularization) of matrix pairs such as generalized eigendecompositions (or Schur decompositions), which are also the basis for existence and uniqueness results [24]. These techniques generally do not extend to sequences of matrices \(\{A_{k}\}_{k=1}^{r}\) and \(\{B_{k}\}_{k=1}^{r}\) (with \(r>2\)), unless the elements of these sequences are related in some special way (e.g. they are powers of one same matrix [25] or are a commuting family of symmetric matrices). Therefore, most solution techniques for (1.1) have instead focused on solving the equivalent linear system [26, Lemma 4.3.1] \[\left(\sum_{k=1}^{r}A_{k}\otimes B_{k}\right)\mathbf{x}=\mathbf{c}. \tag{1.4}\] with \(\mathbf{x}=\operatorname{vec}(X)\) and \(\mathbf{c}=\operatorname{vec}(C)\). We recall that the vectorization of a matrix \(A\), denoted \(\operatorname{vec}(A)\), stacks the columns of \(A\) on top of each other. The transformation to (1.4) shows that solving the generalized Sylvester equation (1.1) is equivalent to solving a special linear system where the coefficient matrix is the sum of \(r\) Kronecker products. Such a matrix is said to have Kronecker rank \(r\) if \(r\) is the smallest number of terms in the sum. 
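For illustration, the equivalence between (1.1) and (1.4) is easy to check numerically. The following small NumPy sketch (with randomly generated factor matrices, chosen here purely for demonstration) verifies that applying the Kronecker-structured matrix to \(\operatorname{vec}(X)\) agrees with applying the matrix operator directly; note that \(\operatorname{vec}\) stacks columns, which corresponds to Fortran-order reshaping.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 4, 3, 5
As = [rng.normal(size=(n, n)) for _ in range(r)]
Bs = [rng.normal(size=(m, m)) for _ in range(r)]
X = rng.normal(size=(m, n))

vec = lambda Z: Z.reshape(-1, order='F')          # column-stacking vectorization

# Left-hand side of (1.1): sum_k B_k X A_k^T, an m x n matrix.
C = sum(B @ X @ A.T for A, B in zip(As, Bs))

# Kronecker-structured matrix of (1.4), of size (nm) x (nm).
M = sum(np.kron(A, B) for A, B in zip(As, Bs))

assert np.allclose(M @ vec(X), vec(C))            # (1.1) and (1.4) coincide
```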
In this article, we will focus on the iterative solution of (1.4) as a way of solving generalized Sylvester equations. The rest of the article is structured as follows: In Section 2 we first recall some iterative solution techniques applicable to linear matrix equations. Similarly to iterative methods for linear systems, these methods may converge very slowly when the associated system matrix is ill-conditioned, creating a formidable strain on memory resources. Therefore, in Sections 3 and 4, we exploit the underlying Kronecker structure of the system matrix to design efficient and robust preconditioning strategies. These strategies aim at finding low Kronecker rank approximations of the operator itself (Section 3) or its inverse (Section 4). Furthermore, if the inverse admits a good sparse approximation, we propose to combine our strategies with sparse approximate inverse techniques to construct low Kronecker rank sparse approximate inverses. Section 5 gathers a few numerical experiments illustrating the effectiveness of our preconditioning strategies for solving generalized Sylvester equations stemming from isogeometric analysis, a tensorized finite element method. Finally, conclusions are drawn in Section 6.

## 2 Iterative solution techniques for matrix equations

Since direct solution methods are generally not applicable to (1.1), we investigate its iterative solution. For this purpose, we denote by \(\mathcal{M}\colon\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\) the linear operator defined as \[\mathcal{M}(X)=\sum_{k=1}^{r}B_{k}XA_{k}^{T}. \tag{2.1}\] This operator has a Kronecker structured matrix representation given by \[M=\sum_{k=1}^{r}A_{k}\otimes B_{k}. \tag{2.2}\] We will generally use curly letters for linear operators and straight letters for their associated matrix. Since (1.1) and (1.4) represent the same set of equations but written differently, iterative solution techniques for solving (1.1) are specialized versions of well-known iterative methods for solving linear systems. The global GMRES (Gl-GMRES) method is one of them. Originally proposed for solving linear systems with multiple right-hand sides, the method found natural applications for solving linear matrix equations [27]. As a matter of fact, the idea was already laid out in [28] several years earlier. As the name suggests, the Gl-GMRES method is an adaptation of the famous GMRES method [29] for linear systems whose coefficient matrix is expressed as a sum of Kronecker products. It exploits the fact that \[Y=\mathcal{M}(X)\iff\mathbf{y}=M\mathbf{x} \tag{2.3}\] with \(\mathbf{x}=\operatorname{vec}(X)\), \(\mathbf{y}=\operatorname{vec}(Y)\) together with \(\mathcal{M}\) and \(M\) defined in (2.1) and (2.2), respectively. Instead of vector Krylov subspaces, the Gl-GMRES method builds the matrix Krylov subspace [30] \[\mathcal{K}_{k}(\mathcal{M},V)=\operatorname{span}(V,\mathcal{M}(V),\ldots, \mathcal{M}^{k-1}(V))\] where \(V\in\mathbb{R}^{m\times n}\) and \(\mathcal{M}^{i}(V)\) is defined recursively as \(\mathcal{M}^{i}(V)=\mathcal{M}(\mathcal{M}^{i-1}(V))\). In a standard GMRES method, fast matrix-vector multiplications with \(M=\sum_{k=1}^{r}A_{k}\otimes B_{k}\) would evidently also use the connection (2.3). However, the Gl-GMRES method works with matrices all along the process and avoids repeatedly reshaping vectors to matrices and back. Therefore, it is well-suited for iteratively solving the generalized Sylvester equation (1.1).
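As an illustration of the mechanism just described, a minimal, unpreconditioned Gl-GMRES sketch is given below: it orthonormalizes the matrix Krylov basis with respect to the Frobenius inner product and solves the small least squares problem for the coefficients. This is only a sketch of the idea (the right preconditioned variant used in this work is spelled out as Algorithm 2.1 below), not the implementation of the paper; the stopping rule and function names are our own.

```python
import numpy as np

def gl_gmres(apply_M, C, X0, max_iter=50, tol=1e-10):
    """Unpreconditioned global GMRES sketch for M(X) = C.

    apply_M : callable X -> sum_k B_k @ X @ A_k.T (never forms the Kronecker matrix)
    C, X0   : right-hand side and initial guess, both m x n matrices
    """
    R0 = C - apply_M(X0)
    beta = np.linalg.norm(R0, 'fro')
    U = [R0 / beta]                                  # Frobenius-orthonormal basis matrices
    H = np.zeros((max_iter + 1, max_iter))
    for j in range(max_iter):
        W = apply_M(U[j])
        for i in range(j + 1):                       # modified Gram-Schmidt w.r.t. <.,.>_F
            H[i, j] = np.sum(U[i] * W)
            W = W - H[i, j] * U[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)   # equals ||C - M(X_j)||_F
        if H[j + 1, j] < 1e-14 or res < tol * beta:        # breakdown or convergence
            break
        U.append(W / H[j + 1, j])
    return X0 + sum(yi * Ui for yi, Ui in zip(y, U))

# Example operator: apply_M = lambda X: sum(B @ X @ A.T for A, B in zip(As, Bs))
```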
Clearly, other Krylov subspace methods such as the conjugate gradient method (CG) [31, 32] and the biconjugate gradient stabilized method (Bi-CGSTAB) [33] can also be adapted for solving linear matrix equations. However, since Gl-GMRES is mathematically equivalent to GMRES, any ill-conditioning of \(M\) impedes on its convergence. Preconditioning techniques are commonly employed for speeding up the convergence of iterative methods. In the context of matrix equations, they take the form of a preconditioning operator \(\mathcal{P}\colon\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\). Preconditioning can be straightforwardly incorporated in the Gl-GMRES method by adapting preconditioned versions of the standard GMRES method [32]. Algorithm 2.1, for instance, presents the right preconditioned variant of the Gl-GMRES method. Preconditioning techniques for Sylvester and Lyapunov equations were already considered in [30, 28] but cannot be extended to \(r>2\) since they rely on the special structure of the equation.

```
Input: Linear operator \(\mathcal{M}\colon\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\), preconditioning operator \(\mathcal{P}\colon\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}\), right-hand side matrix \(C\in\mathbb{R}^{m\times n}\), starting matrix \(X_{0}\in\mathbb{R}^{m\times n}\)
Output: Approximate solution \(X_{k}\) to \(\mathcal{M}(X)=C\)
1: Set \(R_{0}=C-\mathcal{M}(X_{0})\)
2: Set \(\beta_{0}=\|R_{0}\|_{F}\), \(U_{1}=R_{0}/\beta_{0}\)
3: for \(j=1,2,\ldots,k\) do
4:   Compute \(W=\mathcal{M}(\mathcal{P}(U_{j}))\)
5:   for \(i=1,\ldots,j\) do
6:     Compute \(h_{ij}=\langle U_{i},W\rangle_{F}\)
7:     Compute \(W=W-h_{ij}U_{i}\)
8:   end for
9:   Set \(h_{j+1,j}=\|W\|_{F}\)
10:  Set \(U_{j+1}=W/h_{j+1,j}\)
11:  Find \(\mathbf{y}_{j}=\operatorname{argmin}_{\mathbf{y}\in\mathbb{R}^{j}}\|\beta_{0}\mathbf{e}_{1}-\tilde{H}_{j}\mathbf{y}\|_{2}\) \(\triangleright\) \(\tilde{H}_{j}\in\mathbb{R}^{(j+1)\times j}\)
12:  Set \(\beta_{j}=\|\beta_{0}\mathbf{e}_{1}-\tilde{H}_{j}\mathbf{y}_{j}\|_{2}\) \(\triangleright\) \(\beta_{j}=\|C-\mathcal{M}(X_{j})\|_{F}\)
13: end for
14: Return \(X_{k}=X_{0}+\mathcal{P}(\sum_{j=1}^{k}y_{j}U_{j})\) \(\triangleright\) \(\mathbf{y}_{k}^{T}=(y_{1},\ldots,y_{k})\)
```
**Algorithm 2.1** Right preconditioned Gl-GMRES

If the number of iterations \(k\) remains relatively small, applying the operators \(\mathcal{M}\) and \(\mathcal{P}\) in line 4 is the most expensive step in Algorithm 2.1. Assuming that all factor matrices \(A_{k}\) and \(B_{k}\) are dense for \(k=1,\ldots,r\), storing them requires \(O(r(n^{2}+m^{2}))\) while applying \(\mathcal{M}(X)\) requires \(O(r(n^{2}m+nm^{2}))\) operations. In comparison, when \(M\) is formed explicitly, matrix-vector multiplications require \(O(n^{2}m^{2})\) operations, in addition to the prohibitive storage requirements amounting to \(O(n^{2}m^{2})\). The cost of applying the preconditioning operator will depend on its definition and this will be the focus of the next few sections.

## 3 Nearest Kronecker product preconditioner

In Section 1, we had noted that the solution of a generalized Sylvester equation can be computed (relatively) easily when \(r\leq 2\). Indeed, for \(r=2\) the equation may often be reformulated as a standard Sylvester equation for which there exist dedicated solvers while for \(r=1\) the equation reduces to a very simple matrix equation, which can be solved straightforwardly.
Therefore, a first preconditioning strategy could rely on finding the best Kronecker rank 1 or 2 approximation of \(M=\sum_{k=1}^{r}A_{k}\otimes B_{k}\) and using it as a preconditioning operator. Kronecker rank 1 preconditioners have already been proposed for many different applications including image processing [34], Markov chains [35, 36, 37], stochastic Galerkin [22] and tensorized [1, 2, 38] finite element methods. Extensions to Kronecker rank 2 preconditioners have been considered in [5, 4] but not for preconditioning generalized Sylvester equations. The problem of finding the best Kronecker product approximation of a matrix (not necessarily expressed as a sum of Kronecker products) was first investigated by Van Loan and Pitsianis [39]. A more modern presentation followed in [40]. We adopt the same general framework for the time being and later specialize it to our problem. For the best Kronecker rank 1 approximation of a matrix \(M\in\mathbb{R}^{nm\times nm}\), factor matrices \(Y\in\mathbb{R}^{n\times n}\) and \(Z\in\mathbb{R}^{m\times m}\) are sought such that \(\phi_{M}(Y,Z)=\|M-Y\otimes Z\|_{F}\) is minimized. Van Loan and Pitsianis observed that both the Kronecker product \(Y\otimes Z\) and \(\operatorname{vec}(Y)\operatorname{vec}(Z)^{T}\) form all the products \(y_{ij}z_{kl}\) for \(i,j=1,\ldots,n\) and \(k,l=1,\ldots,m\) but at different locations. Thus, there exists a linear mapping \(\mathcal{R}\colon\mathbb{R}^{nm\times nm}\to\mathbb{R}^{n^{2}\times m^{2}}\) (which they called _rearrangement_) such that \(\mathcal{R}(Y\otimes Z)=\operatorname{vec}(Y)\operatorname{vec}(Z)^{T}\). This mapping is defined explicitly by considering a block matrix \(A\) where \(A_{ij}\in\mathbb{R}^{m\times m}\) for \(i,j=1,\ldots,n\). Then, by definition \[A=\begin{pmatrix}A_{11}&\cdots&A_{1n}\\ \vdots&\ddots&\vdots\\ A_{n1}&\cdots&A_{nn}\end{pmatrix}\qquad\mathcal{R}(A)=\begin{pmatrix} \operatorname{vec}(A_{11})^{T}\\ \operatorname{vec}(A_{21})^{T}\\ \vdots\\ \operatorname{vec}(A_{nn})^{T}\end{pmatrix}.\] By construction, for a matrix \(A=Y\otimes Z\), \[Y\otimes Z=\begin{pmatrix}y_{11}Z&\cdots&y_{1n}Z\\ \vdots&\ddots&\vdots\\ y_{n1}Z&\cdots&y_{nn}Z\end{pmatrix}\qquad\mathcal{R}(Y\otimes Z)=\begin{pmatrix} y_{11}\operatorname{vec}(Z)^{T}\\ y_{21}\operatorname{vec}(Z)^{T}\\ \vdots\\ y_{nn}\operatorname{vec}(Z)^{T}\end{pmatrix}=\operatorname{vec}(Y) \operatorname{vec}(Z)^{T}.\] More generally, since the vectorization operator is linear, \[\mathcal{R}\left(\sum_{s=1}^{r}Y_{s}\otimes Z_{s}\right)=\sum_{s=1}^{r} \operatorname{vec}(Y_{s})\operatorname{vec}(Z_{s})^{T}.\] Therefore, \(\mathcal{R}\) transforms a Kronecker rank \(r\) matrix into a rank \(r\) matrix. Since rearranging the entries of a matrix does not change its Frobenius norm, the minimization problem becomes \[\min\phi_{M}(Y,Z)=\min\|M-Y\otimes Z\|_{F}=\min\|\mathcal{R}(M)-\mathcal{R}(Y \otimes Z)\|_{F}=\min\|\mathcal{R}(M)-\operatorname{vec}(Y)\operatorname{vec} (Z)^{T}\|_{F}.\] Thus, finding the best factor matrices \(Y\) and \(Z\) is equivalent to finding the best rank 1 approximation of \(\mathcal{R}(M)\). More generally, finding the best factor matrices \(Y_{s}\) and \(Z_{s}\) for \(s=1,\ldots,q\) defining the best Kronecker rank \(q\) approximation is equivalent to finding the best rank \(q\) approximation of \(\mathcal{R}(M)\), which can be conveniently done using a truncated singular value decomposition (SVD). These computations are particularly cheap in our context given that \(\mathcal{R}(M)\) is already in low-rank format.
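As an illustration, this construction can be sketched in a few lines of NumPy (the procedure is formalized as Algorithm 3.1 further below); variable names are ours, the factor matrices are assumed to be given as lists, and `vec` again denotes column-stacking.

```python
import numpy as np

def nkp(As, Bs, q):
    """Best Kronecker rank-q approximation of sum_k kron(A_k, B_k) in the Frobenius norm.

    Exploits that R(M) = V_A V_B^T is already in low-rank form: a thin QR of V_A and V_B
    followed by a small r x r SVD gives the dominant terms without ever forming M or R(M).
    """
    n, m = As[0].shape[0], Bs[0].shape[0]
    VA = np.column_stack([A.reshape(-1, order='F') for A in As])   # n^2 x r, columns vec(A_k)
    VB = np.column_stack([B.reshape(-1, order='F') for B in Bs])   # m^2 x r, columns vec(B_k)
    QA, RA = np.linalg.qr(VA)                                      # thin QR factorizations
    QB, RB = np.linalg.qr(VB)
    Ut, sig, Vt = np.linalg.svd(RA @ RB.T)                         # small r x r SVD
    VY = (QA @ Ut) * np.sqrt(sig)                                  # columns are scaled vec(Y_s)
    VZ = (QB @ Vt.T) * np.sqrt(sig)                                # columns are scaled vec(Z_s)
    Ys = [VY[:, s].reshape(n, n, order='F') for s in range(q)]
    Zs = [VZ[:, s].reshape(m, m, order='F') for s in range(q)]
    return Ys, Zs
```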
Note that applying the inverse operator \(\mathcal{R}^{-1}\) to the SVD of \(\mathcal{R}(M)\) makes it possible to express \(M\) as \[M=\sum_{k=1}^{r}\sigma_{k}(U_{k}\otimes V_{k}) \tag{3.1}\] where \(U_{k}\) and \(V_{k}\) are reshapings of the \(k\)th left and right singular vectors of \(\mathcal{R}(M)\), respectively, and \(\sigma_{k}\) are the singular values for \(k=1,\ldots,r\). The orthogonality of the left and right singular vectors ensures that \(\langle U_{i},U_{j}\rangle_{F}=\delta_{ij}\), \(\langle V_{i},V_{j}\rangle_{F}=\delta_{ij}\) and \(\langle M,(U_{i}\otimes V_{i})\rangle_{F}=\sigma_{i}\), where \(\langle.,.\rangle_{F}\) denotes the Frobenius inner product. The best Kronecker rank \(q\) approximation \(P\) then simply consists in retaining the first \(q\) terms of the sum in (3.1) and the approximation error is then given by the tail of the singular values \[\|M-P\|_{F}^{2}=\sum_{k=q+1}^{r}\sigma_{k}^{2}. \tag{3.2}\] The procedure is summarized in Algorithm 3.1 and is referred to as the SVD approach. We emphasize that we only consider \(q\leq 2\) for constructing a practical preconditioner. For \(q=1\), the resulting preconditioner is commonly referred to as the nearest Kronecker product preconditioner (NKP) [35, 38]. We will abusively use the same terminology for \(q=2\).

```
Input: Factor matrices \(\{A_{k}\}_{k=1}^{r}\subset\mathbb{R}^{n\times n}\) and \(\{B_{k}\}_{k=1}^{r}\subset\mathbb{R}^{m\times m}\), Kronecker rank \(q\leq r\)
Output: Factor matrices \(Y_{s}\) and \(Z_{s}\) for \(s=1,\ldots,q\) such that \(\sum_{s=1}^{q}Y_{s}\otimes Z_{s}\approx\sum_{k=1}^{r}A_{k}\otimes B_{k}\)
1: Set \(V_{A}=[\operatorname{vec}(A_{1}),\ldots,\operatorname{vec}(A_{r})]\)
2: Set \(V_{B}=[\operatorname{vec}(B_{1}),\ldots,\operatorname{vec}(B_{r})]\)
3: Compute the thin QR factorization \(V_{A}=Q_{A}R_{A}\)
4: Compute the thin QR factorization \(V_{B}=Q_{B}R_{B}\)
5: Compute the SVD \(R_{A}R_{B}^{T}=\tilde{U}\Sigma\tilde{V}^{T}\) \(\triangleright\) \(\Sigma=\operatorname{diag}(\sigma_{1},\ldots,\sigma_{r})\)
6: Set \(V_{Y}=Q_{A}\tilde{U}\Sigma^{1/2}\) \(\triangleright\) \(V_{Y}=[\operatorname{vec}(Y_{1}),\ldots,\operatorname{vec}(Y_{r})]\)
7: Set \(V_{Z}=Q_{B}\tilde{V}\Sigma^{1/2}\) \(\triangleright\) \(V_{Z}=[\operatorname{vec}(Z_{1}),\ldots,\operatorname{vec}(Z_{r})]\)
8: Return and reshape the first \(q\) columns of \(V_{Y}\) and \(V_{Z}\).
```
**Algorithm 3.1** Best Kronecker rank \(q\) approximation

The SVD approach to the best Kronecker product approximation in the Frobenius norm is well established in the numerical linear algebra community. However, Van Loan and Pitsianis also proposed an alternating least squares approach, which in our context might be cheaper. We both specialize their strategy to Kronecker rank \(r\) matrices and extend it to Kronecker rank \(q\) approximations. Adopting the same notations as in Algorithm 3.1 and employing the reordering \(\mathcal{R}\), we obtain \[\left\|\sum_{k=1}^{r}A_{k}\otimes B_{k}-\sum_{s=1}^{q}Y_{s}\otimes Z_{s} \right\|_{F}=\left\|\sum_{k=1}^{r}\operatorname{vec}(A_{k})\operatorname{vec} (B_{k})^{T}-\sum_{s=1}^{q}\operatorname{vec}(Y_{s})\operatorname{vec}(Z_{s})^ {T}\right\|_{F}=\|V_{A}V_{B}^{T}-V_{Y}V_{Z}^{T}\|_{F} \tag{3.3}\] If the (linearly independent) matrices \(Z_{s}\) are fixed for \(s=1,\ldots,q\), the optimal solution of the least squares problem (3.3) is given by \[V_{Y}=V_{A}V_{B}^{T}V_{Z}(V_{Z}^{T}V_{Z})^{-1}.
\tag{3.4}\] If instead all matrices \(Y_{s}\) are fixed for \(s=1,\ldots,q\), the optimal solution of (3.3) is given by the similar looking expression \[V_{Z}=V_{B}V_{A}^{T}V_{Y}(V_{Y}^{T}V_{Y})^{-1}. \tag{3.5}\] Equations (3.4) and (3.5) reveal that all factor matrices \(Y_{s}\) and \(Z_{s}\) for \(s=1,\ldots,q\) are linear combinations of \(A_{k}\) and \(B_{k}\), respectively, which could already be inferred from the SVD approach. This finding was already stated in [39] and proved in [35, Theorem 4.1] for \(q=1\) and [38, Theorem 4.2] for arbitrary \(q\). In particular, for \(q=1\), after some reshaping, equations (3.4) and (3.5) reduce to \[Y=\sum_{k=1}^{r}\frac{\langle B_{k},Z\rangle_{F}}{\langle Z,Z\rangle_{F}}A_{k} \quad\text{and}\quad Z=\sum_{k=1}^{r}\frac{\langle A_{k},Y\rangle_{F}}{ \langle Y,Y\rangle_{F}}B_{k},\] respectively, which can also be deduced from [39, Theorem 4.1]. Our derivations are summarized in Algorithm 3.2. The norm of the residual is used as stopping criterion in the alternating least squares algorithm. It can be cheaply evaluated without forming the Kronecker products explicitly since \[\|V_{A}V_{B}^{T}-V_{Y}V_{Z}^{T}\|_{F}^{2} =\|V_{A}V_{B}^{T}\|_{F}^{2}-2\langle V_{A}V_{B}^{T},V_{Y}V_{Z}^{T} \rangle_{F}+\|V_{Y}V_{Z}^{T}\|_{F}^{2}\] \[=\langle V_{A}^{T}V_{A},V_{B}^{T}V_{B}\rangle_{F}-2\langle V_{A}^ {T}V_{Y},V_{B}^{T}V_{Z}\rangle_{F}+\langle V_{Y}^{T}V_{Y},V_{Z}^{T}V_{Z} \rangle_{F}.\] A more explicit expression already appeared in [35, Theorem 4.2] for \(r=2\) and \(q=1\). Our expression generalizes it to arbitrary \(r\) and \(q\). ### Complexity analysis We briefly compare the complexity of both algorithms. For Algorithm 3.1, the QR factorizations in lines 3 and 4 require about \(O(r^{2}(n^{2}+m^{2}))\) flops (if \(r\ll n,m\)) [40, 41]. Computing the SVD in line 5 only requires \(O(r^{3})\) while the matrix-matrix products in lines 6 and 7 require \(O(r^{2}(n+m))\). Thus, the computational cost is typically dominated by the QR factorizations. For Algorithm 3.2, if \(r\ll n,m\), lines 6 and 7 require about \(O(rq(n^{2}+m^{2}))\) operations. Naively recomputing the residual at each iteration in line 8 may be quite costly. Therefore, we suggest computing \(\langle V_{A}^{T}V_{A},V_{B}^{T}V_{B}\rangle_{F}\) once for \(O(r^{2}(n^{2}+m^{2}))\) flops and storing the result. The last two terms of the residual can be cheaply evaluated if intermediate computations necessary in lines 6 and 7 are stored. Therefore, for \(N\) iterations, the total cost amounts to \(O((r^{2}+Nrq)(n^{2}+m^{2}))\) and is quite similar to the SVD framework if \(N\) and \(q\) remain small. ### Theoretical results The approximation problem in the Frobenius norm is mainly motivated for computational reasons. However, it also offers some theoretical guarantees, which are summarized in this section. The next theorem first recalls a very useful result for Kronecker rank 1 approximations. **Theorem 3.1** ([39, Theorems 5.1, 5.3 and 5.8]).: Let \(M\in\mathbb{R}^{nm\times nm}\) be a block-banded, nonnegative and symmetric positive definite matrix. Then, there exists banded, nonnegative and symmetric positive definite factor matrices \(Y\) and \(Z\) such that \(\phi_{M}(Y,Z)=\|M-Y\otimes Z\|_{F}\) is minimized. Thus, the properties of \(M\) are inherited by its approximation \(Y\otimes Z\). However, not all properties of Theorem 3.1 extend to Kronecker rank \(q\geq 2\). 
Clearly, due to the orthogonality relations \(\langle U_{i},U_{j}\rangle_{F}=\delta_{ij}\), \(\langle V_{j},V_{j}\rangle_{F}=\delta_{ij}\) deduced from the SVD approach, only \(Y_{1}\) and \(Z_{1}\) are nonnegative if \(M\) is. However, other useful properties such as sparsity and symmetry are preserved. We formalize it through the following definition. **Definition 3.2** (Sparsity pattern).: The sparsity pattern of a matrix \(A\in\mathbb{R}^{n\times n}\) is the set \[\operatorname{sp}(A)=\{(i,j)\colon a_{ij}\neq 0,1\leq i,j\leq n\}\] The following lemma summarizes some useful properties shared by the SVD and alternating least squares solutions. Its proof is an obvious consequence of Algorithms 3.1 and 3.2. **Lemma 3.3**.: Let \(\sum_{s=1}^{q}Y_{s}\otimes Z_{s}\) be the Kronecker rank \(q\) approximation computed with Algorithm 3.1 or 3.2. Then, * If all \(A_{k}\) and \(B_{k}\) are symmetric, then all \(Y_{s}\) and \(Z_{s}\) also are. * The sparsity patterns of \(Y_{s}\) and \(Z_{s}\) are contained in those of \(A_{k}\) and \(B_{k}\); i.e. \[\operatorname{sp}(Y_{s})\subseteq\bigcup_{k=1}^{r}\operatorname{sp}(A_{k}), \qquad\operatorname{sp}(Z_{s})\subseteq\bigcup_{k=1}^{r}\operatorname{sp}(B_{ k})\qquad s=1,\dots,q.\] Note that the properties listed in Lemma 3.3 do not depend on the initial guesses. In this work, we are interested in computing Kronecker product approximations as a means of constructing efficient preconditioners. Therefore, we would like to connect the approximation quality to the preconditioning effectiveness. Several authors have attempted to obtain estimates for the condition number of the preconditioned system or some related measure [22, 38]. Getting descriptive estimates is surprisingly challenging and most results currently available are application specific. We present hereafter a general result, which is only satisfactory for small or moderate condition numbers of \(M\). Since the error is measured in the Frobenius norm, it naturally leads to controlling the average behavior of the eigenvalues of the preconditioned matrix. **Theorem 3.4**.: Let \(M,\tilde{M}\in\mathbb{R}^{n\times n}\) be symmetric positive definite matrices. Then, \[\frac{1}{\kappa(M)}\frac{\|M-\tilde{M}\|_{F}}{\|M\|_{F}}\leq\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(1-\frac{1}{\lambda_{i}(M,\tilde{M})}\right)^{2}}\leq\kappa (M)\frac{\|M-\tilde{M}\|_{F}}{\|M\|_{F}}\] where \(\kappa(M)=\frac{\lambda_{n}(M)}{\lambda_{1}(M)}\) is the spectral condition number of \(M\). Proof.: Consider the matrix pair \((M,\tilde{M})\). Since \(M\) and \(\tilde{M}\) are symmetric positive definite, there exists a matrix \(U\in\mathbb{R}^{n\times n}\) such that \(U^{T}MU=D\) and \(U^{T}\tilde{M}U=I\), where \(U\) is the matrix of \(\tilde{M}\)-orthonormal eigenvectors and \(D=\operatorname{diag}(\lambda_{1},\dots,\lambda_{n})\) is the diagonal matrix of positive eigenvalues [11, Theorem VI.1.15]. Now note that \[\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(1-\lambda_{i}(M,\tilde{M})\right)^{2}}= \frac{\|I-D\|_{F}}{\|I\|_{F}}=\frac{\|U^{T}(M-\tilde{M})U\|_{F}}{\|U^{T} \tilde{M}U\|_{F}}.\] Moreover, \[\|U\|_{2}^{-2}\|U^{-1}\|_{2}^{-2}\frac{\|M-\tilde{M}\|_{F}}{\|\tilde{M}\|_{F} }\leq\frac{\|U^{T}(M-\tilde{M})U\|_{F}}{\|U^{T}\tilde{M}U\|_{F}}\leq\|U\|_{2} ^{2}\|U^{-1}\|_{2}^{2}\frac{\|M-\tilde{M}\|_{F}}{\|\tilde{M}\|_{F}}.\] The quantity \(\kappa(U)^{2}=\|U\|_{2}^{2}\|U^{-1}\|_{2}^{2}\) appearing in the bounds is nothing more than the condition number of \(\tilde{M}\). 
Indeed, thanks to the normalization of the eigenvectors \(\tilde{M}=U^{-T}U^{-1}\) and \(\tilde{M}^{-1}=UU^{T}\). Consequently, \(\|U^{-1}\|_{2}^{2}=\|U^{-T}U^{-1}\|_{2}=\|\tilde{M}\|_{2}\) and \(\|U\|_{2}^{2}=\|UU^{T}\|_{2}=\|\tilde{M}^{-1}\|_{2}\). Finally, we obtain the bounds \[\frac{1}{\kappa(\tilde{M})}\frac{\|M-\tilde{M}\|_{F}}{\|\tilde{M}\|_{F}}\leq \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(1-\lambda_{i}(M,\tilde{M})\right)^{2}} \leq\kappa(\tilde{M})\frac{\|M-\tilde{M}\|_{F}}{\|\tilde{M}\|_{F}}. \tag{3.6}\] There are two issues with the previous bounds: firstly they involve \(\kappa(\tilde{M})\), which is not easy to relate back to the low-rank approximation problem. Secondly, the quantity being bounded is especially large when the eigenvalues of \((M,\tilde{M})\) are large. Yet, moderately large eigenvalues of \((M,\tilde{M})\) are not necessarily detrimental to the preconditioner's effectiveness, especially not if its eigenvalues are clustered [42]. However, small eigenvalues close to zero will severely undermine the preconditioner's effectiveness. This fact leads us to measuring instead \[\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{1}{\lambda_{i}(M,\tilde{M})} \right)^{2}}.\] Since the eigenvalues of \((\tilde{M},M)\) are the reciprocal of the eigenvalues of \((M,\tilde{M})\), the last expression is immediately recovered by simply swapping the roles of \(M\) and \(\tilde{M}\) in (3.6) and the result follows. **Remark 3.5**.: If \(\tilde{M}\) is the best Kronecker rank \(q\) approximation, the relative error in the Frobenius norm appearing in the bounds of Theorem 3.4 is directly related to the singular values since, following (3.2), \[\frac{\|M-\tilde{M}\|_{F}}{\|M\|_{F}}=\sqrt{\frac{\sum_{k=q+1}^{r}\sigma_{k}^{2 }}{\sum_{k=1}^{r}\sigma_{k}^{2}}}\leq\sqrt{\sum_{k=q+1}^{r}\left(\frac{\sigma_ {k}}{\sigma_{1}}\right)^{2}}\] where the last inequality is reasonable if the singular values are decaying rapidly. We therefore obtain the more explicit upper bound \[\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{1}{\lambda_{i}(M,\tilde{M})} \right)^{2}}\leq\kappa(M)\sqrt{\sum_{k=q+1}^{r}\left(\frac{\sigma_{k}}{\sigma _{1}}\right)^{2}}.\] The upper bound in particular depends on the ratio of singular values, which was already suspected by some authors [37, 38] but to our knowledge never formally proved. ## 4 Low Kronecker rank approximate inverse Instead of finding an approximation of the operator itself, we will now find an approximation of its inverse. Clearly, since \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) for invertible matrices \(A,B\)[26, Corollary 4.2.11], (invertible) Kronecker rank \(1\) matrices have a Kronecker rank \(1\) inverse. However, there is generally no straightforward relation between the Kronecker rank of a matrix and the Kronecker rank of its inverse for \(r\geq 2\). Although the Kronecker rank of the inverse could be much larger than the one of the matrix itself, it might be very well approximated by low Kronecker rank matrices. Indeed, it was shown in [9] that the inverse of sums of Kronecker products obtained by finite difference and finite element discretizations of model problems can be well approximated by Kronecker products of matrix exponentials (exponential sums). Unfortunately, due to the special tensor product structure, these results are limited to discretizations of idealized problems with trivial coefficients and geometries (e.g. the hypercube \(\Omega=(0,1)^{d}\)). 
Nevertheless, these insightful results indicate that it might be possible to generally approximate the inverse of an arbitrary sum of Kronecker products by a low Kronecker rank matrix. Therefore, we describe in this section a general and algebraic way of constructing such an approximation without ever forming the Kronecker product matrix explicitly. We first consider the rank \(1\) case and later extend it to rank \(q\geq 2\).

### Kronecker rank \(1\) approximate inverse

We aim at finding factor matrices \(C\in\mathbb{R}^{n\times n}\) and \(D\in\mathbb{R}^{m\times m}\) such that \(C\otimes D\approx\left(\sum_{k=1}^{r}A_{k}\otimes B_{k}\right)^{-1}\) and therefore consider the minimization problem \[\min_{C,D}\|I-\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\|_{F} \tag{4.1}\] where we have used the mixed-product property of the Kronecker product (see e.g. [26, Lemma 4.2.10]). The minimization problem is nonlinear when optimizing for \((C,D)\) simultaneously, but is linear when optimizing for \(C\) or \(D\) individually. This observation motivates an alternating optimization approach based on solving a sequence of linear least squares problems. Assume for the time being that \(C\) is fixed and \(D\) must be computed. Since any permutation or matrix reshaping is an isometry in the Frobenius norm, the block matrices \[M=\begin{pmatrix}M_{11}&\ldots&M_{1n}\\ \vdots&\ddots&\vdots\\ M_{n1}&\ldots&M_{nn}\end{pmatrix}\quad\text{and}\quad\tilde{M}=\begin{pmatrix}M_ {11}\\ M_{21}\\ \vdots\\ M_{nn}\end{pmatrix} \tag{4.2}\] have the same Frobenius norm. Applying this transformation to (4.1), we obtain \[\|I-\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\|_{F}=\|\tilde{I}-\sum_{k=1}^{r} \operatorname{vec}(A_{k}C)\otimes B_{k}D\|_{F}=\|\tilde{I}-(U\otimes I_{m}) BD\|_{F} \tag{4.3}\] where \[\tilde{I}=\begin{pmatrix}I_{m}\\ 0\\ \vdots\\ I_{m}\end{pmatrix},\quad B=\begin{pmatrix}B_{1}\\ B_{2}\\ \vdots\\ B_{r}\end{pmatrix}, \tag{4.4}\] and we have defined \(U=[\operatorname{vec}(A_{1}C),\ldots,\operatorname{vec}(A_{r}C)]\in\mathbb{R} ^{n^{2}\times r}\). Minimizing the Frobenius norm in (4.3) for the matrix \(D\) is indeed equivalent to solving a linear least squares problem for each column of \(D\) with coefficient matrix \(\mathcal{B}=(U\otimes I_{m})B=\sum_{k=1}^{r}\operatorname{vec}(A_{k}C)\otimes B _{k}\) of size \(mn^{2}\times m\). For obvious storage reasons, we will never form this matrix explicitly (which would be as bad as forming the Kronecker product explicitly). Despite potential conditioning and stability issues, forming and solving the normal equations instead is very appealing because of its ability to compress large least squares problems into much smaller linear systems. Indeed, \[\mathcal{B}^{T}\mathcal{B}=B^{T}(U^{T}U\otimes I_{m})B=\sum_{k,l=1}^{r} \operatorname{vec}(A_{k}C)^{T}\operatorname{vec}(A_{l}C)B_{k}^{T}B_{l}=\sum_{ k,l=1}^{r}\beta_{kl}B_{k}^{T}B_{l}\] with \(\beta_{kl}=\operatorname{vec}(A_{k}C)^{T}\operatorname{vec}(A_{l}C)\in \mathbb{R}\) for \(k,l=1,\ldots,r\). Therefore, \(\mathcal{B}^{T}\mathcal{B}\) has size \(m\times m\), independently of the Kronecker rank \(r\). The right-hand side of the normal equations is \(\mathcal{B}^{T}\tilde{I}\). Thanks to the structure of \(\tilde{I}\), the computation of this term can be drastically simplified. For a general matrix \(\tilde{M}\), as defined in (4.2), we have \[\mathcal{B}^{T}\tilde{M}=\left(\sum_{k=1}^{r}\operatorname{vec}(A_{k}C)^{T} \otimes B_{k}^{T}\right)\tilde{M}=\sum_{k=1}^{r}\sum_{i,j=1}^{n}(A_{k}C)_{ij}B_{k}^{T}M_{ij}.
\tag{4.5}\] However, for \(\tilde{M}=\tilde{I}\), we have \(M_{ii}=I_{m}\) for \(i=1,\ldots,n\) and \(M_{ij}=0\) for all \(i\neq j\). Thus, (4.5) reduces to \[\mathcal{B}^{T}\tilde{I} =\sum_{k=1}^{r}B_{k}^{T}\sum_{i=1}^{n}(A_{k}C)_{ii}=\sum_{k=1}^{r} \operatorname{trace}(A_{k}C)B_{k}^{T}=\sum_{k=1}^{r}\delta_{k}B_{k}^{T}.\] with coefficients \(\delta_{k}=\operatorname{trace}(A_{k}C)\). Note that the coefficients \(\beta_{kl}\) can also be expressed as \[\beta_{kl}=\operatorname{vec}(A_{k}C)^{T}\operatorname{vec}(A_{l}C) =\langle A_{k}C,A_{l}C\rangle_{F}=\operatorname{trace}(A_{k}CC^{T}A_{l}^{T})= \operatorname{trace}(A_{l}^{T}A_{k}CC^{T})=\langle A_{k}^{T}A_{l},CC^{T} \rangle_{F},\] while the coefficients \(\delta_{k}\) are given by \[\delta_{k}=\operatorname{trace}(A_{k}C)=\langle A_{k}^{T},C\rangle_{F}.\] Although the factors \(\beta_{kl}\) and \(\delta_{k}\) may seem related, it must be emphasized that \(\beta_{kl}\neq\delta_{k}\delta_{l}\). Indeed, \(\beta_{kl}\) involves all entries of \(A_{k}C\) and \(A_{l}C\) whereas \(\delta_{k}\delta_{l}\) only involves their diagonal entries. As a matter of fact, \(\delta_{k}\delta_{l}\) is the trace of \(A_{k}C\otimes A_{l}C\) whereas \(\beta_{kl}\) is the trace of \(\mathcal{R}(A_{k}C\otimes A_{l}C)\). \[\operatorname{trace}(A_{k}C\otimes A_{l}C) =\operatorname{trace}(A_{k}C)\operatorname{trace}(A_{l}C)=\delta _{k}\delta_{l},\] \[\operatorname{trace}(\mathcal{R}(A_{k}C\otimes A_{l}C)) =\operatorname{trace}(\operatorname{vec}(A_{k}C)\operatorname{ vec}(A_{l}C)^{T})=\operatorname{vec}(A_{k}C)^{T}\operatorname{vec}(A_{l}C)= \beta_{kl}.\] Since the factor matrices \(A_{k}\) and \(B_{k}\) for \(k=1,\ldots,r\) do not change during the course of the iterations, if \(r\) is relatively small it might be worthwhile precomputing the products \(A_{k}^{T}A_{l}\) and \(B_{k}^{T}B_{l}\) for \(k,l=1,\ldots,r\) at the beginning of the algorithm. Storing these matrices will require \(O(r^{2}(n^{2}+m^{2}))\) of memory. Provided \(r\) is small with respect to \(n\) and \(m\), the memory footprint is still significantly smaller than the \(O(n^{2}m^{2})\) required for storing the Kronecker product matrix explicitly. We now assume that \(D\) is fixed and \(C\) must be computed. For this purpose, we recall that there exists a perfect shuffle permutation matrix \(S_{n,m}\)[26, Corollary 4.3.10] such that \[S_{n,m}(A\otimes B)S_{n,m}^{T}=B\otimes A.\] Since permutation matrices are orthogonal and the Frobenius norm is unitarily invariant, \[\|I-\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\|_{F}=\|P(I-\sum_{k=1}^{r}A_{k}C \otimes B_{k}D)P^{T}\|_{F}=\|I-\sum_{k=1}^{r}B_{k}D\otimes A_{k}C\|_{F}.\] Therefore, the expressions when optimizing for \(C\) are completely analogous, with \(B_{k}\) swapped for \(A_{k}\) and \(C\) swapped for \(D\). We define \[\mathcal{A}^{T}\mathcal{A}=\sum_{k,l=1}^{r}\alpha_{kl}A_{k}^{T}A_ {l}, \alpha_{kl}=\langle B_{k}^{T}B_{l},DD^{T}\rangle_{F},\] \[\mathcal{A}^{T}\tilde{I}=\sum_{k=1}^{r}\gamma_{k}A_{k}^{T}, \gamma_{k}=\langle B_{k}^{T},D\rangle_{F}.\] Note that \(\tilde{I}\) is here defined by applying the transformation (4.2) to \(I_{m}\otimes I_{n}\) (and not \(I_{n}\otimes I_{m}\) as in (4.4)). Its size is \(m^{2}n\times n\) and its only nontrivial blocks are identity matrices of size \(n\). With a slight abuse of notation, we will not distinguish the two reshaped identity matrices since it will always be clear from the context which one is used. 
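Before turning to the residual evaluation and the complete procedure (Algorithm 4.1 below), a NumPy sketch of one alternating sweep, written directly from the normal equations above, might look as follows; variable names are ours and the sketch omits the stopping criterion.

```python
import numpy as np

def als_inverse_rank1_sweep(As, Bs, C):
    """One alternating sweep for the Kronecker rank-1 approximate inverse.

    Given the current C, solve the normal equations for D, then update C from D.
    As, Bs are lists of the factor matrices A_k (n x n) and B_k (m x m).
    """
    r = len(As)
    # --- optimize D for fixed C ---
    AC = [A @ C for A in As]
    beta = np.array([[np.sum(AC[k] * AC[l]) for l in range(r)] for k in range(r)])  # <A_k C, A_l C>_F
    delta = np.array([np.trace(AC[k]) for k in range(r)])                            # trace(A_k C)
    BtB = sum(Bs[k].T @ sum(beta[k, l] * Bs[l] for l in range(r)) for k in range(r)) # sum factorization
    BtI = sum(delta[k] * Bs[k].T for k in range(r))
    D = np.linalg.solve(BtB, BtI)
    # --- optimize C for fixed D (same formulas with the roles of A and B swapped) ---
    BD = [B @ D for B in Bs]
    alpha = np.array([[np.sum(BD[k] * BD[l]) for l in range(r)] for k in range(r)])
    gamma = np.array([np.trace(BD[k]) for k in range(r)])
    AtA = sum(As[k].T @ sum(alpha[k, l] * As[l] for l in range(r)) for k in range(r))
    AtI = sum(gamma[k] * As[k].T for k in range(r))
    C = np.linalg.solve(AtA, AtI)
    return C, D
```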
The stopping criterion of the alternating least squares algorithm relies on evaluating the residual at each iteration. If this operation is done naively, much of the computational saving is lost, in addition to prohibitive memory requirements. We now discuss how the residual may be evaluated at negligible additional cost by recycling quantities that were previously computed. \[\|I-\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\|_{F}^{2} =nm-2\langle I,\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\rangle_{F}+\left \|\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\right\|_{F}^{2}\] \[=nm-2\operatorname{trace}\left(\sum_{k=1}^{r}A_{k}C\otimes B_{k}D \right)+\operatorname{trace}\left(\sum_{k,l=1}^{r}A_{k}CC^{T}A_{l}^{T}\otimes B _{k}DD^{T}B_{l}^{T}\right)\] \[=nm-2\sum_{k=1}^{r}\operatorname{trace}(A_{k}C)\operatorname{ trace}(B_{k}D)+\sum_{k,l=1}^{r}\operatorname{trace}(A_{k}CC^{T}A_{l}^{T}) \operatorname{trace}(B_{k}DD^{T}B_{l}^{T})\] \[=nm-2\sum_{k=1}^{r}\gamma_{k}\delta_{k}+\sum_{k,l=1}^{r}\alpha_{ kl}\beta_{kl} \tag{4.6}\] Since the scalars \(\alpha_{kl},\beta_{kl},\gamma_{k}\) and \(\delta_{k}\) have already been computed, evaluating (4.6) nearly comes for free. The entire procedure is summarized in Algorithm 4.1.

```
Input: Factor matrices \(\{A_{k}\}_{k=1}^{r}\subset\mathbb{R}^{n\times n}\) and \(\{B_{k}\}_{k=1}^{r}\subset\mathbb{R}^{m\times m}\)
       Initial guess for the factor matrix \(C\in\mathbb{R}^{n\times n}\)
       Tolerance \(\epsilon>0\) and maximum number of iterations \(N\in\mathbb{N}\)
Output: Factor matrices \(C\) and \(D\) such that \(C\otimes D\approx\left(\sum_{k=1}^{r}A_{k}\otimes B_{k}\right)^{-1}\)
1: Set \(r=\infty\), \(j=0\) \(\triangleright\) Initialization
2: while \(\sqrt{r}>\epsilon\) and \(j\leq N\) do \(\triangleright\) Optimizing for \(D\)
3:   Compute \(\beta_{kl}=\langle A_{k}^{T}A_{l},CC^{T}\rangle_{F}\) for \(k,l=1,\ldots,r\) \(\triangleright\) \(O(rn^{3}+r^{2}n^{2})\)
4:   Compute \(\delta_{k}=\langle A_{k}^{T},C\rangle_{F}\) for \(k=1,\ldots,r\) \(\triangleright\) \(O(rn^{2})\)
5:   Form \(\mathcal{B}^{T}\mathcal{B}=\sum_{k,l=1}^{r}\beta_{kl}B_{k}^{T}B_{l}\) \(\triangleright\) \(O(rm^{3}+r^{2}m^{2})\)
6:   Form \(\mathcal{B}^{T}\tilde{I}=\sum_{k=1}^{r}\delta_{k}B_{k}^{T}\) \(\triangleright\) \(O(rm^{2})\)
7:   Solve \(\mathcal{B}^{T}\mathcal{B}D=\mathcal{B}^{T}\tilde{I}\) \(\triangleright\) \(O(m^{3})\)
8:   Compute \(\alpha_{kl}=\langle B_{k}^{T}B_{l},DD^{T}\rangle_{F}\) for \(k,l=1,\ldots,r\) \(\triangleright\) \(O(rm^{3}+r^{2}m^{2})\)
9:   Compute \(\gamma_{k}=\langle B_{k}^{T},D\rangle_{F}\) for \(k=1,\ldots,r\) \(\triangleright\) \(O(rm^{2})\)
10:  Form \(\mathcal{A}^{T}\mathcal{A}=\sum_{k,l=1}^{r}\alpha_{kl}A_{k}^{T}A_{l}\) \(\triangleright\) \(O(rn^{3}+r^{2}n^{2})\)
11:  Form \(\mathcal{A}^{T}\tilde{I}=\sum_{k=1}^{r}\gamma_{k}A_{k}^{T}\) \(\triangleright\) \(O(rn^{2})\)
12:  Solve \(\mathcal{A}^{T}\mathcal{A}C=\mathcal{A}^{T}\tilde{I}\) \(\triangleright\) \(O(n^{3})\)
13:  Update \(\beta_{kl}\) and \(\delta_{k}\) following lines 3 and 4, respectively \(\triangleright\) Computing the residual
14:  Compute \(r=nm-2\sum_{k=1}^{r}\gamma_{k}\delta_{k}+\sum_{k,l=1}^{r}\alpha_{kl}\beta_{kl}\) \(\triangleright\) \(O(r^{2})\)
15:  Update \(j=j+1\)
16: end while
17: Return \(C\) and \(D\)
```
**Algorithm 4.1** Alternating least squares for Kronecker rank 1 approximate inverse

#### 4.1.1 Complexity analysis

When presenting Algorithm 4.1, we have favored clarity over efficiency. A practical implementation might look very different and we now describe in detail the tricks that are deployed to reduce its complexity.
Since the algorithmic steps for \(C\) and \(D\) are similar, we only discuss those for \(D\) and later adapt them to \(C\). We will assume that all factor matrices \(A_{k}\) and \(B_{k}\) are dense. * In line 3, an alternative expression for \(\beta_{kl}\) \[\beta_{kl}=\langle A_{k}^{T}A_{l},CC^{T}\rangle_{F}=\langle A_{k}C,A_{l}C\rangle_ {F}\] immediately reveals the symmetry (\(\beta_{kl}=\beta_{lk}\)). Thus, only \(\frac{r}{2}(r+1)\) coefficients must be computed, instead of \(r^{2}\). Moreover, their computation only requires \(r\) matrix-matrix products \(A_{k}C\) for \(k=1,\ldots,r\) and then a few Frobenius inner products, which in total amount to \(O(rn^{3}+r^{2}n^{2})\) operations. * A naive implementation of line 5 would require \(r^{2}\) matrix-matrix products. This number can be reduced significantly thanks to the sum factorization technique. After rewriting the equation as \[\mathcal{B}^{T}\mathcal{B}=\sum_{k,l=1}^{r}\beta_{kl}B_{k}^{T}B_{l}=\sum_{k=1}^ {r}B_{k}^{T}\sum_{l=1}^{r}\beta_{kl}B_{l},\] we notice that only \(r\) matrix-matrix products are needed once all matrices \(\sum_{l=1}^{r}\beta_{kl}B_{l}\) for \(k=1,\ldots,r\) have been computed. This technique trades some matrix-matrix products for a few additional (but cheaper) matrix sums. The workload in this step amounts to \(O(rm^{3}+r^{2}m^{2})\) operations. * Since all coefficients are independent, the algorithm is well suited for parallel computations. * A suitable sequencing of operations avoids updating \(\beta_{kl}\) and \(\delta_{k}\) before evaluating the residual. Computing the coefficients \(\delta_{k}\) and forming \(\mathcal{B}^{T}\tilde{I}\) is significantly cheaper and only leads to low order terms, which are neglected. Finally, solving the linear system \(\mathcal{B}^{T}\mathcal{B}D=\mathcal{B}^{T}\tilde{I}\) in line 7 with a standard direct solver will require \(O(m^{3})\) operations. After performing a similar analysis for the optimization of \(C\) and assuming that \(N\) iterations of the algorithm were necessary, the final cost amounts to \(O(Nr(n^{3}+m^{3})+Nr^{2}(n^{2}+m^{2}))\). The cost for evaluating the residual is negligible and does not enter our analysis. For the sake of completeness, the cost of each step is summarized in Algorithm 4.1. It may often be reduced if the factor matrices are sparse. Note in particular that the sparsity pattern of the system matrix of the normal equations does not change during the course of the iterations. Therefore, sparse direct solvers only require a single symbolic factorization. **Remark 4.1**.: In Algorithm 4.1, the products \(A_{k}^{T}A_{l}\) and \(B_{k}^{T}B_{l}\) repeatedly appear during the course of the iterations and one might be tempted to precompute them at the beginning of the algorithm. However, unless \(r\) is small, such a strategy could offset much of the storage savings gained from the Kronecker representation. Therefore, we have not considered it in our implementation. ### Kronecker rank \(q\) approximate inverse If the inverse does not admit a good Kronecker product approximation, the result of Algorithm 4.1 may be practically useless. To circumvent this issue, it might be worthwhile looking for approximations having Kronecker rank \(q\geq 2\). We will see in this section how our strategies developed for rank 1 approximations may be extended to rank \(q\geq 2\). 
We therefore consider the problem of finding \(C_{s}\in\mathbb{R}^{n\times n}\) and \(D_{s}\in\mathbb{R}^{m\times m}\) for \(s=1,\ldots,q\) that minimize \[\|I-\sum_{s=1}^{q}\sum_{k=1}^{r}A_{k}C_{s}\otimes B_{k}D_{s}\|_{F}\] For the rank 1 case, we had first transformed the problem to an equivalent one by stacking all the blocks of the matrix one above the other in reverse lexicographical order. In order to use the same transformation for the rank \(q\) case, we must first find an expression for the \((i,j)\)th block of \(\sum_{s=1}^{q}\sum_{k=1}^{r}A_{k}C_{s}\otimes B_{k}D_{s}\). This can be conveniently done by applying the same strategy adopted earlier. Indeed, the \((i,j)\)th block of the matrix is \[\sum_{s=1}^{q}\sum_{k=1}^{r}(A_{k}C_{s})_{ij}B_{k}D_{s}=\left[\sum_{k=1}^{r}(A _{k}C_{1})_{ij}B_{k},\ldots,\sum_{k=1}^{r}(A_{k}C_{q})_{ij}B_{k}\right]D\] where \(D=[D_{1};\ldots;D_{q}]\) and the semi-colon means that the factor matrices are stacked one above the other. After stacking all the blocks \((i,j)\) for \(i,j=1,\ldots,n\) on top of each other, we deduce the coefficient matrix for the least squares problem \[\mathcal{B}=[(U_{1}\otimes I_{m})B,\ldots,(U_{q}\otimes I_{m})B]\in\mathbb{R} ^{n^{2}m\times qm} \tag{4.7}\] where \(U_{s}=[\operatorname{vec}(A_{1}C_{s}),\ldots,\operatorname{vec}(A_{r}C_{s}) ]\in\mathbb{R}^{n^{2}\times r}\) for \(s=1,\ldots,q\) and \(B\) is the same as defined in (4.4) for the rank 1 approximation. Once again, the matrix \(\mathcal{B}\) will never be formed explicitly and we will instead rely on the normal equations. Although the size of the problem is larger, its structure is very similar to the rank 1 case. Indeed \(\mathcal{B}^{T}\mathcal{B}\in\mathbb{R}^{qm\times qm}\) is a \(q\times q\) block matrix consisting of blocks of size \(m\times m\). The \((s,t)\)th block is given by \[(\mathcal{B}^{T}\mathcal{B})_{st}=B^{T}(U_{s}^{T}U_{t}\otimes I_{m})B=\sum_{k,l=1}^{r}\operatorname{vec}(A_{k}C_{s})^{T}\operatorname{vec}(A_{l}C_{t})B_{k }^{T}B_{l}=\sum_{k,l=1}^{r}\beta_{kl}^{st}B_{k}^{T}B_{l}\] where we have defined \[\beta_{kl}^{st}=\operatorname{vec}(A_{k}C_{s})^{T}\operatorname{vec}(A_{l}C_{ t})=\langle A_{k}^{T}A_{l},C_{s}C_{t}^{T}\rangle_{F}.\] The steps for the right-hand side are analogous: \(\mathcal{B}^{T}\tilde{I}\in\mathbb{R}^{qm\times m}\) is a \(q\times 1\) block matrix and its \(s\)th block is given by \[B^{T}(U_{s}^{T}\otimes I_{m})\tilde{I}=\sum_{k=1}^{r}B_{k}^{T}\sum_{i=1}^{n}(A _{k}C_{s})_{ii}=\sum_{k=1}^{r}\operatorname{trace}(A_{k}C_{s})B_{k}^{T}=\sum_{ k=1}^{r}\delta_{k}^{s}B_{k}^{T}.\] with \(\delta_{k}^{s}=\langle A_{k}^{T},C_{s}\rangle_{F}\). We further note that \(\mathcal{B}^{T}\mathcal{B}\) and \(\mathcal{B}^{T}\tilde{I}\) can be expressed as \[\mathcal{B}^{T}\mathcal{B}=\sum_{k,l=1}^{r}b_{kl}\otimes B_{k}^{T}B_{l},\quad \mathcal{B}^{T}\tilde{I}=\sum_{k=1}^{r}d_{k}\otimes B_{k}^{T}\] with \[b_{kl}=\begin{pmatrix}\beta_{kl}^{11}&\ldots&\beta_{kl}^{1q}\\ \vdots&\ddots&\vdots\\ \beta_{kl}^{q1}&\ldots&\beta_{kl}^{qq}\end{pmatrix}\quad\text{and}\quad d_{k}= \begin{pmatrix}\delta_{k}^{1}\\ \vdots\\ \delta_{k}^{q}\end{pmatrix}. \tag{4.8}\] Resorting to perfect shuffle permutations allows to write a similar least squares problem for \(C=[C_{1};\ldots;C_{q}]\) once the coefficient matrices \(D_{s}\) for \(s=1,\ldots,q\) have been computed. 
The least squares problem for \(C\) leads to defining the quantities \[\mathcal{A}^{T}\mathcal{A}=\sum_{k,l=1}^{r}a_{kl}\otimes A_{k}^{T}A_{l},\quad\mathcal{A}^{T}\tilde{I}=\sum_{k=1}^{r}c_{k}\otimes A_{k}^{T}\] with \[a_{kl}=\begin{pmatrix}\alpha_{kl}^{11}&\ldots&\alpha_{kl}^{1q}\\ \vdots&\ddots&\vdots\\ \alpha_{kl}^{q1}&\ldots&\alpha_{kl}^{qq}\end{pmatrix},\quad c_{k}=\begin{pmatrix}\gamma_{k}^{1}\\ \vdots\\ \gamma_{k}^{q}\end{pmatrix} \tag{4.9}\] and \[\alpha_{kl}^{st}=\langle B_{k}^{T}B_{l},D_{s}D_{t}^{T}\rangle_{F}\quad\text{and}\quad\gamma_{k}^{s}=\langle B_{k}^{T},D_{s}\rangle_{F}.\] We will prefer those latter expressions due to their analogy with the rank 1 case. Moreover, similarly to the rank 1 case, the residual may be cheaply evaluated without forming the Kronecker products explicitly. Indeed, similarly to (4.6), we obtain \[\|I-\sum_{s=1}^{q}\sum_{k=1}^{r}A_{k}C_{s}\otimes B_{k}D_{s}\|_{F}^{2}=nm-2\sum_{k=1}^{r}\sum_{s=1}^{q}\gamma_{k}^{s}\delta_{k}^{s}+\sum_{k,l=1}^{r}\sum_{s,t=1}^{q}\alpha_{kl}^{st}\beta_{kl}^{st}=nm-2\sum_{k=1}^{r}c_{k}\cdot d_{k}+\sum_{k,l=1}^{r}\langle a_{kl},b_{kl}\rangle_{F}.\] Thus, apart from the proliferation of indices, the rank \(q\) case does not lead to any major additional difficulty. The steps necessary for computing the Kronecker rank \(q\) approximate inverse are summarized in Algorithm 4.2.
```
Input:  Factor matrices \(\{A_{k}\}_{k=1}^{r}\subset\mathbb{R}^{n\times n}\) and \(\{B_{k}\}_{k=1}^{r}\subset\mathbb{R}^{m\times m}\);
        linearly independent factor matrices \(C_{s}\in\mathbb{R}^{n\times n}\) for \(s=1,\ldots,q\);
        tolerance \(\epsilon>0\) and maximum number of iterations \(N\in\mathbb{N}\)
Output: Matrices \(C=[C_{1};\ldots;C_{q}]\) and \(D=[D_{1};\ldots;D_{q}]\) such that \(\sum_{s=1}^{q}C_{s}\otimes D_{s}\approx\left(\sum_{k=1}^{r}A_{k}\otimes B_{k}\right)^{-1}\)
 1: Set \(\rho=\infty\), \(j=0\)  \(\triangleright\) Initialization
 2: while \(\sqrt{\rho}>\epsilon\) and \(j\leq N\) do  \(\triangleright\) Optimizing for \(D_{s}\)
 3:   Compute \(\beta_{kl}^{st}=\langle A_{k}^{T}A_{l},C_{s}C_{t}^{T}\rangle_{F}\) for \(k,l=1,\ldots,r\) and \(s,t=1,\ldots,q\)  \(\triangleright\) \(O(rqn^{3}+r^{2}q^{2}n^{2})\)
 4:   Compute \(\delta_{k}^{s}=\langle A_{k}^{T},C_{s}\rangle_{F}\) for \(k=1,\ldots,r\) and \(s=1,\ldots,q\)  \(\triangleright\) \(O(rqn^{2})\)
 5:   Form the matrix \(b_{kl}\) and the vector \(d_{k}\) following (4.8)
 6:   Form \(\mathcal{B}^{T}\mathcal{B}=\sum_{k,l=1}^{r}b_{kl}\otimes B_{k}^{T}B_{l}\)  \(\triangleright\) \(O(rq^{2}m^{3}+r^{2}q^{2}m^{2})\)
 7:   Form \(\mathcal{B}^{T}\tilde{I}=\sum_{k=1}^{r}d_{k}\otimes B_{k}^{T}\)  \(\triangleright\) \(O(rqm^{2})\)
 8:   Solve \(\mathcal{B}^{T}\mathcal{B}D=\mathcal{B}^{T}\tilde{I}\)  \(\triangleright\) \(O(q^{3}m^{3})\)
 9:   Compute \(\alpha_{kl}^{st}=\langle B_{k}^{T}B_{l},D_{s}D_{t}^{T}\rangle_{F}\) for \(k,l=1,\ldots,r\) and \(s,t=1,\ldots,q\)  \(\triangleright\) \(O(rqm^{3}+r^{2}q^{2}m^{2})\)
10:   Compute \(\gamma_{k}^{s}=\langle B_{k}^{T},D_{s}\rangle_{F}\) for \(k=1,\ldots,r\) and \(s=1,\ldots,q\)  \(\triangleright\) \(O(rqm^{2})\)
11:   Form the matrix \(a_{kl}\) and the vector \(c_{k}\) following (4.9)
12:   Form \(\mathcal{A}^{T}\mathcal{A}=\sum_{k,l=1}^{r}a_{kl}\otimes A_{k}^{T}A_{l}\)  \(\triangleright\) \(O(rq^{2}n^{3}+r^{2}q^{2}n^{2})\)
13:   Form \(\mathcal{A}^{T}\tilde{I}=\sum_{k=1}^{r}c_{k}\otimes A_{k}^{T}\)  \(\triangleright\) \(O(rqn^{2})\)
14:   Solve \(\mathcal{A}^{T}\mathcal{A}C=\mathcal{A}^{T}\tilde{I}\)  \(\triangleright\) \(O(q^{3}n^{3})\)
15:   Update \(b_{kl}\) and \(d_{k}\) by repeating lines 3, 4 and 5
16:   Compute \(\rho=nm-2\sum_{k=1}^{r}c_{k}\cdot d_{k}+\sum_{k,l=1}^{r}\langle a_{kl},b_{kl}\rangle_{F}\)  \(\triangleright\) \(O(r^{2}q^{2})\)
17:   Update \(j=j+1\)
18: end while
19: Return \(C\) and \(D\)
```
**Algorithm 4.2** Alternating least squares for Kronecker rank \(q\) approximate inverse

Firstly, we note that Algorithm 4.2 reduces to Algorithm 4.1 for \(q=1\). Secondly, similarly to Algorithm 3.2, the initial factor matrices \(C_{s}\) must be linearly independent, otherwise \(\mathcal{B}^{T}\mathcal{B}\) is singular.

#### 4.2.1 Complexity analysis

Several implementation tricks may reduce the complexity of Algorithm 4.2. They are mentioned below for some critical operations. We again restrict the analysis to the optimizing procedure for \(D\) and assume all factor matrices \(A_{k}\) and \(B_{k}\) are dense.

* For computing \(\beta_{kl}^{st}\) in line 3, we use its alternative expression \[\beta_{kl}^{st}=\langle A_{k}^{T}A_{l},C_{s}C_{t}^{T}\rangle_{F}=\langle A_{k}C_{s},A_{l}C_{t}\rangle_{F}\] revealing that \(\beta_{kl}^{st}=\beta_{lk}^{ts}\) (i.e. \(b_{kl}=b_{lk}^{T}\)) and reducing the number of coefficients to \(\frac{rq}{2}(rq+1)\) instead of \(r^{2}q^{2}\). We then proceed by first computing the \(rq\) matrix-matrix products \(A_{k}C_{s}\) for \(k=1,\ldots,r\) and \(s=1,\ldots,q\) and then computing all Frobenius inner products. The combined cost amounts to \(O(rqn^{3}+r^{2}q^{2}n^{2})\) operations.
* We preferably form \(\mathcal{B}^{T}\mathcal{B}\) by proceeding blockwise. Indeed, the \((s,t)\)th block of \(\mathcal{B}^{T}\mathcal{B}\) is given by \[(\mathcal{B}^{T}\mathcal{B})_{st}=\sum_{k,l=1}^{r}\beta_{kl}^{st}B_{k}^{T}B_{l}=\sum_{k=1}^{r}B_{k}^{T}\sum_{l=1}^{r}\beta_{kl}^{st}B_{l},\] which has the same structure as the rank 1 case. Therefore, the procedure described for the rank 1 case is reused blockwise and leads to \(O(rq^{2}m^{3}+r^{2}q^{2}m^{2})\) operations in total.

Computing \(\delta_{k}^{s}\) and forming \(\mathcal{B}^{T}\tilde{I}\) again results in much smaller contributions. Finally, solving the linear system \(\mathcal{B}^{T}\mathcal{B}D=\mathcal{B}^{T}\tilde{I}\) with a standard direct solver will require \(O(q^{3}m^{3})\) operations. We perform a similar analysis for the optimization of \(C\) and summarize the cost of each step in Algorithm 4.2. Assuming that \(N\) iterations of the algorithm were necessary, after adding up all individual contributions and neglecting low order terms, the final cost amounts to \(O(Nrq^{2}(n^{3}+m^{3})+Nr^{2}q^{2}(n^{2}+m^{2}))\). Although this cost might seem significant at first glance, we must recall that the total number of iterations \(N\) and the rank \(q\) are controlled by the user and take small integer values. Clearly, if \(r\ll n,m\), forming \(\mathcal{B}^{T}\mathcal{B}\) and \(\mathcal{A}^{T}\mathcal{A}\) stand out as the most expensive operations, an observation we later confirmed in our numerical experiments. However, for small dense factor matrices, these operations benefit from highly optimized matrix-matrix multiplication algorithms (level 3 BLAS). **Remark 4.2**.: Storing all the products \(A_{k}^{T}A_{l}\) and \(B_{k}^{T}B_{l}\) was already attractive in the rank 1 case and becomes even more appealing for the rank \(q\) case given how often these terms appear in the computations. However, such a strategy might be infeasible if \(n,m\) and \(r\) are large. Contrary to approximations of the operator, the approximate inverse computed with Algorithm 4.2 is generally not symmetric, even if the operator is.
Fortunately, symmetry of the factor matrices can be easily restored by retaining their symmetric part, which experimentally did not seem to have any detrimental effect on the preconditioning quality. More importantly, Algorithm 4.2 may deliver an exceedingly good data sparse representation of the inverse. Moreover, applying the preconditioning operator \[\mathcal{P}(X)=\sum_{s=1}^{q}D_{s}XC_{s}^{T}\] only requires computing a few matrix-matrix products, which is generally much cheaper than solving standard Sylvester equations. ### Theoretical results Contrary to the nearest Kronecker product preconditioner, since we are directly approximating the inverse, bounds on the eigenvalues of the preconditioned matrix can be obtained straightforwardly. The theory was already established in the context of sparse approximate inverse preconditioning [43]. We recall below an important result. **Theorem 4.3** ([43, Theorem 3.2]).: Let \(M,P\in\mathbb{R}^{n\times n}\). Then, \[\sum_{i=1}^{n}|1-\lambda_{i}(MP)|^{2}\leq\|I-MP\|_{F}^{2}\] Proof.: The result stems from the Schur triangulation theorem [44, Theorem 2.3.1]: given a matrix \(A\in\mathbb{C}^{n\times n}\), there exists a unitary matrix \(U\in\mathbb{C}^{n\times n}\) and an upper triangular matrix \(T\in\mathbb{C}^{n\times n}\) with diagonal entries \(t_{ii}=\lambda_{i}\), the eigenvalues of \(A\), such that \(A=UTU^{*}\). Consequently, \[\sum_{i=1}^{n}|\lambda_{i}|^{2}\leq\sum_{i=1}^{n}|\lambda_{i}|^{2}+\sum_{ \begin{subarray}{c}i,j=1\\ i<j\end{subarray}}^{n}|t_{ij}|^{2}=\|T\|_{F}^{2}=\|A\|_{F}^{2}.\] The result then immediately follows after applying the previous inequality to \(A=I-MP\). Thanks to Theorem 4.3, the quality of the clustering of the eigenvalues of the preconditioned matrix is monitored since \(\|I-MP\|_{F}^{2}\) is evaluated at each iteration and coincides with the stopping criterion of the alternating least squares algorithm. Following the arguments presented in [43, Theorem 3.1, Corollary 3.1, Theorem 3.2], it is also possible to state sufficient conditions guaranteeing invertibility of the preconditioning matrix and derive estimates for the iterative condition number of the preconditioned matrix. However, these results are very pessimistic since we only have access to the Frobenius norm of the error and not its spectral norm. The minimization problem in the spectral norm was recently considered in [45] and could have interesting applications for preconditioning. However, computational methods are still in their infancy and not yet suited for large scale applications. ### Kronecker rank \(q\) sparse approximate inverse It is well-known that the entries of the inverse of a banded matrix are decaying in magnitude (although non-monotonically) away from the diagonal [46, 47]. In the case of block-banded matrices with banded blocks, the inverse features two distinctive decaying patterns: a global decay on the block level as well as a local decay within each individual block [48, 49]. Kronecker products of banded matrices fall in this category. Figure 4.1 shows the magnitude of the entries of the inverse of a Kronecker sum resulting from a finite difference discretization of the 2D Poisson problem. The global and local decaying patterns described earlier are clearly visible and were theoretically analyzed in [48, 49] for some model problems. However, they have not yet been fully exploited in applications. 
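The decay just described is easy to reproduce numerically. The following small sketch (ours, in NumPy; the size is chosen arbitrarily) builds the 2D Poisson Kronecker sum of Figure 4.1 from a standard 1D finite difference matrix and inspects the block-level magnitude of its inverse.

```python
import numpy as np

n = 20
h = 1.0 / (n + 1)
# 1D Poisson finite difference matrix, (n+1)^2 * tridiag(-1, 2, -1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))   # 2D Poisson Kronecker sum

Minv = np.abs(np.linalg.inv(M))
blocks = Minv.reshape(n, n, n, n)                   # blocks[i, :, j, :] is the (i, j) block
block_mag = blocks.max(axis=(1, 3))                 # largest entry magnitude per block

# Global decay: block magnitudes shrink away from the block diagonal;
# local decay: within a block, entries shrink away from the block's own diagonal.
print(block_mag[0, 0], block_mag[0, n // 2], block_mag[0, n - 1])
print(blocks[0, 0, 0, 0], blocks[0, 0, 0, n // 2], blocks[0, 0, 0, n - 1])
```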
Similarly to approximating the inverse of a banded matrix by a banded matrix, the inverse of Kronecker products of banded matrices could also be approximated by Kronecker products of banded matrices, as suggested in Figure 4.1. Before explaining how to obtain such an approximation, we must first recall the construction of sparse approximate inverses.

#### 4.4.1 Sparse approximate inverse techniques

We begin by recalling some of the basic ideas behind sparse approximate inverse techniques, as they were outlined in [43, 50, 51]. Given a sparse matrix \(M\in\mathbb{R}^{n\times n}\), the problem consists in finding a sparse approximate inverse of \(M\) with a prescribed sparsity pattern. Let \(S\subseteq\{(i,j)\colon 1\leq i,j\leq n\}\) be a set of pairs of indices defining a sparsity pattern and \(\mathcal{S}=\{P\in\mathbb{R}^{n\times n}\colon p_{ij}=0\ (i,j)\notin S\}\) be the associated set of sparse matrices. We then consider the constrained minimization problem \[\min_{P\in\mathcal{S}}\|I-MP\|_{F}^{2}\] where the approximate inverse now satisfies a prescribed sparsity. Noticing that \(\|I-MP\|_{F}^{2}=\sum_{j=1}^{n}\|e_{j}-Mp_{j}\|_{2}^{2}\), each column of \(P\) can be computed separately by solving a sequence of independent least squares problems. Since all columns are treated similarly, we restrict the discussion to a single one, denoted \(p_{j}\). Let \(\mathcal{J}\) be the set of indices of nonzero entries in \(p_{j}\). Since the multiplication \(Mp_{j}\) only involves the columns of \(M\) associated to indices in \(\mathcal{J}\), only the submatrix \(M(:,\mathcal{J})\) must be retained, thereby drastically reducing the size of the least squares problem.

Figure 4.1: Magnitude of the entries of the inverse of \(I\otimes A+A\otimes I\) with \(A=(n+1)^{2}\,\mathrm{tridiag}(-1,2,-1)\) and \(n=20\) resulting from a finite difference discretization of the Poisson problem on the unit square.

The problem can be further reduced by eliminating the rows of the submatrix \(M(:,\mathcal{J})\) that are identically zero (as they will not affect the least squares solution). Denoting \(\mathcal{I}\) the set of indices of nonzero rows, the constrained minimization problem turns into a (much smaller) unconstrained problem \[\min_{\hat{p}_{j}}\|\hat{e}_{j}-\hat{M}\hat{p}_{j}\|_{2}^{2}\] where \(\hat{M}=M(\mathcal{I},\mathcal{J})\), \(\hat{p}_{j}=p_{j}(\mathcal{J})\) and \(\hat{e}_{j}=e_{j}(\mathcal{I})\). The greater the sparsity of the matrices, the smaller the size of the least squares problem, which is usually solved exactly using a QR factorization. The procedure is then repeated for each column of \(P\). Instead of prescribing the sparsity pattern, several authors have proposed adaptive strategies to iteratively augment it until a prescribed tolerance is reached. For simplicity, we will not consider such techniques here and instead refer to the original articles [43, 50, 51] for further details. It goes without saying that sparse approximate inverse techniques can only be successful if the inverse can be well approximated by a sparse matrix. Although it might seem like a rather restrictive condition, it is frequently met in applications. In the next section, we will combine low Kronecker rank approximations with sparse approximate inverse techniques. In effect, it will allow us to compute low Kronecker rank approximations of the inverse with sparse factor matrices.

#### 4.4.2 Low Kronecker rank sparse approximate inverse

We first consider again the Kronecker rank 1 approximation.
We seek factor matrices \(C\in\mathcal{S}_{C}\) and \(D\in\mathcal{S}_{D}\) where \(\mathcal{S}_{C}\) and \(\mathcal{S}_{D}\) are sets of sparse matrices with prescribed sparsity defined analogously to Section 4.4.1. Recalling Equation (4.3) from Section 4.1, we have \[\|I-\sum_{k=1}^{r}A_{k}C\otimes B_{k}D\|_{F}^{2}=\|\tilde{I}-\mathcal{B}D\|_{F}^{2}=\sum_{j=1}^{m}\|\tilde{e}_{j}-\mathcal{B}d_{j}\|_{2}^{2}.\] We now proceed analogously to Section 4.4.1 and solve a sequence of independent least squares problems for each column of \(D\). Let \(\mathcal{J}\) be the set of indices corresponding to nonzero entries of \(d_{j}\) and \(\mathcal{I}\) be the set of indices for nonzero rows in \(\mathcal{B}(:,\mathcal{J})\). We then solve the unconstrained problem \[\min_{\hat{d}_{j}}\|\hat{e}_{j}-\hat{\mathcal{B}}\hat{d}_{j}\|_{2}^{2} \tag{4.10}\] where \(\hat{\mathcal{B}}=\mathcal{B}(\mathcal{I},\mathcal{J})\), \(\hat{d}_{j}=d_{j}(\mathcal{J})\) and \(\hat{e}_{j}=\tilde{e}_{j}(\mathcal{I})\). Contrary to standard sparse approximate inverse techniques, we will not rely on a QR factorization of \(\hat{\mathcal{B}}\) but on the normal equations. The solution of the least squares problem in (4.10) is the solution of the linear system \(\hat{\mathcal{B}}^{T}\hat{\mathcal{B}}\hat{d}_{j}=\hat{\mathcal{B}}^{T}\hat{e}_{j}\). Furthermore, we notice that \[\hat{\mathcal{B}}^{T}\hat{\mathcal{B}}=\mathcal{B}(\mathcal{I},\mathcal{J})^{T}\mathcal{B}(\mathcal{I},\mathcal{J})=(\mathcal{B}^{T}\mathcal{B})(\mathcal{J},\mathcal{J}),\qquad\hat{\mathcal{B}}^{T}\hat{e}_{j}=\mathcal{B}(\mathcal{I},\mathcal{J})^{T}\tilde{e}_{j}(\mathcal{I})=(\mathcal{B}^{T}\tilde{e}_{j})(\mathcal{J}).\] Therefore, the required system matrix and right-hand side vector are simply submatrices of \(\mathcal{B}^{T}\mathcal{B}\) and \(\mathcal{B}^{T}\tilde{I}\), respectively. These quantities are formed only once at each iteration and appropriate submatrices are extracted for computing each column of \(D\). This strategy is very advantageous given that forming \(\mathcal{B}^{T}\mathcal{B}\) is rather expensive. The strategy for computing \(C\) is again analogous. Overall, computing sparse factors only requires minor adjustments to Algorithm 4.1. The case of a Kronecker rank \(q\) sparse approximate inverse is not much more difficult. As we have seen in Section 4.2, \[\|I-\sum_{s=1}^{q}\sum_{k=1}^{r}A_{k}C_{s}\otimes B_{k}D_{s}\|_{F}^{2}=\|\tilde{I}-\mathcal{B}D\|_{F}^{2}\] where \(D=[D_{1};\ldots;D_{q}]\) and \(\mathcal{B}\) is defined in (4.7). We then apply exactly the same strategy as for the Kronecker rank 1 approximation. The only minor difficulty lies in defining suitable sparsity patterns \(\mathcal{S}_{C}\) and \(\mathcal{S}_{D}\) that account for the sparsity patterns of the individual factor matrices \(C_{s}\) and \(D_{s}\) stacked in the matrices \(C\) and \(D\), respectively. It must be emphasized that sparse approximate inverse techniques are applied to the factor matrices themselves. This strategy is much more efficient than blindly applying the same techniques on the (potentially huge) matrix \(M=\sum_{k=1}^{r}A_{k}\otimes B_{k}\). Apart from obvious storage savings, sparse approximate inverses further speed up the application of the preconditioning operator \(\mathcal{P}\).

## 5 Numerical experiments

We now test our preconditioning strategies on a few benchmark problems. All algorithms are implemented in MATLAB R2022b and run on MacOS with an M1 chip and 32 GB of RAM.
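Before turning to the experiments, the column-wise solves with prescribed sparsity described in Section 4.4.2 can be sketched as follows. This is our own illustration in NumPy (the reference implementation is in MATLAB), and the banded sparsity pattern at the end is an assumption made purely for the example.

```python
import numpy as np

def sparse_factor_update(BtB, BtI, patterns):
    """Column-wise solve with a prescribed sparsity pattern (rank 1 case, illustrative).

    BtB      : (m, m) normal equations matrix  B^T B
    BtI      : (m, m) right-hand side          B^T Itilde
    patterns : patterns[j] lists the row indices J allowed to be nonzero in column j of D
    """
    m = BtB.shape[0]
    D = np.zeros((m, m))
    for j in range(m):
        J = np.asarray(patterns[j])
        # The reduced system matrix and right-hand side are submatrices of quantities
        # that are formed only once per iteration.
        D[J, j] = np.linalg.solve(BtB[np.ix_(J, J)], BtI[J, j])
    return D

# Assumed example pattern: every column of D is allowed a bandwidth of two around the diagonal.
m = 8
patterns = [np.arange(max(0, j - 2), min(m, j + 3)) for j in range(m)]
D = sparse_factor_update(2.0 * np.eye(m), np.eye(m), patterns)
```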
As a first experiment, we consider the Lyapunov equation \[AX+XA=C \tag{5.1}\] where \(A=(n+1)^{2}\,\mathrm{tridiag}(-1,2,-1)\) is a symmetric tridiagonal matrix and \(C\) is a matrix of all ones. This equation is the prototypical example of a centered finite difference discretization of the Poisson problem on the unit square with homogeneous Dirichlet boundary conditions and \(n+2\) discretization points along each direction. Although it is actually a standard Sylvester equation, it serves merely as a validation check. More complicated problems will follow. We compare the nearest Kronecker product (NKP) preconditioners with (sparse) Kronecker product approximations of the inverse (KINV), as described in Sections 4.1 and 4.2, respectively. We must emphasize that the NKP preconditioners are only defined for Kronecker ranks \(q\leq 2\) while the KINV preconditioners may have larger Kronecker ranks. The former are computed using the SVD approach (Algorithm 3.1) while the latter are obtained after 10 iterations of alternating least squares (Algorithm 4.2), which was generally more than enough given the fast convergence of the algorithm. As initial guess, we chose \(C_{1}=\mathrm{diag}(1,1,\ldots,1)\) and then defined \(C_{s}\) for \(s\geq 2\) from \(C_{s-1}\) by adding a sub-diagonal and super-diagonal of ones. Figure 5.1 shows the convergence history of the Gl-GMRES method when solving (5.1) for \(n=200\) using the NKP and KINV preconditioners for \(q=1\) and \(q=1,2,3\), respectively. Since the system matrix in this specific example has Kronecker rank 2, we have only tested the NKP preconditioner for \(q=1\). Moreover, although the operators in this section are symmetric positive definite (i.e. \(\mathcal{M}=\mathcal{M}^{T}\) and \(\langle\mathcal{M}(X),X\rangle_{F}>0\) for \(X\neq 0\)), not all preconditioning operators are, which obliges us to use non-symmetric solvers such as the right preconditioned Gl-GMRES method described in Section 2. In order to ease comparison, we have used the same method for all experiments with a default tolerance of \(10^{-8}\) on the absolute residual and a zero initial matrix. According to Figure 5.1, Kronecker rank 1 preconditioners marginally improve the convergence but not enough to meet the tolerance after 100 iterations, our cap in this experiment. However, slightly increasing the Kronecker rank of the KINV preconditioners yields drastic improvements. Although increasing the Kronecker rank of the KINV preconditioners reduces the iteration count, the magnitude of the entries of the factor matrices also tends to increase. Figure 5.2 shows the magnitude of the entries of \(\sum_{s=1}^{q}|C_{s}|\), i.e. the absolute sum of entries of the factor matrices \(C_{s}\). This quantity accounts for all factor matrices and avoids being misled by individual ones (whose numbering is completely arbitrary).

Figure 5.1: Convergence history for solving (5.1) with \(n=200\) using the right-preconditioned Gl-GMRES method with NKP and KINV preconditioners.

The pattern depicted in Figure 5.2 is not surprising given that the approximate inverse should converge to the actual inverse as the Kronecker rank increases. Unfortunately, it also suggests increasing the bandwidth of sparse approximate inverses as the Kronecker rank increases. We have repeated the experiment in Figure 5.1 for increasing values of \(n\) (finer discretizations) and compared a fully dense implementation to a sparse one.
Our sparse implementation is combined with sparse approximate inverse techniques described in Section 4.4, where the sparsity pattern is prescribed following the results in Figure 5.2. Iteration counts and computing times for the dense and sparse case are summarized in Tables 5.1a and 5.1b, respectively. For the preconditioned methods, the computing time includes the setup time for the preconditioner. Evidently, exploiting sparsity is beneficial for reducing computing times and memory load but the benefits only become visible for sufficiently large problems. The experiment also reveals that the fully dense factor matrices of the approximate inverse are exceedingly well approximated by sparse matrices and lead to nearly the same iteration counts. However, the iteration counts tend to increase with the size of the problem. For fine grids, the NKP preconditioner actually increases the computing time. Indeed, as already inferred from Figure 5.1, it barely reduces the iteration count while introducing an overhead for solving matrix equations with the preconditioning operator. On the contrary, KINV preconditioners with increased Kronecker ranks can provide effective preconditioning solutions. Our next set of experiments arises from applications in isogeometric analysis. Isogeometric analysis is a spline-based discretization technique for solving PDEs [23, 52]. Conceived as an extension of the classical finite element method, it relies on spline functions such as B-splines both for parametrizing the geometry and representing the unknown solution. The underlying tensor product structure of the basis functions in dimension \(d\geq 2\) naturally leads to Kronecker products on the algebraic level. Given the scope of this work, we will restrict our discussion to dimension \(d=2\). In some idealized settings (e.g. rectangular domains and separable coefficient functions), tensorized finite element discretizations of the Poisson model problem lead to solving Sylvester equations \[B_{1}XA_{1}^{T}+B_{2}XA_{2}^{T}=C\] where the factor matrices \(A_{1},A_{2}\) and \(B_{1},B_{2}\) are stiffness or mass matrices of univariate problems [5]. This connection is an immediate consequence of the Kronecker product structure of the stiffness matrix \[K=A_{1}\otimes B_{1}+A_{2}\otimes B_{2}.\] Unfortunately, this pleasant structure only holds for idealized problems. For non-trivial single patch geometries, system matrices (e.g. stiffness and mass matrices) are nevertheless very well approximated by sums of Kronecker products; a property at the heart of several fast assembly algorithms in isogeometric analysis [53, 54, 55].

\begin{table} \end{table} Table 5.1: Iteration count / computing time (sec) for preconditioned Gl-GMRES with NKP (\(q=1\)) and KINV (\(q=3\)). For \({}^{*}\) the method did not converge within 200 iterations.

Figure 5.2: Magnitude of the entries of \(\sum_{s=1}^{q}|C_{s}|\) for \(n=50\).

Recall from Section 3 that the singular value decay of \(\mathcal{R}(M)\) indicates how well \(M\) can be approximated by a sum of Kronecker products. This decay is shown in Figure 5.3b for the geometry depicted in Figure 5.3a and does not depend on the discretization parameters. Generally speaking, the stiffness matrix has a larger Kronecker rank than the mass matrix, suggesting that NKP preconditioners might work much better for the latter. For this specific example, the mass matrix has Kronecker rank 10 while the stiffness matrix has Kronecker rank 35.
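The singular value decay of \(\mathcal{R}(M)\) invoked here is straightforward to check numerically. The sketch below (ours, in NumPy) uses one common convention for the rearrangement of a matrix built from \(n\times n\) blocks of size \(m\times m\); the test matrix is an artificial Kronecker rank 2 example, not one of the isogeometric matrices.

```python
import numpy as np

def rearrangement(M, n, m):
    """R(M): each row is the vectorized (i, j) block of M, so rank(R(M)) is the Kronecker rank."""
    R = np.empty((n * n, m * m))
    row = 0
    for i in range(n):
        for j in range(n):
            block = M[i * m:(i + 1) * m, j * m:(j + 1) * m]
            R[row] = block.flatten(order="F")   # vec of the block
            row += 1
    return R

# Artificial test: an exact Kronecker rank 2 matrix
rng = np.random.default_rng(0)
n, m = 6, 5
M = np.kron(rng.random((n, n)), rng.random((m, m))) + np.kron(rng.random((n, n)), rng.random((m, m)))
print(np.linalg.svd(rearrangement(M, n, m), compute_uv=False)[:4])   # only two significant values
```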
Fast assembly algorithms such as those described in [53, 54, 55] may compute the factor matrices without ever explicitly assembling the system matrix. We exploit this feature by solving the associated generalized Sylvester equations using again the right preconditioned Gl-GMRES method with the NKP and KINV preconditioners. The latter are computed using only 5 iterations of alternating least squares. For applications involving PDEs, it is important to design preconditioners that are robust with respect to the discretization parameters (e.g. mesh size and polynomial degree) and physical parameters (e.g. coefficient values). Therefore, we conduct an \(hp\)-refinement test by decreasing the mesh size \(h\) and increasing the spline degree \(p\). The iteration counts and computing times are reported in Tables 5.2 and 5.3 for the stiffness and mass operators, respectively, and a right-hand side matrix of all ones. The tolerance was set at \(10^{-8}\) and the number of iterations was capped at 500. Clearly, according to Table 5.2, none of our preconditioning strategies are robust with respect to the mesh size but are nearly robust with respect to the spline degree. Although the iteration counts might be smaller for the NKP preconditioner than for KINV, they are not quite as stable for fine meshes (Table 5.2). Moreover, while the setup cost for the NKP preconditioner is smaller than for KINV, its application cost is larger. Indeed, applying the NKP preconditioning operator requires solving generalized Sylvester equations for \(r=2\). In our experiments, we have first transformed them to standard form (which is generally possible) before calling MATLAB's built-in solver for Sylvester equations. MATLAB's solver relies on Schur decompositions and is well suited for small to moderate size matrices. Experiments on finer meshes would evidently require different strategies but they fall outside the scope of this paper. Our numerical experiments indicate that the smaller application cost of the KINV preconditioner generally outweighs its larger setup cost. Nevertheless, these conclusions also depend on the origin of the matrix equation and the properties of the underlying system matrix. Indeed, both preconditioning strategies perform remarkably well for the mass operator (Table 5.3).

\begin{table} \end{table} Table 5.2: Plate with a hole: Iteration count / computing time (sec) for preconditioned Gl-GMRES applied to the stiffness operator. For \({}^{*}\) the method did not converge within 500 iterations.

\begin{table} \end{table} Table 5.3: Plate with a hole: Iteration count / computing time (sec) for preconditioned Gl-GMRES applied to the mass operator. For \({}^{*}\) the method did not converge within 500 iterations.

Figure 5.3: Plate with a hole.

## 6 Conclusion

In this paper, we have proposed general and algebraic preconditioning techniques for the iterative solution of generalized Sylvester matrix equations. Our strategies rely on low Kronecker rank approximations of either the operator or its inverse. In both cases, the approximations are computed without explicitly forming the associated system matrix and are therefore well suited for large scale applications. Moreover, we have shown how sparse approximate inverse techniques could be combined with low Kronecker rank approximations, thereby speeding up the application of the preconditioning operator. Numerical experiments have shown the effectiveness of our strategies in preconditioning generalized Sylvester equations arising from discretizations of PDEs, including non-trivial problems in isogeometric analysis. Although in this context preconditioning techniques are usually tailored to specific matrices (e.g. the mass or stiffness matrix), our approach is very general and applicable to both, including linear combinations. For this reason, it might also be promising for preconditioning the stages of implicit time integration schemes. A natural extension of our work would entail solving generalized Sylvester _tensor_ equations [56], as they arise for discretizations of PDEs in three-dimensional space. Extending our methods to this case is a research direction worth exploring.
2310.05484
IDTraffickers: An Authorship Attribution Dataset to link and connect Potential Human-Trafficking Operations on Text Escort Advertisements
Human trafficking (HT) is a pervasive global issue affecting vulnerable individuals, violating their fundamental human rights. Investigations reveal that a significant number of HT cases are associated with online advertisements (ads), particularly in escort markets. Consequently, identifying and connecting HT vendors has become increasingly challenging for Law Enforcement Agencies (LEAs). To address this issue, we introduce IDTraffickers, an extensive dataset consisting of 87,595 text ads and 5,244 vendor labels to enable the verification and identification of potential HT vendors on online escort markets. To establish a benchmark for authorship identification, we train a DeCLUTR-small model, achieving a macro-F1 score of 0.8656 in a closed-set classification environment. Next, we leverage the style representations extracted from the trained classifier to conduct authorship verification, resulting in a mean r-precision score of 0.8852 in an open-set ranking environment. Finally, to encourage further research and ensure responsible data sharing, we plan to release IDTraffickers for the authorship attribution task to researchers under specific conditions, considering the sensitive nature of the data. We believe that the availability of our dataset and benchmarks will empower future researchers to utilize our findings, thereby facilitating the effective linkage of escort ads and the development of more robust approaches for identifying HT indicators.
Vageesh Saxena, Benjamin Bashpole, Gijs Van Dijck, Gerasimos Spanakis
2023-10-09T07:43:57Z
http://arxiv.org/abs/2310.05484v1
IDTraffickers: An Authorship Attribution Dataset to link and connect Potential Human-Trafficking Operations on Text Escort Advertisements ###### Abstract Human trafficking (HT) is a pervasive global issue affecting vulnerable individuals, violating their fundamental human rights. Investigations reveal that a significant number of HT cases are associated with online advertisements (ads), particularly in escort markets. Consequently, identifying and connecting HT vendors has become increasingly challenging for Law Enforcement Agencies (LEAs). To address this issue, we introduce IDTraffickers, an extensive dataset consisting of 87,595 text ads and 5,244 vendor labels to enable the verification and identification of potential HT vendors on online escort markets. To establish a benchmark for authorship identification, we train a DeCLUTR-small model, achieving a macro-F1 score of 0.8656 in a closed-set classification environment. Next, we leverage the style representations extracted from the trained classifier to conduct authorship verification, resulting in a mean r-precision score of 0.8852 in an open-set ranking environment. Finally, to encourage further research and ensure responsible data sharing, we plan to release IDTraffickers for the authorship attribution task to researchers under specific conditions, considering the sensitive nature of the data. We believe that the availability of our dataset and benchmarks will empower future researchers to utilize our findings, thereby facilitating the effective linkage of escort ads and the development of more robust approaches for identifying HT indicators. ## 1 Introduction Human trafficking (HT) is a global crime that exploits vulnerable individuals for profit, affecting people of all ages and genders (EUROPOL, 2020; UNDOC, 2020). Sex trafficking, a form of HT, involves controlling victims through violence, threats, deception, and debt bondage to force them into commercial sex (ILO, 2012). These operations occur in various locations such as massage businesses, brothels, strip clubs, and hotels (EUROPOL, 2020). Women and girls comprise a significant portion of HT victims, particularly in the commercial sex industry (ILO, 2012). Despite being advertised online, many victims have no control over the content of the advertisements (ads). Around 65% of HT victims are advertised online for escort services in the United States (POLARIS, 2020). However, the large number of online escort ads makes manual detection of HT cases infeasible, leading to numerous unidentified instances (POLARIS, 2018). Researchers and law enforcement agencies (LEAs) rely on sex trafficking indicators (Ibanez and Suthers, 2014; Ibanez and Gazan, 2016; Lugo-Graulich and Meyer, 2021) to identify HT ads. However, these investigations require linking ads to individuals or trafficking rings, often using phone numbers or email addresses to connect them. Our research reveals that only 37% (202,439 out of 513,705) of collected ads have such contact information. Moreover, manual detection of HT cases is time-consuming and resource-intensive due to the high volume of online escort ads.
To address this, researchers and LEAs are exploring automated systems leveraging data analysis (Keskin et al., 2021), knowledge graphs (Szekely et al., 2015; Kejiwal and Szekely, 2022), network theory (Ibanez and Suthers, 2014, 2016; Kejiwal and Kapoor, 2019; Kosmas et al., 2022), and machine learning (Dubrawski et al., 2015; Portnoff et al., 2017; Tong et al., 2017; Stylianou et al., 2017; Alvari et al., 2017; Shahrokh Esfahani et al., 2019; Wiriyakun and Kurutach, 2021; Wang et al., 2019). A recent literature review by Dimas et al. (2022) highlights current trends on various research fronts for combating HT. Although most of the abovementioned studies were conducted on online ads from the Backpage escort market, none analyzed authorship features to link and connect these trafficking operations. In the absence of phone numbers, email addresses, and private identifiers, such authorship techniques can become key to connecting vendor communities and analyzing the language, style, and content of escort ads. Therefore, as illustrated in Figure 1, this research focuses on bringing the following contributions to bridge the gap between authorship techniques and HT: (i) Authorship dataset: Through this research, we release IDTraffickers, an authorship attribution dataset of 87.5K text ads collected between December 2015 and April 2016 from the United States' Backpage escort market. While we do not claim that all the ads in our dataset comprise sex trafficking operations, investigations have uncovered the facilitation of numerous sex trafficking operations within the Backpage escort market Callanan et al. (2017). Analyzing the language and content of these escort advertisements can provide crucial insights into the authorship traits and patterns associated with trafficking operations. Furthermore, by developing authorship approaches on such a dataset, we can uncover recurring patterns and use them to link ads from potential HT communities, thereby bridging the gap in identifying and connecting individuals or groups involved in trafficking operations. (ii) Authorship Benchmarks: On escort markets, multiple vendors and communities often share a single account and post numerous ads. Moreover, some vendors create multiple accounts to avoid detection by LEA and expand their business. To address these challenges, we first establish an authorship identification benchmark through a closed-setting classification task Vaze et al. (2022) (Figure 1(ii)). Given a specific text, the objective of the classifier is to predict the vendor that posted the advertisement. Furthermore, using the style representations from our trained classifier, we also establish an authorship verification benchmark through an open-setting text-similarity-based ranking task (Figure 1(iii)). Given two ads, we compute the cosine similarity between the style representations to analyze the patterns in writing style and determine if they came from the same vendor. Only 37% of the ads in our dataset contained phone numbers. While clustering approaches Lee et al. (2021); Vajiac et al. (2023); Nair et al. (2022); Vajiac et al. (2023) can assist in connecting near-duplicate ads, they fail to establish connections in paraphrased and distinct ads. Therefore, we focus our research on leveraging authorship techniques to analyze unique writing styles within escort ads and establish connections with individual vendors.
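As a toy illustration of the verification setup in contribution (ii), the decision reduces to a cosine similarity between two style representations. The snippet below assumes the embeddings have already been produced by a trained encoder; the dimensionality and the threshold are ours and purely illustrative.

```python
import numpy as np

def cosine_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between the style representations of two ads."""
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# emb_a and emb_b would come from a trained encoder's mean-pooled representations.
emb_a, emb_b = np.random.rand(768), np.random.rand(768)
same_vendor = cosine_similarity(emb_a, emb_b) > 0.8   # illustrative threshold, not a reported value
```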
Authorship Attribution in NLP:In the past, research has established many machine-learning approaches to analyze text styles and link distinctive writing characteristics to specific authors. These approaches encompass TF-IDF-based clustering and classification techniques Agarwal et al. (2019); Izzet Bozkurt et al. (2007), conventional convolutional neural networks (CNNs) Rhodes (2015); Shrestha et al. (2017), recurrent neural networks (RNNs) Zhao et al. (2018); Jafariakinabad et al. (2019); Gupta et al. (2019), and contextualized transformers Fabien et al. (2020); Ordonez et al. (2020); Uchendu et al. (2020); Barlas and Stamatatos (2021). Moreover, researchers have recently demonstrated the effectiveness of contrastive learning approaches Gao et al. (2022) for authorship tasks Rivera-Soto et al. (2021); Ai et al. (2022). These advancements have led to applications in style representational approaches Hay et al. (2020); Zhu and Jurgens (2021); Wegmann et al. (2022), which currently represent the state-of-the-art (SOTA) for authorship tasks. Consequently, several datasets Conneau and Kiela (2018); Andrews and Bishop (2019); Bevendorff et al. (2020, 2023) have been established to facilitate further research in this area. Authorship Attribution and Cybercrime:Numerous authorship attribution studies have been successfully applied to the fields of forensic Yang and Chow (2014); Johansson and Isbister (2019); Belvisi et al. (2020) and cybercrime investigations Zheng et al. (2003); Rashid et al. (2013), spam detection Alazab et al. (2013); Jones et al. (2022), and linking vendor accounts on darknet markets Ekmanbaranathan (2018); Tai et al. (2019); Manolache et al. (2022); Saxena et al. (2023). However, to our knowledge, none of the existing studies focus on connecting vendors of HT through escort ads. In this research, we address this gap by introducing a novel dataset, IDTraffickers, which enables us to highlight the distinctions between language in existing authorship datasets and escort ads. Furthermore, we demonstrate the capabilities of authorship attribution approaches in establishing connections between escort advertisements and HT vendors using authorship verification and identification tasks. ## 3 Dataset The data in this research is collected from online posted escort ads between December 2015 and April 2016 on the Backpage Market, a classified ads website similar to Craigslist on the surface web 1. Although the market listing hosted everything from apartments to escorts, a report by Fichtner (2016) suggested that 90% of Backpage's revenue came from adult ads. Another report by Callanan et al. (2017) suggests that Backpage hosted escort listings concerned with the sex trafficking operations of women and children across 943 locations, 97 countries, and 17 languages. In this research, we accumulated 513,705 advertisements spread across 14 states and 41 cities in the United States. Footnote 1: While we perform the pre-processing and restructuring of data for the authorship task, we would like to acknowledge Bashpole Software, Inc. for sharing the scraped data with us. Preprocessing:First, we begin by merging the title and description of the text ads using the "[SEP]" token, as illustrated in Figure 1[i]. Figure 2(A) and figure 2(B) show that most ads in our dataset (approximately 99%) have a sentence length below 512 tokens and 2,000 characters. To generate ground truth, i.e., vendor labels, we employ the TJBatchExtractor Nagpal et al. (2017) and CNN-LSTM-CRF classifier Chambers et al. 
(2019) to extract phone numbers from the ads.

Figure 2: (A) Total number of tokens per ad, (B) Total number of characters per ad, and (C) Number of ads per vendor (class-frequency) distributions.

Subsequently, we utilize NetworkX Hagberg et al. (2008) to create vendor communities based on these phone numbers. Each community is assigned a label ID, which forms the vendor labels. For evaluation purposes, ads without phone numbers are discarded, resulting in a remaining dataset of 202,439 ads. Following the findings of Lee et al. (2021), which indicate that the average vendor of escort ads has 4-6 victims, we remove entries from vendors with fewer than five (average of 4-6) ads. The overall outcome of this process is a dataset comprising 87,595 unique ads and 5,244 vendor labels. Most of these vendors have an ad frequency of under 1,000 2. Footnote 2: For a more comprehensive understanding of our dataset, we encourage readers to find a detailed explanation in the datasheet attached in appendix A.2. After generating the vendor labels, we took measures to safeguard privacy by masking sensitive information within the ad descriptions. This included masking phone numbers, email addresses, age details, post ids, dates, and links to ensure that none of these details could be reverse-engineered, thereby minimizing the potential misuse of our dataset. Despite our efforts to extract escort names and location information using BERT-NER, RoBERTa-NER-based entity recognition techniques, and the approach described by Li et al. (2022), we encountered a significant number of false positives. Unfortunately, our attempts to mask this information resulted in further noise in our data. Consequently, we decided to forgo this approach. Differences between Existing Authorship and IDTraffickers dataset: To understand the differences between existing authorship datasets and IDTraffickers, we examine the part-of-speech (POS) and wikification Szymanski and Naruszewicz (2019) distributions between IDTraffickers, PAN2023 Bevendorff et al. (2023), and the Reddit-Conversations dataset Wegmann et al. (2022). The POS distribution is parsed through the RoBERTa-base spacy-transformers tagger Montani et al. (2020), whereas the wikification is carried out using the Amazon ReFinED entity linker Ayoola et al. (2022). Figure 3 presents a comparative analysis of the POS (part-of-speech) distributions among three datasets. The results reveal that the IDtraffickers dataset exhibits a higher frequency of punctuations, emojis, white spaces, proper nouns, and numbers than the other datasets. The punctuations, emojis, white spaces, and random characters represent approximately 47% of all POS tags in the IDtraffickers dataset. In contrast, these tags only account for 10.6% and 12.4% of all tags in the PAN2023 and Reddit conversation datasets, respectively. This discrepancy sheds light on the substantial noise within our dataset, highlighting the need for fine-tuning for domain adaptation. In addition to examining POS distributions, we also investigate the wikifiability, or the presence of entities with corresponding Wikipedia mentions, on a per-advertisement basis. Figure 4 provides insights into the wikifiability across three datasets: IDTraffickers, PAN2023, and Reddit Conversational. Notably, the IDTraffickers dataset exhibits a higher level of wikifiability compared to the PAN2023 and Reddit Conversational datasets.
However, a closer examination in figure 5 reveals that the majority of recognized entities in the IDTraffickers dataset are primarily related to locations, escort names, or organizations. This observation aligns with the nature of the ads, as they often include information such as the posting's location, the escort's name, and nearby landmarks. \begin{table} \begin{tabular}{l l l} \hline **Geography** & **Advertisements** & **Vendors** \\ \hline \hline **East** & 24,000 & 5,029 \\ **West** & 22,556 & 2,576 \\ **North** & 3,124 & 254 \\ **South** & 27,871 & 2,291 \\ **Central** & 21,124 & 2,928 \\ \hline **Overall** & 87,595 & 5,244 \\ \hline \end{tabular} \end{table} Table 1: Total number of unique advertisements and vendors across US geography Figure 3: **POS-distribution:** Normalized POS-distribution for IDtraffickers, PAN2023, and Reddit-Conversations datasets. ## 4 Experimental Setup ### Authorship Identification: A Classification Task Researchers have consistently demonstrated that the transformers-based contextualized models outperform traditional stylometric approaches, statistical TF-IDF, and conventional RNNs and CNNs on authorship tasks Kumar et al. (2020); Fabien et al. (2020); Ai et al. (2022); Saxena et al. (2023). Hence, we establish our baselines through closed-set classification experiments using the distilled versions of BERT-cased Sanh et al. (2020), RoBERTabase Liu et al. (2019), and GPT2 Li et al. (2021), smaller versions of, AlBERTa Lan et al. (2020) and DeBERTa-v3 He et al. (2023), and architectures trained on contrastive objective such as MiniLM Wang et al. (2020) and DeCLUTR Giorgi et al. (2021). To account for domain differences, we also fine-tuned a RoBERTa-base language model (LM) on the IDTraffickers ads for the language task. Then, we extract the sentence representations from our trained LM and employ mean-pooling for the closed-set classification task 3. In our research, we refer to this model as the LM-Classifier. Finally, we evaluated our results against a classifier trained on style representations Wegmann et al. (2022), which currently represents the state-of-the-art (SOTA) in authorship tasks. We employed several metrics for evaluation, including balanced accuracy, micro-F1, weighted-F1, and macro-F1 scores. However, we emphasize the performance of our classifiers on the macro-F1 score due to the class imbalance (Figure 2(C)) in our dataset 4. Footnote 3: Please note that we experiment with mean, max, and mean-max pooling strategies for both author verification and identification tasks. However, the best results are obtained using the mean-pooling strategy Footnote 4: The training setup and hyperparameter details are described in the appendix section A.1. ### Authorship Verification: A Ranking Task The closed-set classifier effectively identifies known vendors presented to it during training. However, it cannot handle inference for unknown vendors. Given the daily frequency of escort ads, it is impractical for law enforcement agencies (LEAs) to repeatedly train a network whenever a new vendor emerges. To address this limitation, we leverage the trained classifier to extract mean-pooled style representations from the ads and utilize FAISS Johnson et al. (2019) for a similarity search. Specifically, we employ k-means clustering on the style representations. Our test set ads serve as query documents, while the training set ads act as index documents. By employing cosine similarity, we identify the K closest index documents for each query document. 
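A minimal sketch of this retrieval step (ours) is shown below. It assumes the mean-pooled representations have already been extracted and uses a flat inner-product FAISS index over L2-normalized vectors, which makes the search equivalent to cosine similarity; the actual index configuration used for the benchmark may differ.

```python
import faiss
import numpy as np

d = 768                                                    # embedding dimensionality (assumed)
train_embs = np.random.rand(10000, d).astype("float32")    # index documents (training ads)
test_embs = np.random.rand(100, d).astype("float32")       # query documents (test ads)

faiss.normalize_L2(train_embs)                             # in-place L2 normalization
faiss.normalize_L2(test_embs)
index = faiss.IndexFlatIP(d)                               # inner product on unit vectors = cosine
index.add(train_embs)

K = 10
scores, neighbors = index.search(test_embs, K)             # K closest training ads per query
print(neighbors[0], scores[0])
```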
Since we treat the authorship verification task as a ranking task, we evaluate the effectiveness of this similarity search operation using Precision@\(K\), Recall@\(K\), Mean Average Precision (MAP@\(K\)) Pothula and Dhavachelvan (2011); Jin et al. (2021), and average R-Precision scores Beitzel et al. (2009) metrics. ## 5 Results ### Authorship Identification Task Table 3 showcases the performance of our trained classifier baselines, evaluated on the test IDTraffi
On the other hand, increasing Recall with higher \(K\) values indicates \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{2}{*}{K} & \multirow{2}{*}{@1} & \multirow{2}{*}{@3} & \multirow{2}{*}{@5} & \multirow{2}{*}{@10} & \multirow{2}{*}{@20} & \multirow{2}{*}{@25} & \multirow{2}{*}{@50} & \multirow{2}{*}{@100} & \multirow{2}{*}{@X} \\ \hline \multicolumn{6}{|c|}{Precision@\(K\)} \\ \hline Style & 0.0482 \(\pm\) 0.20 & 0.0410 \(\pm\) 0.16 & 0.0391 \(\pm\) 0.15 & 0.0366 \(\pm\) 0.13 & 0.0329 \(\pm\) 0.11 & 0.0319 \(\pm\) 0.10 & 0.0270 \(\pm\) 0.08 & 0.0227 \(\pm\) 0.07 & - \\ DeCLUTR & 0.3198 \(\pm\) 0.46 & 0.2883 \(\pm\) 0.39 & 0.2671 \(\pm\) 0.36 & 0.2278 \(\pm\) 0.32 & 0.1837 \(\pm\) 0.27 & 0.1693 \(\pm\) 0.26 & 0.1277 \(\pm\) 0.21 & 0.0893 \(\pm\) 0.15 & - \\ Style & 0.9616 \(\pm\) 0.19 & 0.9437 \(\pm\) 0.19 & 0.9124 \(\pm\) 0.21 & 0.8175 \(\pm\) 0.27 & 0.6818 \(\pm\) 0.33 & 0.6328 \(\pm\) 0.35 & 0.4815 \(\pm\) 0.36 & - \\ DeCLUTR & 0.9672 \(\pm\) 0.17 & 0.9532 \(\pm\) 0.17 & 0.9292 \(\pm\) 0.19 & 0.8253 \(\pm\) 0.26 & 0.6868 \(\pm\) 0.33 & 0.6367 \(\pm\) 0.34 & 0.8355 \(\pm\) 0.36 & 0.3561 \(\pm\) 0.36 & - \\ \hline \multicolumn{6}{|c|}{Recall@\(K\)} \\ \hline Style & 0.0023 \(\pm\) 0.01 & 0.0063 \(\pm\) 0.04 & 0.0091 \(\pm\) 0.05 & 0.0146 \(\pm\) 0.07 & 0.0233 \(\pm\) 0.09 & 0.0269 \(\pm\) 0.10 & 0.0394 \(\pm\) 0.12 & 0.0580 \(\pm\) 0.15 & - \\ DeCLUTR & 0.0242 \(\pm\) 0.06 & 0.0567 \(\pm\) 0.12 & 0.0792 \(\pm\) 0.16 & 0.1136 \(\pm\) 0.20 & 0.1539 \(\pm\) 0.24 & 0.1675 \(\pm\) 0.2122 \(\pm\) 0.29 & 0.2590 \(\pm\) 0.31 & - \\ Style & 0.0828 \(\pm\) 0.09 & 0.2348 \(\pm\) 0.24 & 0.3483 \(\pm\) 0.32 & 0.5092 \(\pm\) 0.37 & 0.6652 \(\pm\) 0.37 & 0.6945 \(\pm\) 0.36 & 0.7909 \(\pm\) 0.29 & 0.8600 \(\pm\) 0.27 & - \\ DeCLUTR & 0.0836 \(\pm\) 0.09 & 0.2397 \(\pm\) 0.25 & 0.3563 \(\pm\) 0.32 & 0.5192 \(\pm\) 0.37 & 0.6653 \(\pm\) 0.37 & 0.7041 \(\pm\) 0.36 & 0.7988 \(\pm\) 0.32 & 0.8664 \(\pm\) 0.27 & - \\ \hline \multicolumn{6}{|c|}{MAP.} \\ \hline Style & 0.0442 \(\pm\) 0.20 & 0.0562 \(\pm\) 0.21 & 0.0598 \(\pm\) 0.21 & 0.0640 \(\pm\) 0.21 & 0.0673 \(\pm\) 0.21 & 0.0681 \(\pm\) 0.21 & 0.0700 \(\pm\) 0.21 & 0.0712 \(\pm\) 0.21 & - \\ DeCLUTR & 0.3198 \(\pm\) 0.46 & 0.3587 \(\pm\) 0.45 & 0.3681 \(\pm\) 0.45 & 0.3750 \(\pm\) 0.44 & 0.3794 \(\pm\) 0.44 & 0.3803 \(\pm\) 0.44 & 0.3823 \(\pm\) 0.44 & 0.3833 \(\pm\) 0.44 & - \\ Style & 0.9616 \(\pm\) 0.19 & 0.9687 \(\pm\) 0.16 & 0.9706 \(\pm\) 0.15 & 0.9709 \(\pm\) 0.14 & 0.9710 \(\pm\) 0.14 & 0.9710 \(\pm\) 0.14 & 0.9710 \(\pm\) 0.14 & - \\ DeCLUTR & 0.9672 \(\pm\) 0.17 & 0.9735 \(\pm\) 0.14 & 0.9746 \(\pm\) 0.14 & 0.9752 \(\pm\) 0.13 & 0.9755 \(\pm\) 0.13 & 0.9756 \(\pm\) 0.13 & 0.9756 \(\pm\) 0.13 & - \\ \hline \multicolumn{6}{|c|}{Re-Precision@\(J\)} \\ \hline Style & - & - & - & - & - & - & - & 0.0199 \(\pm\) 0.07 \\ DeCLUTR & - & - & - & - & - & - & 0.1641 \(\pm\) 0.23 \\ Style & - & - & - & - & - & - & 0.8601 \(\pm\) 0.22 \\ DeCLUTR & - & - & - & - & - & - & - & 0.8850 \(\pm\) 0.20 \\ \hline \end{tabular} \end{table} Table 2: Precision@\(K\), Recall@\(K\), MAP@\(K\), and R-Precision@\(X\) scores for the DeCLUTR and Style-Embedding models before and after being trained on the IDTraffickers dataset \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Models** & **Acc.** & **Micro-F1** & **Weight-F1** & **Macro-F1** \\ \hline \hline \multicolumn{4}{|c|}{**Distilled Models**} \\ \hline BERT & 0.9110 & 0.9147 & 0.9143 & 0.8467 \\ RoBERTa & 0.9199 & 0.9230 & 0.9229 & 0.8603 \\ GPT2 & 0.9132 & 0.9172 & 0.9166 & 0.8500 \\ \hline \hline 
\multicolumn{4}{|c|}{**Smaller Models**} \\ \hline \hline ALBERT & 0.7832 & 0.7891 & 0.7925 & 0.6596 \\ DeBERTa-v3 & 0.8703 & 0.8757 & 0.8756 & 0.7825 \\ T5 & 0.9157 & 0.9192 & 0.9190 & 0.8535 \\ \hline \hline \multicolumn{4}{|c|}{**Contrastive Learning Models**} \\ \hline \hline miniLM & \end{tabular} \end{table} that our trained model retrieves a large proportion of relevant ads for a query from the entire set of available relevant ads. Given that all vendors in our dataset have at least five ads each, we emphasize the recall performance of our trained model for \(K\) values greater than or equal to 5. In addition to Precision@K and Recall@K, we also evaluate the performance of our trained models using MAP@K, which considers the ordering of the retrieved items. It calculates the average precision across all relevant items within the top-K positions, considering the precision at each position. As can be observed, our retrieval system prioritizes ads from the same vendor when presented with a query advertisement from vendor A. The high MAP scores achieved by our trained models for all K values indicate that the retrieved ads are effectively ranked, with the most relevant ones appearing at the top of the list. Finally, we evaluate our trained models using the R-Precision metric, which focuses on precision when the number of retrieved ads matches the number of relevant ads (X) for a vendor. This metric disregards irrelevant items and solely concentrates on accurately retrieving relevant ones. The results show that the DeCLUTR-small models achieve approximately 88% precision for retrieving relevant ads among the top-ranked items. In other words, our vendor verification setup effectively captures and presents the most relevant ads for a given query, achieving high precision at the rank equal to the number of relevant ads. Consequently, we establish the DeCLUTR-small model as the benchmark for the vendor verification task on the IDTraffickers dataset. However, there is significant fluctuation in standard deviation, indicating that precision varies depending on the vendor and ad frequency. Further investigation confirms this observation, with lower Precision@K, Recall@K, MAP@K, and R-Precision@X values for vendors with fewer ads. ### Qualitative Analysis To generate interpretable results, we use transformers-interpret (Pierse, 2021) built upon Captum (Kokhlikyan et al., 2020) to compute local word attributions in our text ads using the DeCLUTR-small classifier benchmark, indicating the contribution of each word to its respective vendor prediction. Figures 5(a) and 5(b) showcase True Positive and False Positive predictions from our trained classifier. Furthermore, for each query in the test dataset, we employ FAISS to identify the most similar ad in the training dataset. The figures depict positive (in green), zero, and negative (in red) word attribution scores for queries and their corresponding anchors associated with their vendor label. Similar writing patterns and word attributions in the True Positive explanations confirm that the ads are associated with the same vendor. This inference is supported by the consistent use of "@" and "/" preceding the digits of the masked phone number "/NN N / NN N/ NN NN." Conversely, the False Positive explanations shed light on instances where the model generated incorrect predictions, likely due to significant content and writing style similarities between vendors 4310 and 742. Both vendors frequently used a continuous sequence of "?"
and mentioned Japanese services in their ads. Further examination reveals several instances where the classifier predicted the wrong vendor classes due to similar strong resemblances between the ads. This finding emphasizes the importance of carefully evaluating the quality of our classification labels.

Figure 6: Model Explanations from the trained DeCLUTR-small classifier

Note that vendor labels are established using the extracted phone numbers mentioned in the ads, which enabled us to connect ads and form vendor communities. In some cases, ads mentioned multiple phone numbers, aiding us in forming these connections. For others, the absence of this information led us to create a new vendor label. The hallucinations observed in our explanations result from this label assignment process, indicating the possibility of two vendors being the same entity. Inspired by Rethmeier et al. (2020), figure 7 employs global feature attributions to examine the discrepancies in writing styles between two specific vendors, namely vendors 11178 and 11189. The analysis involves collecting word attributions and part-of-speech (POS) tags for all the advertisements from both vendors. The resulting bar plot presents the normalized POS density activated within the vendor ads and scatter points highlighting the two most attributed tokens for each POS tag associated with the respective vendor5. The visualization clearly illustrates that both vendors employ distinct grammatical structures and word attributions in their advertisements, suggesting they are separate entities. This analysis enables law enforcement agencies (LEAs) to enhance their understanding of the connections between multiple vendors without solely relying on the ground truth.

Footnote 5: Please note that we generate the plots using Plotly, which offers infinite zooming capabilities for any number of scatter points. However, we only display the two most attributed tokens for better clarity and visibility in the paper.

## 6 Conclusion

In this research, we attempt to bridge the gap in connecting escort ads to potential HT vendors by introducing IDTraffickers, an authorship attribution dataset collected from the Backpage escort market advertisements (ads) within the United States geography. The dataset contains 87,595 ads with a text sequence of title and description and 5,244 vendor labels. Since these ads lack ground truth for the authorship task, we generate the labels by connecting the phone numbers. First, we establish a benchmark for the authorship identification task by training a DeCLUTR-small classifier with a macro-F1 of 0.8656 in a closed-set classification environment. Then, we utilize the style representations from the trained classifier to perform an open-set ranking task and establish a benchmark with an r-precision score of 0.8852 for the authorship verification task. Our experiments reveal a massive difference between the language in IDTraffickers and existing authorship datasets. By performing the authorship identification task, we allow our classifier to adapt and benefit from this domain knowledge for the authorship verification task. Furthermore, we utilize the local and global feature attribution techniques to perform qualitative analysis on the trained classifier. Finally, our analysis reveals that most misclassifications in the trained classifier occur due to the possibility of multiple labels attributing to the same vendor.
However, despite the lower performance, the classifier succeeds in generating style representations that allow us to identify these vendors through the ranking task. We believe that the availability of our dataset, benchmarks, and analyses will empower future researchers and LEAs to utilize the findings, aiding in the effective linkage of escort ads and developing more robust approaches to identifying HT indicators.

## 7 Limitations

Assumption: This research relies upon the classification task to adapt and benefit from domain knowledge. We assume each class label represents a different vendor in the classification process. However, our qualitative analysis reveals misclassifications by the trained classifier due to heavy resemblance in writing style and content, indicating the possibility of multiple vendors being the same entity. While we cannot establish an absolute ground truth to validate our hypothesis, this presents a significant challenge. Nevertheless, we acknowledge that training a classifier with better-quality vendor labels would enhance the performance of our benchmarks.

Figure 7: Word attribution collected over POS-distribution for ads of vendors 11178 and 11189.

Larger Architectures: Due to limited computational resources, we conducted our experiments using distilled and small transformers-based architectures. It is worth noting that training the model on larger architectures has the potential to improve overall performance. Additionally, in this research, we utilized pre-trained representations to initialize our classifier architectures. However, incorporating supervised contrastive finetuning on our data can enhance the generation of stylometric representations, leading to better performance. Therefore, in the future, we plan to introduce a supervised contrastive pre-training baseline into our experiments.

Zero-Shot Performance: While we ensure not to use ads from the training dataset as queries for the ranking task, it is worth noting that the classifier was trained on the same dataset for the authorship identification task. To gain insights into the zero-shot capabilities of our trained representations, we intend to evaluate the authorship verification benchmark on unseen data. In order to achieve this, we plan to expand our data collection efforts, potentially sourcing data from other escort markets. This expansion will allow us to assess whether our model can generate stylometric representations that are universally applicable.

Explainability: This research employs local and global feature attribution techniques for qualitative analysis. However, it is important to address the limitations of these approaches Das and Rad (2020); Krishna et al. (2022). Local feature attribution techniques are susceptible to adversarial attacks and network sparsity. Moreover, they lack consideration for the broader context and dependencies in the data, and their explanations may not be entirely intuitive. Similarly, global feature attribution techniques suffer from a lack of granularity and contextual information. Both methods exhibit disagreements and significant inconsistencies in their explanations when applied to different XAI frameworks Saxena et al. (2023). Therefore, in the future, we plan to develop more dependable explainability approaches that can help us better understand model behavior and ultimately foster trust amongst LEAs.
## 8 Broader Impact Data Protocols:We collected our dataset from the Backpage Escort Markets, posted between December 2015 and April 2016 across various locations in the United States. In a related work Krotov and Silva (2018), the ethical implications of web scraping are characterized using seven guidelines. Adhering to these guidelines, we confirm that no explicit prohibitions are stated in terms of use policy on the Backpage website against data scraping. Furthermore, we intend to make our dataset available through the Dataverse data repository to mitigate any potentially illegal or fraudulent use of our dataset. The access to the data will be subject to specific conditions, including the requirement for researchers to sign a non-disclosure agreement (NDA) and data protection agreements. These agreements will prohibit sharing the data and its use for unethical commercial purposes. Considering that the data was collected prior to the seizure of the Backpage escort markets, we are confident that it does not pose any substantial harm to the website or its web server. Privacy Considerations and Potential Risks:While we acknowledge the potential privacy concerns associated with the information within escort advertisements, we have taken measures to mitigate these risks. Specifically, we have masked sensitive details such as phone numbers, email addresses, age information, post IDs, dates, and links within the advertisements. Furthermore, as explained in preprocessing section 3, we also experiment with various entity recognition techniques to try masking escort names and the posted locations mentioned in the advertisements. However, due to noise in our data, we encountered challenges in accurately masking these segments, resulting in false positive entity predictions. Nevertheless, considering that previous research indicates escorts often use pseudonyms in their advertisements Carter et al. (2021); Lugo-Graulich (2016) and no public records of these advertisements exist after the seizure of Backpage Escort Markets in 2016, we find it unlikely that anyone can exploit the personal data in our advertisements to harm these individuals. Finally, justifications for processing personal data apply (Article 6 of the General Data Protection Regulation (GDPR))). Given the nature of human trafficking, escort markets, and their interconnections, we believe that our research has the potential to assist Law Enforcement Agencies (LEAs) and researchers in their efforts to combat human trafficking and ultimately reduce harm and save lives. Legal Impact:We cannot predict the specific impact of our research on the law enforcement process. Through this research, we aim to only assist Law Enforcement Agencies (LEAs) in comprehending vendor connections in online escort markets. Hence, we strongly recommend that LEAs and researchers not solely depend on our analysis as evidence for criminal prosecution. Instead, they should view our findings as tools to aid their investigations, not as direct evidence. Environmental Impact:We conducted all our experiments on a private infrastructure, 100-SXM2-32GB (TDP of 300W), with a carbon efficiency of 0.432 kgCO\({}_{2}\)eq/kWh. The training process for establishing all the baselines in our research took a cumulative 191 hours. Based on estimations using the Machine Learning Impact calculator presented in Lacoste et al. (2019), the total estimated emissions for these experiments amount to 24.62 kgCO\({}_{2}\)eq. 
## 9 Acknowledgement

This research is supported by the Sector Plan Digital Legal Studies of the Dutch Ministry of Education, Culture, and Science, and Bashpole Softwares, Inc. The experiments were made possible using the Data Science Research Infrastructure (DSRI) hosted at Maastricht University.
2306.07575
A Half de Sitter Holography
A long-standing and intriguing question is: does the holographic principle apply to cosmologies like de Sitter spacetime? In this work, we consider a half dS spacetime wherein a timelike boundary encloses the bulk spacetime, presenting a version of de Sitter holography. By analyzing the holographic entanglement entropy in this space and comparing it with that in AdS/CFT, we argue that gravity on a half dS$_{d+1}$ is dual to a highly non-local field theory residing on dS$_d$ boundary. This non-locality induces a breach in the subadditivity of holographic entanglement entropy. Remarkably, this observation can be linked to another argument that time slices in global de Sitter space overestimate the degrees of freedom by redundantly counting the same Hilbert space multiple times.
Taishi Kawamoto, Shan-Ming Ruan, Yu-ki Suzuki, Tadashi Takayanagi
2023-06-13T06:53:47Z
http://arxiv.org/abs/2306.07575v2
# A Half de Sitter Holography

###### Abstract

A long-standing and intriguing question is: does the holographic principle apply to cosmologies like de Sitter spacetime? In this work, we consider a half dS spacetime wherein a timelike boundary encloses the bulk spacetime, presenting a version of de Sitter holography. By analyzing the holographic entanglement entropy in this space and comparing it with that in AdS/CFT, we argue that gravity on a half dS\({}_{d+1}\) is dual to a highly non-local field theory residing on the dS\({}_{d}\) boundary. This non-locality induces a breach in the subadditivity of holographic entanglement entropy. Remarkably, this observation can be linked to another argument that time slices in global de Sitter space overestimate the degrees of freedom by redundantly counting the same Hilbert space multiple times.

## 1 Introduction

In the best-understood realization of the holographic principle, the AdS/CFT correspondence, gravity on a \(d+1\)-dimensional anti-de Sitter space (AdS\({}_{d+1}\)) becomes equivalent to a \(d\)-dimensional conformal field theory (CFT\({}_{d}\)) [3; 4; 5]. Despite the resounding success attained in AdS/CFT, we are still at a nascent stage in the development of the holographic duality pertaining to gravity in de Sitter space. The potential dS holography promises important applications to realistic cosmological spacetimes. Let us commence by exploring the explicit differences between dS holography and AdS/CFT. The \(d+1\)-dimensional de Sitter space (dS\({}_{d+1}\)) can be described in global coordinates as follows (see _e.g.,_ [6; 7] for comprehensive reviews): \[ds^{2}=-dt^{2}+\cosh^{2}t\,d\Omega_{d}^{2}\,, \tag{1}\] whose Penrose diagram is shown in figure 1. Here, we have set the dS radius to unity, and \(d\Omega_{d}^{2}\) represents the metric of the unit \(d\)-dimensional sphere: \[d\Omega_{d}^{2}=d\theta^{2}+\sin^{2}\theta\,d\Omega_{d-1}^{2}\,. \tag{2}\] The constant time slice in global dS\({}_{d+1}\) coordinate is depicted in the left diagram of figure 3. In the case of \(d=2\), we simply write \(d\Omega_{d-1}^{2}=d\phi^{2}\) in this paper.

Figure 1: The Penrose diagram of dS\({}_{d+1}\) bulk spacetime. The conformal time \(T\in[-\frac{\pi}{2},+\frac{\pi}{2}]\) is associated with the global time \(t\) by \(\cosh t=\frac{1}{\cos T}\). We introduce a timelike boundary at \(\theta=\theta_{0}\) which is described by a \(d\)-dimensional dS spacetime. The dual bulk dS\({}_{d+1}\) spacetime is given by the gray shaded region.

As shown in the left panel of figure 1, the conformal boundaries of global dS spacetime are spacelike surfaces, _i.e.,_ \(S^{d}\), located at the future and past infinity \(t=\pm\infty\). The original dS/CFT correspondence postulates that the gravitational dynamics in the dS\({}_{d+1}\) bulk is dual to a Euclidean conformal field theory (CFT) on the sphere S\({}^{d}\) at future infinity \(t\rightarrow\infty\) [6; 8; 9]. This proposal assumes that the quantum state is generated by the Euclidean instanton using the Hartle-Hawking prescription and is then continued to Lorentzian dS at \(t=0\). However, the dual CFT becomes non-unitary and exhibits numerous exotic characteristics. For instance, in the case of dS\({}_{4}\)/CFT\({}_{3}\), the 3D CFT dual to higher-spin gravity on dS\({}_{4}\) is described by the \(SP(N)\) model, which incorporates ghost fields [10; 11]. Similarly, for dS\({}_{3}\)/CFT\({}_{2}\), the 2D CFT dual to Einstein gravity on dS\({}_{3}\) is obtained through an analytical continuation of current algebra or Liouville CFT, with an imaginary central charge [12; 13; 14; 15].
Furthermore, holography for dS\({}_{2}\) has been investigated in [16; 17; 18]. Despite the peculiar holographic properties, studies from the perspective of gravity have also been developed, with a partial list of references including [19; 20; 21; 22; 23].

One of the manifestations of the non-unitary nature inherent in the dS/CFT correspondence is the computation of holographic entanglement entropy.

Figure 2: The left panel shows the geometry of time slices for a global dS\({}_{3}\). In the right panel we keep a half of dS\({}_{3}\) space with a dS\({}_{2}\) boundary (denoted by green circle) at \(\theta=\theta_{0}\).

In the context of the AdS/CFT correspondence, the holographic entanglement entropy \(S_{A}\) for a subsystem \(A\) in the dual CFT can be determined by evaluating the area of the extremal surface \(\Gamma_{A}\) in AdS, which ends on the boundary of the subsystem \(A\) [24; 25; 26]: \[S_{A}=\frac{A(\Gamma_{A})}{4G_{\rm N}}\,, \tag{3}\] where \(G_{\rm N}\) denotes the Newton constant and \(A(\Gamma_{A})\) represents the area of the extremal surface \(\Gamma_{A}\). In principle, this geometric computation can be extended to various spacetimes, including de Sitter space. However, in de Sitter space, the absence of a spacelike extremal surface connecting two distinct points on \({\rm S}^{d}\) at future infinity leads to the holographic entanglement entropy being complex-valued [27; 13; 28]. In this context, the holographic entanglement entropy is computed by using the timelike extremal surface in de Sitter space. Subsequently, in [29; 30], this complex-valued entropy was appropriately interpreted as the pseudo-entropy [31; 32; 33], which generalizes the notion of entanglement entropy to non-Hermitian density matrices (see [34; 35] for closely related ideas). This consideration suggests a connection between the emergent time coordinate in dS/CFT and the imaginary part of the pseudo-entropy. Notably, an intriguing quantum entanglement structure of dS holography has been recently proposed in [36].

Several other approaches to de Sitter holography have been explored. One notable example is the holography for de Sitter space in the static patch, which has recently garnered significant attention and discussions from multiple perspectives [37; 38; 39; 40; 41; 42; 43; 44]. Additionally, another approach to dS holography has been investigated, based on the TTbar deformation in AdS/CFT [45] and the dS/dS duality [46], with studies conducted in [47; 48]. An interesting dS/dS duality setup can also be found in [49]. The quantum information structure in de Sitter has been analyzed by applying the surface/state duality [50]. Notably, it has been observed that the state dual to the \(t=0\) slice is maximally entangled.

In this paper, we propose a novel approach to dS holography that adheres to the standard holographic formalism, wherein gravity in a given bulk space is dual to a non-gravitational theory on its timelike boundary. However, since de Sitter space lacks timelike boundaries, we introduce a procedure to create one. Our proposal involves cutting a de Sitter space in half at \(\theta=\theta_{0}\) by confining the sphere \(S^{d}\) to a semi-sphere. More generally, we can cut the bulk dS space by putting a boundary at \(\theta=\theta_{0}\), as depicted in figures 1 and 2. This resulting spacetime is referred to as a half de Sitter space (or simply half dS).
The boundary of the \(d+1\)-dimensional half de Sitter space corresponds to \(d\)-dimensional global de Sitter spacetime dS\({}_{d}\), which is described by the metric: \[ds^{2}=-dt^{2}+\cosh t^{2}\sin^{2}\theta_{0}d\Omega_{d-1}^{2}\,. \tag{4}\] In this setup, we argue that _gravity on a half dS\({}_{d+1}\) is dual to a field theory without gravity on dS\({}_{d}\)_. We expect that such a field theory exhibits high non-locality due to the finite geometric cut-off in dS\({}_{d+1}\). It is worth noting that this spacetime possesses the SO\((d,1)\) symmetry, which is a subgroup of the original SO\((d+1,1)\) symmetry of dS\({}_{d+1}\). Although our approach shares common features with the TTbar approach [47; 48] and surface/state duality [50], our focus lies specifically on the half dS space, as it contains a timelike boundary. Nevertheless, in principle, our analysis of half dS holography can be extended to full dS geometry by combining two copies of our holographic duality, as demonstrated in the case of gluing AdS/CFT [51]. To probe our holographic proposal, we would investigate the holographic entanglement entropy (3) in dS holography as a fundamental tool. Here we would like to emphasize that we stay with the standard calculation of holographic entanglement entropy [24; 25; 26], where we minimize the area, as opposed to the different prescription of maximizing the area in the version of dS holography discussed in [38]. It is noteworthy that we can also apply the standard AdS/CFT correspondence to study the field theory on dS\({}_{d}\)[52; 53]. In this case, a CFT on dS\({}_{d}\) is dual to gravity in AdS\({}_{d+1}\) whose conformal boundary is given by dS\({}_{d}\). Given the well-established nature of this holographic duality, we will initially focus on this situation. Subsequently, we will delve into the main subject of this paper, _i.e.,_ holography for the half dS. Throughout this paper, we will consider two different prescriptions for both holography in AdS and half dS, which are described below and depend on how we handle the future infinity at \(t\to\infty\). **Case 1: Schwinger-Keldysh prescription without EOW** We apply the Hartle-Hawking prescription to construct the initial state at \(t=0\) using the Euclidean instanton geometry, which takes the form of a \(d+1\) dimensional semi-sphere. Subsequently, we examine its Lorentzian time evolution. The density matrix for this state at time \(t\) is determined by the corresponding geometry dictated by the Schwinger-Keldysh prescription, as depicted in figure 4. For a detailed understanding of the Schwinger-Keldysh prescription in holography and holographic entanglement entropy, we refer readers to [54; 55]. It is important to note that the asymptotic infinity as \(t\to\infty\) is absent in this particular setup. In line with conventional Lorentzian holography in AdS, we anticipate that the gravitational dynamics in half dS\({}_{d+1}\) geometry is dual to a field theory residing on its boundary, namely, dS\({}_{d}\). We expect this field theory to exhibit strong non-locality, primarily due to the fact that the boundary at \(\theta=\theta_{0}\) is not an asymptotic boundary. Unlike in the AdS/CFT correspondence, the metric, in this case, does not exhibit divergent behaviour, indicating that the dS boundary plays the role of a finite cutoff. **Case 2: Final state projection with EOW** On the other hand, if we consider the full geometry of a half dS\({}_{d+1}\), we need to impose a boundary condition at the infinity \(t\rightarrow\infty\). 
If we impose the Dirichlet b.c., then we expect that a dual CFT lives there in addition to the dS\({}_{d}\) boundary. To avoid this complicated situation, we focus on the other case: Neumann boundary condition. Namely, this means that the asymptotic boundary \(t=\infty\) is an end-of-the-world (EOW) brane. In the context of AdS/CFT, the EOW brane is dual to the boundary conformal field theory (BCFT), whose holographic duality is called the AdS/BCFT [56; 57; 58]. In our dS case, we expect that the dual theory is a non-local field theory on dS\({}_{d}\) with a final state projection at future infinity \(t=\infty\).

Figure 3: The left panel shows a sketch of global de Sitter space, which has only space-like boundaries at \(t=\pm\infty\). The right one is a sketch of our holographic setup which is obtained by cutting in half a de Sitter space. Notice that this geometry has both the time-like and space-like boundary.

In the presence of post-selection, a useful quantity is pseudo entropy [31], which is a natural generalization of entanglement entropy such that it depends on two different quantum states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\). This quantity is defined as follows. We introduce the reduced transition matrix \[\tau_{A}=\text{Tr}_{B}\left[\frac{|\psi_{1}\rangle\langle\psi_{2}|}{\langle\psi_{2}|\psi_{1}\rangle}\right]. \tag{5}\] The pseudo entropy is defined by \[S_{A}=\text{Tr}\left[-\tau_{A}\log\tau_{A}\right]. \tag{6}\] Note that this quantity in general takes complex values as \(\tau_{A}\) is not hermitian. Interestingly, the gravity dual of this quantity is given by (3) when \(\Gamma_{A}\) is the minimal surface in a Euclidean time-dependent asymptotically AdS spacetime [31]. We can also regard the area of the extremal surface as a holographic pseudo entropy in Lorentzian AdS in the presence of final state projection [59]. We will assume an extension of this correspondence to de Sitter spaces in this paper. As studies of quantum many-body systems and CFTs suggest [32; 33; 59], the real part of pseudo entropy typically measures the amount of quantum entanglement in the intermediate states between \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\).

This paper is organized as follows. In section 2, we study the AdS/CFT for CFTs on de Sitter space. We calculate the entanglement entropy in both case 1 and case 2 and discuss physical interpretations. In section 3 we consider the holography for a half de Sitter space in case 1. We calculate the holographic entanglement entropy and study its property including the violation of subadditivity. We will discuss its implication on the Hilbert space structure of its dual field theory. In section 4 we study the holographic duality for a half de Sitter space in case 2, taking into account the presence of the EOW brane. In section 5 we summarize our conclusions and discuss future problems. In appendix A, we show explicit calculations of geodesics in de Sitter space.

Figure 4: The geometry which describes the Schwinger-Keldysh contour of a half dS\({}_{d+1}\). The top and bottom regions present the Lorentzian and Euclidean evolution, respectively. This is dual to a field living on the dS\({}_{d}\) boundary which is parametrized by the green surface.

## 2 CFT on de Sitter Space from AdS/CFT

Before we work on the holography of de Sitter spaces, we would like to examine the AdS/CFT correspondence with a CFT living on the de Sitter space as a warm-up exercise.
In this section, we focus on the holographic CFT\({}_{2}\) living on dS\({}_{2}\) spacetime whose metric is defined by \[ds_{\text{dS}_{2}}^{2}=-dt^{2}+\cosh^{2}td\phi^{2}=\frac{1}{ \cos^{2}T}\left(-dT^{2}+d\phi^{2}\right)\,. \tag{1}\] In the following dS metrics, we have chosen the de Sitter radius to be a unit, and use the transformation between the global time \(t\) and conformal time \(T\) given by \(\cosh t=\frac{1}{\cos T}\). We note that the conformal time is compactified due to \(T\in[-\frac{\pi}{2},\frac{\pi}{2}]\), while the global time is not, i.e. \(t\in(-\infty,+\infty)\). For simplicity, we mainly consider AdS\({}_{3}\)/CFT\({}_{2}\) where the CFT lives on dS\({}_{2}\). The holographic dual is described by the global AdS\({}_{3}\), \[ds_{(g)}^{2}=-\cosh^{2}\rho d\tau^{2}+d\rho^{2}+\sinh^{2}\rho d \phi^{2}\,. \tag{2}\] where \(\rho\) is the radial coordinate and \(\tau\) is the global time coordinate of AdS bulk spacetime. Since the extremal surface in AdS\({}_{3}\) is nothing but a geodesic, we can Figure 5: The geometry of a half dS\({}_{d+1}\). Notice that there are both timelike (green surface) and spacelike (purple surface) boundaries. evaluate the area of the extremal surface using the geodesic distance \(D_{12}^{(g)}\) in the global AdS metric between two points \((\rho_{1},\tau_{1},\phi_{1})\) and \((\rho_{2},\tau_{2},\phi_{2})\), as given by \[\cosh D_{12}^{(g)}=\cos(\tau_{1}-\tau_{2})\cosh\rho_{1}\cosh\rho_{2}-\cos(\phi_ {1}-\phi_{2})\sinh\rho_{1}\sinh\rho_{2}\,. \tag{3}\] ### de Sitter and Hyperbolic Slicing of AdS \({}_{3}\) To describe a CFT on dS\({}_{2}\), we employ the de Sitter sliced AdS\({}_{3}\) (refer to the left plot in figure 6): \[ds_{(d)}^{2}=d\eta^{2}+\sinh^{2}\eta(-dt^{2}+\cosh^{2}td\phi^{2}), \tag{4}\] where \((t,\eta)\) is related to the global coordinates \((\tau,\rho)\) via \[\sinh\rho=\cosh t\sinh|\eta|,\ \ \ \ \tan\tau=\tanh\eta\,\sinh t \tag{5}\] Figure 6: Left: Penrose diagram of AdS\({}_{3}\) with de Sitter slicing which is defined in eq. (4). The solid curves denote the constant \(\eta\) surfaces with \(\eta>0\) and the negative ones are described by the dashed curves. Right: Penrose diagram of AdS\({}_{3}\) with hyperbolic slicing as described by the metric (8). The solid and dashed curves present constant \(\mu\) surfaces with positive and negative values, respectively. The coordinate transformation (5) leads to the geodesic length between two points \((t_{1},\eta_{1},\phi_{1})\) and \((t_{2},\eta_{2},\phi_{2})\) in the dS sliced metric: \[\cosh D^{(d)}_{12}=\cosh\eta_{1}\cosh\eta_{2}-\sinh\eta_{1}\sinh\eta_{2}\left( \cosh t_{1}\cosh t_{2}\cos\left(\phi_{2}-\phi_{1}\right)-\sinh t_{1}\sinh t_{2} \right)\,. \tag{6}\] The holographic CFT\({}_{2}\) on dS\({}_{2}\) with a physical metric (1) is living on the conformal boundary of AdS\({}_{3}\) which is defined by \(\eta\to\infty\). We can fix the UV cut-off \(\epsilon\) in the CFT\({}_{2}\) by taking the cut-off surface at \(\eta=\eta_{\infty}\) with \[e^{\eta_{\infty}}=\frac{1}{\tanh\frac{\epsilon}{2}}\approx\frac{2}{\epsilon}\,. \tag{7}\] For a later purpose it is also useful to consider a hyperbolic sliced AdS\({}_{3}\) (refer to the right plot in figure 6), whose metric reads \[ds^{2}_{(h)}=-d\mu^{2}+\cos^{2}\mu(d\xi^{2}+\sinh^{2}\xi d\phi^{2})\,. 
\tag{8}\] The coordinate \((t,\eta)\) is related to that of the global AdS\({}_{3}\)\((\tau,\rho)\) via \[\sinh\rho=\sinh\xi\cos\mu,\ \ \ \ \tan\tau=\cot\mu\,\cosh\xi, \tag{9}\] The geodesic length \(D^{(h)}_{12}\) between two points at \((\mu_{1},\xi_{1},\phi_{1})\) and \((\mu_{2},\xi_{2},\phi_{2})\) can be derived from \[\cosh D^{(h)}_{12}=\cos\mu_{1}\cos\mu_{2}\left(\cosh\xi_{1}\cosh\xi_{2}-\cos( \phi_{1}-\phi_{2})\sinh\xi_{1}\sinh\xi_{2}\right)+\sin\mu_{1}\sin\mu_{2}\,. \tag{10}\] It is also useful to note that the hyperbolic sliced metric and the de Sitter sliced one are related by the following analytical continuation: \[\eta=i\left(\mu-\frac{\pi}{2}\right)\,,\qquad\xi=i\frac{\pi}{2}-t\,. \tag{11}\] ### Case 1:Entanglement entropy in Schwinger-Keldysh prescription In this section, we aim to compute the holographic entanglement entropy [24; 25; 26] for a CFT living on dS\({}_{2}\) spacetime, following the Schwinger-Keldysh prescription (case 1) of the CFT on a time-dependent background. Refer to [53] for calculations of holographic entanglement entropy for CFTs on de Sitter spaces in higher dimensions. Specifically, we assume that the quantum state at \(t=0\) is generated by an Euclidean path integral on a semi-sphere using the Hartle-Hawking prescription, as depicted in figure 7. To define the entanglement entropy \(S_{A}\), we take the subsystem \(A\) as an interval with endpoints \((t_{0},\phi_{1})\) and \((t_{0},\phi_{2})\), as shown in figure 7. To derive the holographic entanglement entropy, we construct the gravity dual by gluing two copies of AdS\({}_{3}\), each truncated at a time \(t=t_{0}\), following the holographic Schwinger-Keldysh prescription [54; 55]. Hence, there is no need to consider the treatment of the conformal boundary of the dS\({}_{2}\) at \(t=\infty\). Utilizing the geodesic length given in eq. (6), we can obtain the holographic entanglement entropy, _viz,_ \[S_{A}^{\rm con}=\frac{c}{3}\log\left[\frac{2\cosh t_{0}\sin\frac{|\phi_{1}-\phi_ {2}|}{2}}{\epsilon}\right]\,, \tag{12}\] where we adopt the cutoff surface specified by eq. (7). Notably, this entanglement entropy exhibits a linear growth as \(S_{A}\simeq\frac{c}{3}t_{0}\) at late times. We attribute this contribution to a connected geodesic that connects two boundary points. Thus, in the Schwinger-Keldysh setup, this result (12) provides the final expression for the holographic entanglement entropy. Figure 7: The setup of computing entanglement entropy of CFT\({}_{2}\) on dS\({}_{2}\) via AdS\({}_{3}\)/CFT\({}_{2}\). We replace the region \(t<0\) by the Euclidean instanton namely the half sphere, following Hartle-Hawking prescription (blue shaded region). In the case of Schwinger-Keldysh setup, we restrict the Lorentzian path-integral to the region \(0\leq t\leq t_{0}\) and glue the same path-integral at \(t=t_{0}\). ### Case 2: Holographic pseudo entropy with a final state projection We now consider the setup in which there is a final state projection with a boundary state \(|\mathrm{B}\rangle\) at future infinity \(t=\infty\), and interpret the area of the extremal surface, _i.e.,_\(S_{A}\) as the holographic pseudo entropy [31]. To construct the gravity dual of the holographic CFT\({}_{2}\) with a projection, we insert an end-of-the-world brane (EOW) brane on which we impose Neumann boundary condition, _i.e.,_ \[K_{ij}-Kh_{ij}+Th_{ij}=0\,,\qquad\text{with}\qquad K=2T\,, \tag{13}\] where \(K_{ij}\) is the extrinsic curvature of the brane and the constant \(T\) is the brane tension. 
Since the bulk is AdS\({}_{3}\) spacetime, one can show that the solutions of eq. (13) have to be maximally symmetric spacetime as well. More explicitly, one can obtain Figure 8: The holographic dual of CFT\({}_{2}\) living on dS\({}_{2}\) with a projection is shown as the gray shaded region. The angular direction of AdS\({}_{3}\) is not shown in this figure. The solid black curve represents the two dimensional dS spacetime where CFT\({}_{2}\) lives. Left: The projection is performed at the infinity time \(t_{\text{\tiny P}}\to\infty\) with a dS\({}_{2}\) brane (red curve) parametrized by a constant \(\eta_{*}\). Right: The projection at a finite time \(t_{\text{\tiny P}}\). (see _e.g.,_[51]) \[R_{ij}[h]=h_{ij}\left(\varepsilon\,T^{2}-1\right)\,,\qquad\text{with}\quad R[h]=2( \varepsilon\,T^{2}-1)\,, \tag{14}\] for a timelike brane with \(\varepsilon=+1\) and a spacelike brane with \(\varepsilon=-1\), respectively. In order words, the intrinsic geometry of the brane is nothing but two dimensional Minkowski (\(|T|=1\)), de Sitter (\(|T|>1\)), AdS (\(|T|<1\)) spacetime or hyperbolic space (\(\varepsilon=-1\)). The de Sitter EOW brane in AdS/BCFT was introduced in [60] and was applied to the calculation of holographic pseudo entropy in [59] in the context of black hole final state proposal [61]. With taking the intersection of the brane and the dS\({}_{2}\) boundary at \(t\to\infty\), we can find that there are two types of solutions of the brane: a timelike dS brane and a spacelike hyperbolic brane. From the dS slicing and hyperbolic slicing of AdS\({}_{3}\), one can easily read the corresponding brane profiles in global AdS\({}_{3}\) (as shown in figure 6), _i.e.,_ \[\begin{split}\text{dS}_{2}\text{ brane:}\qquad\cosh\rho\cos \tau&=\pm\cosh\eta_{*}\,,\\ \text{H}_{2}\text{ brane:}\qquad\cosh\rho\cos\tau&= \sin\mu_{*}\,,\end{split} \tag{15}\] where \(\eta_{*},\mu_{*}\) is a constant along the brane and is determined by the tension which is given by the boundary entropy of the boundary state. Furthermore, the parameter \(\eta_{*}\) and \(\mu_{*}\) should be determined by the boundary condition at \(t=\infty\). We begin with the case with a dS\({}_{2}\) EOW brane. Let us first note that the trace of the extrinsic curvature of EOW brane can be expressed as follows: \[K\big{|}_{\eta=\eta_{*}}=-\frac{\partial_{\eta}\sqrt{-\gamma}}{\sqrt{-\gamma}} \Big{|}_{\eta=\eta_{*}}=-\frac{2\cosh\eta_{*}}{\sinh\eta_{*}}\,, \tag{16}\] which indicates that the tension and curvature of the EOW brane are given by \(T=-\coth\eta_{*}<0\) and \(R=\frac{2}{\sinh^{2}\eta_{*}}\), respectively. The corresponding boundary entropy associated with the boundary state \(|B\rangle\) projected at \(t\to\infty\) can be calculated as [59] \[S_{\rm bdy}=\frac{c}{6}\log\sqrt{\frac{1-\mathcal{T}}{1+\mathcal{T}}}=\frac{c} {6}\log\sqrt{\frac{|\mathcal{T}|-1}{|\mathcal{T}|+1}}-i\,\frac{\pi c}{12}=- \frac{c}{6}\eta_{*}-i\,\frac{\pi c}{12}\,. \tag{17}\] which decodes the information of the boundary condition at \(t\to\infty\). Due to the appearance of the EOW brane in the bulk, we must account for the disconnected geodesic that connects a endpoint \((t_{0},\eta_{\infty},\phi_{i})\) of boundary interval \(A\) to a point \((t_{*},\eta_{*},\phi_{*})\) on the de Sitter EOW brane. Thanks to the rotation invariance, it is straightforward to get the extremal surface is given by \(\phi_{*}=\phi_{i}\). Furthermore, we can fix the coordinate values of \(t_{*}\) on the brane by ensuring that the geodesic length remains stationary. 
For the de Sitter EOW brane, the disconnected contribution for pseudo entropy can be derived from eq. (6) as follows: \[S_{A}^{\rm dis}=\frac{c}{3}(\eta_{\infty}-\eta_{*})=\frac{c}{3}\log\frac{2}{ \epsilon}-\frac{c}{3}\eta_{*} \tag{18}\] where the constant part is the real part of the boundary entropy defined in eq. (17). If we compare the connected contribution \(S_{A}^{\rm con}\) and the disconnected part \(S_{A}^{\rm dis}\), we find that in the early time \(S_{A}^{\rm con}\) is favored and holographic pseudo entropy of the interval \(A\) grows linearly in time. However, \(S_{A}^{d{\rm dis}}\) is always favored in the late time and holographic pseudo entropy becomes a constant. This phase transition is plotted in figure 9. At \(\eta_{*}=0\), this phase transition happens when \[\cosh t_{0}\cdot\sin\frac{|\phi_{1}-\phi_{2}|}{2}=1. \tag{19}\] This describes the null geodesic in dS\({}_{2}\) as sketched in figure 14, which is the same as (10). For the hyperbolic EOW brane, the disconnected PE is estimated from (6) as follows: \[S_{A}^{\rm dis}=\frac{c}{3}\log\frac{2}{\epsilon}+i\frac{c}{3}\left(\frac{\pi }{2}-\mu_{*}\right), \tag{20}\] where \(\mu_{*}\) takes the range \(-\frac{\pi}{2}\leq\mu_{*}\leq\frac{\pi}{2}\). Before we go on, we would like to mention a possibility that this holographic calculation of pseudo entropy may actually be interpreted as genuine entanglement entropy. This is because the global AdS\({}_{3}\) has the periodicity in the time direction \(\tau\). This may imply the periodicity for the time evolution by \(\pi\) of the boundary state \(e^{-i\pi H}|\mathrm{B}\rangle\propto|\mathrm{B}\rangle\). Even though this is suggested by the classical geometry, we are not completely sure if this is true at the quantum level which is dual to the full dynamics of the CFT. We leave this issue for a future problem. ### Holographic pseudo entropy under final state projection at \(t=t_{\mathrm{P}}\) For later purpose, it is useful to consider a CFT on dS\({}_{2}\) with a final state projection at a finite time \(t=t_{\mathrm{P}}\). In the AdS\({}_{3}\)/BCFT\({}_{2}\), this is dual to inserting the EOW brane earlier as depicted in the right panel of figure 8. It is obvious that \(S_{A}\) vanishes at \(t=t_{\mathrm{P}}\). Therefore the time evolution of \(S_{A}\) looks like a Page curve as sketched in the right panel of figure 10. Since the brane profile is not completely covered by the dS slicing coordinates, it is more convenient to work on the global coordinates (2). We denote the points on the dS boundary as \((\tau_{0},\rho_{\infty},\phi_{0})\). The dS boundary is thus given by \[\cosh\rho_{\infty}\cos\tau_{0}=\mathrm{constant}=\cosh\eta_{\infty}\,, \tag{21}\] where \(\eta_{\infty}\) determines the position of the dS boundary. Corresponding, the point on the brane is referred to as \((\tau_{*},\rho_{*},\phi_{*})\). Since the brane is still represents a two-dimensional dS spacetime, we can parametrize the brane profile as \[\cosh\rho_{*}\cos\left(\tau_{*}+\tau_{\mathrm{shift}}\right)=\mathrm{constant} =\cosh\eta_{*}\,, \tag{22}\] where \(\eta_{*}\) is related to the tension of the brane. 
Since we parametrize the projection time is \(t=t_{\mathrm{P}}\) or \(T=T_{\mathrm{P}}\) in terms of the time coordinate on the dS boundary, we can find that this fix the brane parameter \(\tau_{\mathrm{shift}}\) as \[\tau_{\mathrm{shift}}=\arccos\left(\frac{\cosh\eta_{*}\cos T_{0}}{\cosh\eta_{ \infty}}\right)-\tau_{\mathrm{P}}=\arccos\left(\frac{\cosh\eta_{*}\cos T_{0}} {\cosh\eta_{\infty}}\right)-\arctan\left(\tanh\eta_{\infty}\tan T_{\mathrm{P} }\right)\,, \tag{23}\] with using the coordinate transformations for the points on dS boundary as \[\begin{cases}\cosh\rho=\sqrt{\cosh^{2}\eta+\sinh^{2}\eta\tan^{2}T}\,,\\ \tan\tau=\tanh\eta\tan T\,.\end{cases} \tag{24}\] On the other hand, the geodesic distance \(D_{0b}\) from the dS boundary to dS brane is derived as \[\cosh D_{0b}=\cos(\tau_{0}-\tau_{*})\cosh\rho_{\infty}\cosh\rho_{*}-\cos(\phi_{0} -\phi_{*})\sinh\rho_{\infty}\sinh\rho_{*}. \tag{25}\] Extremization over the spatial direction simply results in \(\phi_{0}=\phi_{*}\), which is expected by symmetries. Furthermore, we need to find that maximal value over the timelike direction by solving \[\frac{\partial D_{0b}}{\partial\tau_{*}}=0\,,\quad\text{with}\quad\rho_{*}= \rho_{*}(\tau_{*})\,, \tag{26}\] which reduces to \[\cosh\eta_{*}\sinh\rho_{\infty}\tan\left(\tau_{*}+\tau_{\text{shift}}\right)- \frac{\sin\left(\tau_{*}+\tau_{\text{shift}}\right)}{\cos\left(\tau_{0}+\tau_{ \text{shift}}\right)}\cosh\rho_{\infty}\sqrt{\cosh^{2}\eta_{*}-\cos^{2}\left( \tau_{0}+\tau_{\text{shift}}\right)}=0\,. \tag{27}\] The advantage of working on global coordinates is obvious in the above extremization equation. After straightforward algebras, one can derive the solutions as \[\begin{split}\cos\left(\tau_{*}+\tau_{\text{shift}}\right)& =\cosh\eta_{*}\sqrt{\frac{\cosh^{2}\rho_{\infty}\sin^{2}(\tau_{0}+ \tau_{\text{shift}})-\sinh^{2}\rho_{\infty}}{\cosh^{2}\rho_{\infty}\sin^{2}( \tau_{0}+\tau_{\text{shift}})-\sinh^{2}\rho_{\infty}\cosh^{2}\eta_{*}}}\,,\\ \cosh\rho_{*}&=\sqrt{\frac{\cosh^{2}\rho_{\infty} \sin^{2}(\tau_{0}+\tau_{\text{shift}})-\sinh^{2}\rho_{\infty}\cosh^{2}\eta_{*} }{\cosh^{2}\rho_{\infty}\sin^{2}(\tau_{0}+\tau_{\text{shift}})-\sinh^{2}\rho_{ \infty}}}\,.\end{split} \tag{28}\] Substituting the above solutions to the expression for the distance of geodesics, we can obtain the distance of the extremal geodesic between the dS boundary and brane, _i.e.,_ the area of the disconnected HRT surface. We are interested in the case where the dS boundary is taken as the cut-off surface located at \(\eta_{\infty}\sim\frac{2}{\epsilon}\). In this limit, we can rewrite the global coordinates on the boundary as \[\tau_{0}=T_{0}\,,\qquad\rho_{\infty}\approx\log\left(\frac{2}{ \epsilon\cos T_{0}}\right)\,,\qquad\tau_{\text{shift}}=\frac{\pi}{2}-T_{\text {\tiny P}}\,. \tag{29}\] The extremal point on the brane reduces to \[\begin{split}\sin\left(T_{\text{\tiny P}}-\tau_{*}\right)& =\cosh\eta_{*}\sqrt{\frac{\sin^{2}(T_{\text{\tiny P}}-T_{0})}{ \cosh^{2}\eta_{*}-\cos^{2}(T_{\text{\tiny P}}-T_{0})}}\,,\\ \cosh\rho_{*}&=\sqrt{1+\frac{\sinh^{2}\eta_{*}}{ \sin^{2}(T_{\text{\tiny P}}-T_{0})}}\,,\end{split} \tag{30}\] and the corresponding geodesic distance is given by \[D_{0b}\approx\log\left(\frac{2}{\epsilon}\frac{\cos(T_{0}+\tau_{\rm shift})}{\cos T _{0}}e^{-\eta_{*}}\right)\,. 
\tag{31}\] Thus, the holographic entanglement entropy from the disconnected HRT surface is recast as \[S_{A}^{\rm dis}=\frac{2D_{0b}}{4G_{\rm N}}=\frac{c}{3}\log\left(\frac{2}{\epsilon}\frac{\sin\left(T_{\rm P}-T_{0}\right)}{\cos T_{0}}\right)-\frac{c}{3}\eta_{*}\,. \tag{32}\] In terms of the de Sitter time \(t\), we finally find the following expression: \[S_{A}^{\rm dis}=\frac{c}{3}\log\left[\frac{2}{\epsilon}\cdot\frac{\sinh t_{\rm P}-\sinh t_{0}}{\cosh t_{\rm P}}\right]-\frac{c}{3}\eta_{*}, \tag{33}\] where we have used \(\sinh t_{\rm P}=\tan T_{\rm P}\). The holographic pseudo entropy in the presence of final state projection is given by the smaller one among the two: \(S_{A}=\min[S_{A}^{\rm con},S_{A}^{\rm dis}]\).

Figure 10: Time evolution of holographic entanglement entropy (denoted by the black curve) with respect to the global time \(t_{0}\). The projection time is chosen as \(t_{\rm P}=6\) with \(\eta_{*}=3\). We set \(\phi_{2}-\phi_{1}=\frac{\pi}{5},\epsilon=\frac{1}{100}\) for this numerical plot.

### Entanglement/Pseudo entropy of CFT\({}_{2}\) on dS\({}_{2}\)

It is intriguing to compare the previous holographic results of entanglement entropy and pseudo entropy with those derived in 2d CFT. Indeed, as in the general replica method for 2d CFTs [62; 63], we can easily compute the two-point function of twist operators of a local CFT on the dS\({}_{2}\) by considering the Weyl scaling: \[\langle\sigma(t_{0},\phi_{1})\overline{\sigma}(t_{0},\phi_{2})\rangle_{\rm dS_{2}}=(\cos T_{0})^{2\Delta_{n}}\langle\sigma(T_{0},\phi_{1})\overline{\sigma}(T_{0},\phi_{2})\rangle_{\rm cylinder}\,. \tag{34}\] Here we have defined the conformal coordinates \((T,\phi)\) by \[ds^{2}_{\rm dS_{2}}=-dt^{2}+\cosh^{2}td\phi^{2}=\frac{1}{\cos^{2}T}\left(-dT^{2}+d\phi^{2}\right) \tag{35}\] with the transformation between the global time and conformal time given by \(\cosh t=\frac{1}{\cos T}\). The Euclidean part of the Hartle-Hawking state is described by a half sphere whose metric reads \[ds^{2}_{\rm HH}=dt^{2}_{\rm E}+\cos^{2}t_{\rm E}d\phi^{2}=\frac{1}{\cosh^{2}T_{\rm E}}\left(dT^{2}_{\rm E}+d\phi^{2}\right)\,, \tag{36}\] where the Euclidean conformal time \(T_{\rm E}\) is defined by \(\cos t_{\rm E}=\frac{1}{\cosh T_{\rm E}}\) with \(-\frac{\pi}{2}\leq t_{\rm E}\leq 0,-\infty\leq T_{\rm E}\leq 0\).

#### Case 1: Schwinger-Keldysh description without EOW

First of all, it is known that the entanglement entropy for an interval in the cylinder is given by \[S^{\rm cylinder}_{A}=\frac{c}{3}\log\left(\frac{2}{\epsilon}\sin\frac{|\phi_{1}-\phi_{2}|}{2}\right), \tag{37}\] where we have used the fact that the periodicity of the cylinder is \(2\pi\). Noting the contribution from the Weyl scaling, we can obtain \[S^{\rm dS_{2}}_{A}=\frac{c}{3}\log\cosh t_{0}+\frac{c}{3}\log\left(\frac{2}{\epsilon}\sin\frac{|\phi_{1}-\phi_{2}|}{2}\right), \tag{38}\] which reproduces the holographic result shown in eq. (12).

Figure 11: The conformal maps used for computing the pseudo entropy. The projection is performed at a fixed time \(t_{\rm p}\) which is indicated by the red circle.

#### Case 2: Final projection at \(t=t_{\rm p}\)

To compute the pseudo entropy of CFT\({}_{2}\) living on dS\({}_{2}\), we follow the recipe illustrated in figure 11.
Let us begin with the following two-point function and its Weyl scaling: \[\begin{split}&\left\langle{\rm B}\right|e^{iH(T_{\rm p}-T)}\sigma(T_{0},\phi_{1})\overline{\sigma}(T_{0},\phi_{2})\left|{\rm HH}\right\rangle_{\rm dS _{2}}\\ =&(\cos T_{0})^{2\Delta_{n}}\left\langle{\rm B} \right|e^{-iH(T_{\rm P}-T)}\sigma(T_{0},\phi_{1})\overline{\sigma}(T_{0},\phi _{2})\left|0\right\rangle_{\rm cylinder}\,,\end{split} \tag{39}\] where \(\left|{\rm B}\right\rangle\) is the boundary state (Cardy state) in the boundary conformal field theory (BCFT) [64] and \(\left|{\rm HH}\right\rangle\) denotes the Hartle-Hawking state of dS\({}_{2}\). We can first focus on the transition matrix, _i.e.,_ \[\left\langle{\rm B}\right|e^{-iH(T_{\rm P}-T)}\sigma(T_{0},\phi_{1})\overline {\sigma}(T_{0},\phi_{2})\left|0\right\rangle_{\rm cylinder}\,, \tag{40}\] with ignoring the Weyl factor. The corresponding cylinder for evaluating the transition matrix is composed of the infinitely long Euclidean cylinder \(-\infty\leq T_{\rm E}\leq 0\) and also a finite Lorentzian cylinder with \(0\leq T\leq T_{\rm P}\). By Wick rotation, we consider the Euclidean cylinder \(-\infty\leq T_{\rm E}\leq T_{\rm P}^{E}=iT_{\rm P}\). With denoting the complex coordinate for the cylinder as \[w=T_{\rm E}+i\phi\,, \tag{41}\] we can map the Euclidean cylinder to the upper half plane. The explicit conformal map is given by \[\frac{e^{w}}{e^{T_{\rm P}^{E}}}=\frac{z-i}{z+i}\Leftrightarrow z=f(w)=-i\frac {e^{w-T_{\rm P}^{E}}+1}{e^{w-T_{\rm P}^{E}}-1}\,. \tag{42}\] See the figure 11 for the illustration. By assuming the mirror trick and the factorization, it is straightforward to evaluate the transition (40). Similar to the connected and disconnected geodesics, one can also obtain two distinct contributions, _i.e.,_ 1. Connected part: \[S_{A}^{\rm con,cyl}=\frac{c}{6}\log\frac{|f(w_{1})-f(w_{2})|^{2}}{\epsilon^{2 }|f^{\prime}(w_{1})||f^{\prime}(w_{2})|}.\] (43) After substituting \[w_{1}=i(T_{0}+\phi_{1}),w_{1}^{*}=i(T_{0}-\phi_{1}),w_{2}=i(T_{0}+\phi_{2}), w_{2}^{*}=i(T_{0}-\phi_{2})\,,\] (44) we obtain the same formula (38) which it was derived in holographic spacetime for Case 1. 2. Disconnected part: \[S_{A}^{\rm dis,cyl}=\frac{c}{6}\log\frac{|f(w_{1})-\bar{f}(\bar{w}_{1})||f(w_{2})- \bar{f}(\bar{w}_{2})|}{\epsilon^{2}|f^{\prime}(w_{1})||f^{\prime}(w_{2})|}+2S_{ \rm bdy}.\] (2.45) Again, by substituting eq. (2.44), we can derive the contribution from the disconnected part, _viz,_ \[S_{A}^{\rm dis,cyl}=\frac{c}{6}\log\frac{-4}{\epsilon^{2}}\sin^{2}\left(T_{p}- T_{0}\right)+2S_{\rm bdy}=\frac{c}{3}\log\left(\frac{2}{\epsilon}\sin\left(T_{p}-T_{0 }\right)\right)+\frac{\pi i}{12}c+2S_{\rm bdy}\,.\] (2.46) Finally, we need to consider the extra contribution from Weyl scaling and rewrite the final answer as \[S_{A}^{\rm dis,dS}=\frac{c}{3}\log\left(\frac{2}{\epsilon}\frac{\sin\left(T_{p }-T_{0}\right)}{\cos T_{0}}\right)+\frac{\pi i}{12}c+2S_{\rm bdy}\,.\] (2.47) Combining the two distinct results, we can also find that the pseudo entropy present the same behavior as shown in figure 10. Especially, when the post-selection is fixed at the future infinity, _i.e.,_\(T_{p}=\frac{\pi}{2}\), the disconnected part reduces to a constant \[S_{A}^{\rm dis,dS}=\frac{c}{3}\log\left(\frac{2}{\epsilon}\right)+\frac{\pi i }{12}c+2S_{\rm bdy}\,,\] (2.48) As a summary, we find that pseudo entropy derived from boundary 2d CFT agree with the holographic results for \(S_{A}^{dis}\) (2.18) and (2.33) via the prescription (2.17). 
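As a quick numerical cross-check of this agreement, the sketch below compares the connected contribution (12) with the disconnected contribution (18) and locates the time at which they exchange dominance; for \(\eta_{*}=0\) the crossing reproduces the null-geodesic condition (19). The values of \(c\), \(\epsilon\), \(\eta_{*}\) and the interval size are illustrative choices, not parameters taken from the paper.

```python
# Numerical sketch (illustrative values): find the time at which the connected
# entropy (12)/(38) and the disconnected pseudo entropy (18) exchange dominance,
# and compare with the condition cosh(t0) sin(dphi/2) = exp(-eta_star).
import numpy as np

c, eps = 1.0, 1e-3          # central charge and UV cutoff (assumed values)
eta_star = 0.0              # brane parameter; eta_star = 0 reproduces eq. (19)
dphi = np.pi / 5            # interval size phi_2 - phi_1 (assumed)

def S_con(t0):
    return (c / 3) * np.log(2 * np.cosh(t0) * np.sin(dphi / 2) / eps)

def S_dis():
    return (c / 3) * np.log(2 / eps) - (c / 3) * eta_star

ts = np.linspace(0.0, 6.0, 60001)
S = np.minimum(S_con(ts), S_dis())                 # holographic answer: the smaller one
t_transition = ts[np.argmax(S_con(ts) >= S_dis())]
t_pred = np.arccosh(np.exp(-eta_star) / np.sin(dphi / 2))
print(f"numerical transition t ~ {t_transition:.4f}, analytic prediction {t_pred:.4f}")
```

The same comparison, with the constant (18) replaced by the finite-time expression (33), underlies the Page-curve-like behaviour plotted in figure 10.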
### Holographic entanglement in higher dimensions In this section, we study the holographic entanglement (pseudo) entropy in higher dimensional setups, which can be found from the area of extremal surfaces in the \((d+1)\)-dim AdS space. The de Sitter sliced metric is given by \[ds^{2}=d\eta^{2}+\sinh^{2}\eta(-dt^{2}+\cosh^{2}t(d\phi^{2}+\sin^{2}\phi d \Omega_{d-2})). \tag{2.49}\] **Case 1: Schwinger-Keldysh without EOW** First let us evaluate the connected entropy \(S_{A}^{con}\). We consider the global AdS coordinates \[ds^{2}=d\rho^{2}-\cosh^{2}\rho d\tau^{2}+\sinh^{2}\rho(d\phi^{2}+\sin^{2}\phi d \Omega_{d-2}^{2}) \tag{2.50}\] and take the subsystem \(A\) as \[0\leq\phi\leq\phi_{0},\quad\tau=\tau_{0},\quad\rho=\rho_{\infty}. \tag{2.51}\] In this coordinate, the extremal surface is labeled by \(\phi=\phi(\rho)\). The holographic entanglement entropy is the area of this surface divided by \(4G_{N}\) \[S_{A}^{\rm con}=\frac{\text{Vol}(\text{S}^{d-2})}{4G_{N}}\int d\rho\sqrt{1+\sinh^{ 2}\rho\phi^{{}^{\prime}2}}(\sinh\rho\sin\phi)^{d-2}, \tag{52}\] where we set \(\phi^{\prime}=\frac{\partial\phi}{\partial\rho}\). By taking the variation, we obtain the Euler-Lagrange equation \[\phi^{\prime\prime}-\frac{d-2}{\tan\phi\sinh^{2}\rho}+\frac{d\phi^{\prime}}{ \tanh\rho}-\frac{(d-2)\phi^{{}^{\prime}2}}{\tan\phi}+(d-1)\sinh\rho\cosh\rho \phi^{{}^{\prime}3}=0. \tag{53}\] We find a suitable solution \(\tanh\rho\cos\phi=C,\) where the \(C\) is integration constant. We impose the boundary condition (51) and obtain the solution \[\tanh\rho\cos\phi=\tanh\rho_{\infty}\cos\phi_{0}. \tag{54}\] Then, the holographic entanglement entropy reads \[S_{A}^{\rm con}=\frac{\text{Vol}(\text{S}^{d-2})}{4G_{N}}\int_{1}^{\frac{\cosh \rho_{\infty}}{\cosh\rho_{\rm min}}}du(u^{2}-1)^{\frac{d-3}{2}}\,, \tag{55}\] where \(\rho_{\rm min}\) is defined by \(\tanh\rho_{\rm min}=\tanh\rho_{\infty}\cos\phi_{0}\). In particular, \(S_{A}^{\rm con}\) for \(d=3\) reduces to \[\begin{split} S_{A}^{\rm con}=\frac{\pi}{2G_{N}}\left(\frac{\cosh \rho_{\infty}}{\cosh\rho_{\rm min}}-1\right)&=\frac{\pi}{2G_{N}} \left(\sqrt{\cosh^{2}t_{0}\sin^{2}\phi_{0}\sinh^{2}\eta_{\infty}+1}-1\right) \\ &\approx\frac{\pi}{2G_{N}}\left(\frac{\cosh t_{0}\sin\phi_{0}}{ \epsilon}-1\right)\,,\end{split} \tag{56}\] We plot this as a function of \(t_{0}\) and \(\phi_{0}\) in figure 12. Figure 12: Holographic entanglement entropy \(S_{A}^{\rm con}\) (56) and \(S_{A}^{\rm dis}\) (61) from Schwinger-Keldysh prescription without EOW brane. We choose \(d=3,\phi_{0}=\frac{\pi}{2},\eta_{\infty}=3\) and \(\eta_{*}=1\)for both plots. #### Case 2: Final projection at \(t=\infty\) Next, let us evaluate the disconnected extremal surfaces stretching between the dS boundary and the dS EOW brane, which contributes to the holographic pseudo entropy. Particularly, we consider the extremal surface connecting the two points between \((\eta_{\infty},t_{0},\phi_{0},\Omega_{0})\) and \((\eta_{*},t_{*},\phi_{*},\Omega_{0})\) and then maximize the area by taking variation with respect to \(t_{*}\). In the AdS\({}_{3}\) case, the extremal surface satisfies \(t=\)const and \(\phi=\)const. However, in higher-dimensional cases this is no more true. Suppose that \(\phi_{*}=\frac{\pi}{2}\) along the extremal surface1. Then, its area is given by Footnote 1: This can be justified only in the \(\phi=\frac{\pi}{2}\) case since we have a \(\mathbf{Z}_{2}\) symmetry \(\theta\rightarrow\frac{\pi}{2}-\phi\). In general the extremal surface does depend on \((\phi,t,\eta)\). 
\[S_{A}^{\rm dis}=\frac{\text{Vol}(\text{S}^{d-2})}{4G_{N}}\int d\eta\sqrt{1- \sinh^{2}\eta t^{{}^{\prime}2}}(\sinh\eta\cosh t)^{d-2}. \tag{57}\] The Euler-Lagrange equation reads \[\frac{(d-2)\tanh t}{\sinh^{3}\eta}+\frac{dt^{\prime}}{\tanh\eta\sinh\eta}- \frac{(d-2)\tanh tt^{{}^{\prime}2}}{\sinh\eta}-(d-1)\cosh\eta t^{{}^{\prime}3 }+\frac{t^{\prime\prime}}{\sinh\eta}=0. \tag{58}\] A particular solution, which is the one at \(\tau=\)const. in the global AdS, reads \[\tanh\eta\sinh t=\tanh\eta_{\infty}\sinh t_{0}=\tanh\eta_{*}\sinh t_{*}. \tag{59}\] We expect more general solutions and we need to maximize the areas of such solutions with respect to the end points on the EOW brane. However, this is highly complicated and we will focus on the above simple solution, which gives the minimal value of \(\text{S}_{A}^{\rm dis}\). By substituting this into the functional we obtain \[S_{A}^{\rm dis} = \frac{\text{Vol}(\text{S}^{d-2})}{4G_{N}}\int d\eta\sqrt{1-\sinh ^{2}\eta t^{{}^{\prime}2}}(\sinh\eta\cosh t)^{d-2} \tag{60}\] \[= \frac{\text{Vol}(\text{S}^{d-2})}{4G_{N}}\int_{\sqrt{1+\cosh^{2} t_{*}\sinh^{2}\eta_{*}}}^{\sqrt{1+\cosh^{2}t_{0}\sinh^{2}\eta_{*}}}dv(v^{2}-1)^{ \frac{d-3}{2}}.\] Especially, in the \(d=3\) case we find \[S_{A}^{\rm dis}=\frac{\pi\sqrt{\tanh^{2}\eta_{\infty}\sinh^{2}t_{0}+1}}{2G_{N }}(\cosh\eta_{\infty}-\cosh\eta_{*})\simeq\frac{\pi}{2G_{N}}\cosh t_{0}\left( \frac{1}{\epsilon}-\cosh\eta_{*}\right)\,, \tag{61}\] The results are plotted in the figure 12. Unlike the AdS\({}_{3}\) case, both \(S_{A}^{\rm con}\) and \(S_{A}^{\rm dis}\) diverge exponentially as a function of \(t_{0}\). Comparing two expressions in eqs. (56) and (61), we always have \[S_{A}^{\rm dis}\leq S_{A}^{\rm con}\,, \tag{62}\] due to \(\cosh X\geq 1\). #### More on Case 2: Final projection at \(t=t_{\rm p}\) Next, we consider the final projection at finite time. For this, we shift the projection brane along the global time as in the section 2.4. If we shift the brane backward in \(\tau_{\rm shift}\), the projection brane is defined by \(\cosh\rho\cos(\tau+\tau_{\rm shift})=\cosh\eta_{c}\), where \(\eta_{c}\) is related with the tension of the dS brane. The special solution to the extremal surface condition is already obtained in the previous section as (59) for \(\phi_{0}=\frac{\pi}{2}\). Therefore the intersection of the extremal surface and the projection brane defines the end point of the surface \(\tanh\eta_{*}\sinh t_{*}=\tanh\eta_{\infty}\sinh t_{0}\) and \(\cosh\rho_{*}\cos(\tau_{*}+\tau_{\rm shift})=\cosh\eta_{c}\). After short algebra we obtain \[\cosh\eta_{*}=\frac{\cosh\eta_{c}}{\cos\tau_{\rm shift}-\tanh\eta_{\infty} \sinh t_{0}\sin\tau_{\rm shift}}, \tag{63}\] which gives for \(d=3\) \[S_{A}^{\rm dis}=\frac{\pi\sqrt{\tanh^{2}\eta_{\infty}\sinh^{2}t_{0}+1}}{2G_{N}} \left(\cosh\eta_{\infty}-\frac{\cosh\eta_{c}}{\cos\tau_{\rm shift}-\tanh\eta_{ \infty}\sinh t_{0}\sin\tau_{\rm shift}}\right)\,. \tag{2.64}\] The figure 13 shows the plots. As in the AdS\({}_{3}\) case, the disconnected entropy reaches zero at the projection time \(t_{\rm p}\). ## 3 Holography for a half dS without EOW brane (Case 1) Now we would like to move on to the our main target: holography with a positive cosmological constant. 
In order to interpret the holography in de Sitter space as in the standard framework where the bulk gravity is dual to a quantum system on its time-like boundary, we focus on a half of de Sitter space defined by restricting the global dS\({}_{d+1}\), given by the metric (1) and (2), to the region \[0\leq\theta\leq\theta_{0}, \tag{3.1}\] as depicted in figure 2 and figure 3. The standard idea of the holographic principle predicts that the bulk gravity on the half dS\({}_{d+1}\) (3.1) is dual to a certain quantum system on its boundary, namely, dS\({}_{d}\) at \(\theta=\theta_{0}\), given by the metric (4). We assume the range \(0\leq\theta_{0}\leq\frac{\pi}{2}\). If we choose \(\theta_{0}=\frac{\pi}{2}\) in particular, it is exactly a half of the original dS. Below we will study how the holography for a half dS looks by analyzing the holographic entanglement entropy. For simplicity we will mainly focus on the three-dimensional case, i.e., dS\({}_{3}\). In this case, the holographic entanglement entropy \(S_{A}\), defined for an interval \(A\), can be computed from the geodesic length which connects the two end points of \(A\). This consideration also suggests the range \(0\leq\theta_{0}\leq\frac{\pi}{2}\). This is because the geodesics which compute the holographic entanglement entropy are not included in the half dS if \(\theta_{0}\geq\frac{\pi}{2}\), even if the subsystem is very small.

### Geodesic Length in dS\({}_{3}\)

To prepare for our analysis of holographic entanglement entropy \(S_{A}\), let us see the behavior of the geodesics in a dS\({}_{3}\) whose metric reads \[ds^{2}=-dt^{2}+\cosh^{2}t(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{3.2}\] If we choose two points \(P_{1}\) and \(P_{2}\) on dS\({}_{3}\): \[P_{1}=(t_{1},\theta_{1},\phi_{1})\,,\qquad P_{2}=(t_{2},\theta_{2},\phi_{2})\,, \tag{3.3}\] then the geodesic distance between \(P_{1}\) and \(P_{2}\), denoted by \(D_{12}\), can be found from \[\cos D_{12}=(\cos\theta_{1}\cos\theta_{2}+\sin\theta_{1}\sin\theta_{2}\cos(\phi_{1}-\phi_{2}))\cosh t_{1}\cosh t_{2}-\sinh t_{1}\sinh t_{2}\,. \tag{3.4}\] Especially for the geodesic which connects two points \(\phi=\phi_{1}\) and \(\phi=\phi_{2}\) at the boundary \(\theta=\theta_{0}\) and at the same time \(t=t_{0}\), we have: \[\cos D_{12}=\left(\cos^{2}\theta_{0}+\sin^{2}\theta_{0}\cos(\phi_{1}-\phi_{2})\right)\cosh^{2}t_{0}-\sinh^{2}t_{0}. \tag{3.5}\] In order for a space-like geodesic to be present between the two points, the following condition should be satisfied: \[\cos^{2}\theta_{0}+\sin^{2}\theta_{0}\cos(\phi_{1}-\phi_{2})\geq 1-\frac{2}{\cosh^{2}t_{0}}. \tag{3.6}\] The boundary of the region allowed by this bound looks like a past light-cone of the future infinity, as depicted in figure 15. We write the value of \(\Delta\phi=\phi_{1}-\phi_{2}\) which saturates this bound as \(\Delta\phi_{\rm max}\). The space-like geodesic exists only when \(\Delta\phi\leq\Delta\phi_{\rm max}\). For the explicit construction of such a geodesic, refer to appendix A. It is clear that for two different points \(\phi_{1}\neq\phi_{2}\), this inequality gets violated at sufficiently late time.

Figure 14: The Penrose diagram of dS\({}_{2}\) boundary. The colorful curves denote various spacelike geodesics on dS\({}_{2}\) where the null geodesic is presented by the green dashed curves.

On the other hand, if \(t_{0}\) and \(\theta_{0}\) satisfy \[\cos(2\theta_{0})\geq 1-\frac{2}{\cosh^{2}t_{0}}, \tag{3.7}\] then for any values of \(\phi_{1}\) and \(\phi_{2}\), the space-like geodesic which connects the two points does exist.
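Since the existence bound (3.6) plays a central role in what follows, it is easy to check numerically; the sketch below scans the boundary separation and compares the result with the closed-form expressions given below, eqs. (3.10) and (3.12). The boundary location and times used here are illustrative values only, not choices made in the paper.

```python
# Numerical sketch (illustrative parameters): check which equal-time boundary
# separations admit a spacelike geodesic, eq. (3.6), and compare the resulting
# bound with eq. (3.10) and the critical time of eq. (3.12).
import numpy as np

def allowed(dphi, t0, theta0):
    """Condition (3.6) for a spacelike geodesic between equal-time boundary points."""
    lhs = np.cos(theta0) ** 2 + np.sin(theta0) ** 2 * np.cos(dphi)
    return lhs >= 1.0 - 2.0 / np.cosh(t0) ** 2

dphis = np.linspace(0.0, np.pi, 20001)

# generic boundary location: beyond t_max of (3.12) not all separations survive
theta0, t0 = np.pi / 8, 2.0
t_max = np.arccosh(1.0 / np.sin(theta0))
dphi_max = dphis[allowed(dphis, t0, theta0)].max()
print(f"t_max = {t_max:.3f}; at t0 = {t0}, geodesics exist only up to dphi ~ {dphi_max:.3f}")

# maximal case theta0 = pi/2: the bound should reproduce eq. (3.10)
t0 = 1.0
dphi_max_numeric = dphis[allowed(dphis, t0, np.pi / 2)].max()
dphi_max_eq310 = np.pi - 2.0 * np.arctan(np.sinh(t0))
print(f"theta0 = pi/2: numeric {dphi_max_numeric:.4f} vs eq. (3.10) {dphi_max_eq310:.4f}")
```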
The geodesic length \(D_{12}\) as a function of \(\Delta\phi\) at \(\theta_{0}=\frac{\pi}{8}\) and \(\theta_{0}=\frac{\pi}{2}\) is depicted in figure 16. When the bound (3.6) is violated, the geodesic length gets complex valued such that its real part is \(\pi\): \[D_{12}=\pi+i\text{arccosh}\left[\sinh^{2}t-\left(\cos^{2}\theta_{0}+\sin^{2} \theta_{0}\cos(\phi_{1}-\phi_{2})\right)\cosh^{2}t_{0}\right]. \tag{3.8}\] The imaginary contribution comes from the time-like geodesic and the final real part \(\pi\) does from the geodesic in an Euclidean space _i.e.,_ a semi-sphere (see [13]). Consider the maximal case \(\theta_{0}=\frac{\pi}{2}\). In this case, due to the \(\text{Z}_{2}\) symmetry \(\theta\to\pi-\theta\), the geodesics which connect two points on the boundary \(\theta=\theta_{0}=\frac{\pi}{2}\) are all within the boundary \(\text{dS}_{2}\), as shown in figure 14. At \(t_{0}=0\), there is always a geodesic which connects two points on the boundary and the geodesic length is simply found to be \[D_{12}=\Delta\phi. \tag{3.9}\] However for \(t_{0}\neq 0\), a space-like geodesic does not exist for \(\Delta\phi=|\phi_{1}-\phi_{2}|>\Delta\phi_{\text{max}}\), where the bound is explicitly given by \[\Delta\phi_{\text{max}}=2\left(\frac{\pi}{2}-\arccos\left(\frac{1}{\cosh t_{0 }}\right)\right)=\pi-2\arctan\left[\sinh t_{0}\right]\,. \tag{3.10}\] This bound coincides with the boundary of the past light cone of a point at \(t=\infty\) and is identical to the condition (2.19) found in our previous analysis of AdS/CFT, shown in the left panel of figure 15. These behaviors of geodesic length are plotted in the right panel of figure 16. The profile of the geodesic for \(\Delta\phi\leq\Delta\phi_{\text{max}}\) is sketched in the left panel of figure 17. When \(\Delta\phi>\Delta\phi_{\text{max}}\), the geodesic length gets complex valued and can be interpreted as the union of time-like geodesic and space-like one as depicted in the right panel of figure 17. Note that this transition at \(\Delta\phi=\Delta\phi_{\text{max}}\) is peculiar to our dS setup and can not be seen in the AdS setups. ### Holographic Entanglement Entropy and Violation of Subadditivity Now let us consider the calculation of the holographic entanglement entropy (1.3) in Case 1 (figure 4), namely Schwinger-Keldysh dS geometry, where there is no EOW brane. In this case, \(S_{A}\) for an interval \(A\), can be computed from the connected geodesic as \[S_{A}=\frac{D_{12}}{4G_{N}}, \tag{3.11}\] whose length was already studied in the previous subsection. Consider the holographic entanglement entropy at a fixed time \(t=t_{0}\) for an interval \(A\) defined by \(\phi_{1}\leq\phi\leq\phi_{2}\). For a generic choice of the boundary \(\theta_{0}\), the holographic entanglement entropy computed as the geodesic length, takes real and positive values only for \(\Delta\phi\leq\Delta\phi_{\rm max}\), as depicted in figure 16. For the maximal value \(\Delta\phi=\Delta\phi_{\rm max}\) we find \(D_{12}=\pi\), which is a half of the length of the de Sitter horizon and thus \(S_{A}\) becomes a half of de Sitter entropy \(S_{A}=\frac{1}{2}S_{\rm dS}\). In terms of the time evolution, for an earlier time \(0\leq t\leq t_{\rm max}\), \(S_{A}\) is well-defined for any value of \(\Delta\phi\), where \(t_{\rm max}\) is the time when (3.7) is saturated i.e. \[t_{\rm max}=\arccosh\left(\frac{1}{\sin\theta_{0}}\right)=\log\left(\cot\left( \frac{\theta_{0}}{2}\right)\right)\,. 
\tag{3.12}\] However, for \(t>t_{\rm max}\), the behavior of holographic entanglement looks confusing partly because it takes a complex value for \(\Delta\phi>\Delta\phi_{\rm max}\) and also because it is a convex function of \(\Delta\phi\) even for \(\Delta\phi\leq\Delta\phi_{\rm max}\) as can be seen from figure 16. The latter fact shows the violation of (strong) subadditivity. Indeed, the second derivative of the geodesic length \[\frac{d^{2}D_{12}}{d\Delta\phi d\Delta\phi}=\frac{\sqrt{2}\cosh t\sin\theta_{0 }\sin\left(\frac{\theta_{0}}{2}\right)\left(\cosh^{2}t\sin^{2}\theta_{0}-1 \right)}{\left(1+\cos D_{12}\right)^{3/2}}, \tag{3.13}\] becomes positive when \(t>t_{\rm max}\). In terms of the conformal time \(T\) (\(\cosh t\equiv\frac{1}{\cos T}\)), the critical time is rewritten as \[T_{\rm crt}=\frac{\pi}{2}-\theta_{0}\,, \tag{3.14}\] which is nothing but the intersection between the boundary \(\theta=\theta_{0}\) and the cosmological horizon located at \(T=\frac{\pi}{2}\pm\theta\). To see why the convex entropy function violates the subadditivity, first note that \(S_{A}\) is a function of the length \(y=L(A)\) of the interval \(A\), owing to the translational invariance. It is obvious from the convex nature \(\partial_{y}^{2}S_{A}(y)>0\) that we have \(S_{A}(2y)>2S_{A}(y)\), which violates the subadditivity \(S_{A}+S_{B}\geq S_{AB}\). Of course, this shows that the strong subadditivity, which is expressed as \(S_{AB}+S_{BC}\geq S_{ABC}+S_{B}\), is broken. Refer to figure 18 for plots of the regions where the subadditivity is satisfied, whose further interpretation will be given later. This behavior is a complete contrast to that of AdS/CFT, where the strong subadditivity is always satisfied [65; 66]. ### Static Time Slices To study the nature of subaddivity violation, let us first focus on the maximal case \(\theta_{0}=\frac{\pi}{2}\). In this case, it is useful to consider the static coordinate \((\chi,s)\) of dS\({}_{2}\) introduced as follows \[\cos\chi=\cos\phi\cosh t,\ \ \ \ \ \cosh s=\frac{\sin\phi}{\sqrt{\frac{1}{\cosh^{ 2}t}-\cos^{2}\phi}}, \tag{3.15}\] which leads to the metric \[ds^{2}=d\chi^{2}-\sin^{2}\chi ds^{2}. \tag{3.16}\] Then the geodesic length in the bulk dS\({}_{3}\) which connects between two points \((\chi_{1},s_{0})\) and \((\chi_{2},s_{0})\) on the boundary dS\({}_{2}\) at the same constant time \(s_{0}\) is simply given by \(D_{12}=|\chi_{1}-\chi_{2}|\). This means that the holographic entanglement entropy on the constant time slice in this static coordinate is linear about the subsystem size \(|A|\) and saturates the subadditivity. This clearly shows that if we consider time slices other than \(s=\)const., including the constant \(t\) slices, the strong subadditivity gets violated because the subsystem size \(|A|\) gets shorter as \[|A|=\int_{\chi_{1}}^{\chi_{2}}d\chi\sqrt{1-\sin^{2}\chi\left(\frac{ds}{d\chi} \right)^{2}}\leq\chi_{2}-\chi_{1}=D_{12}. \tag{3.17}\] This analysis shows that at \(\theta_{0}=\frac{\pi}{2}\), the static slice \(s=\)const. is the only time slice which is consistent with the Hilbert space structure of dual unitary quantum systems. We can provide another feature of this restriction from the spacetime structure of de Sitter space. Consider a Wheeler-DeWitt patch in the bulk dS\({}_{3}\) for a generic time slice of the boundary dS\({}_{2}\). 
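Before proceeding, a quick numerical sanity check of this sign change may be helpful (a sketch; it only uses the boundary geodesic length obtained from (3.5) together with the critical time (3.12), and the values of \(\theta_{0}\) and of the interval size below are arbitrary illustrative choices):

```python
import numpy as np

def D12(dphi, t0, theta0):
    """Geodesic length between two equal-time boundary points, from (3.5)."""
    c = ((np.cos(theta0)**2 + np.sin(theta0)**2 * np.cos(dphi)) * np.cosh(t0)**2
         - np.sinh(t0)**2)
    return np.arccos(np.clip(c, -1.0, 1.0))

theta0 = np.pi / 8
t_max = np.arccosh(1 / np.sin(theta0))     # eq. (3.12)
y = 0.4                                    # half-size of the larger interval
for t0 in (0.5 * t_max, 1.5 * t_max):
    gap = D12(2 * y, t0, theta0) - 2 * D12(y, t0, theta0)
    print(t0 > t_max, gap)                 # gap > 0 means S_A(2y) > 2 S_A(y)
```

The gap is negative for \(t_{0}<t_{\rm max}\) and positive for \(t_{0}>t_{\rm max}\), in line with the sign of (3.13).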
Since space-like geodesics which connects two boundary points are all on the boundary dS\({}_{2}\), the geodesics are outside of the Wheeler-DeWitt patch, except when the time slice is the constant \(s\) slice, as depicted in figure 19 by comparing this with that in AdS. In a sensible holography we expect that any bulk counterpart dual to objects in the boundary theory at a specific time will be within its Wheeler-DeWitt patch. For generic values of \(\theta_{0}\), the situation gets less sharp but looks qualitatively similar. This can be seen from the left panel of figure 18. As the time evolves following constant \(t\) (or equally \(T\)) slices, the region which satisfies the subadditivity gets squeezed into the future direction and eventually disappears. As \(\theta_{0}\) gets closer to \(\frac{\pi}{2}\), the region which satisfies the subadditivity becomes very narrow as in the right panel of figure 18. ### Time slices and dual Hilbert space Having in mind the analysis of holographic entanglement entropy, we would like to consider the holographic dual interpretation. For simplicity let us focus on the case \(\theta_{0}=\frac{\pi}{2}\). From the Hilbert space viewpoint of its dual quantum system, assuming that it is unitary, we can only allow the static time slice (constant \(s\) one) on the boundary dS\({}_{2}\) as we have seen in the previous subsection. Owing to the \(SO(2,1)\) symmetry of dS\({}_{2}\), we can boost and rotate the canonical static time slice \(t=0\) by this symmetry as depicted in the left panel of figure 20. Below we call these \(SO(2,1)\) transformation of \(t=0\) slice nice slices. Note that this family of nice time slices do not include \(t=t_{0}>0\) slices but does the constant \(s\) ones in the coordinate (3.16). Therefore we expect that all nice slices correspond to an identical Hilbert space \(\mathcal{H}_{\rm dS}\) of a quantum system dual to a half dS\({}_{3}\). Since the dual state looks maximally entangled due to the fact that the extremal surface \(\Gamma_{A}\) is included in the boundary dS\({}_{2}\), the dimension of \(\mathcal{H}_{\rm dS}\) should be given by \(S_{dS}=\frac{2\pi}{4G_{N}}\), i.e. the de Sitter entropy of the dS\({}_{3}\) gravity. This argument looks very similar to the surface/state duality proposed in [50], which argues that a codimension two space-like surface \(\Sigma\) in general gravitational spacetimes is holographically dual to a certain quantum state \(|\Phi_{\Sigma}\rangle\). The difference is that in the present paper we put a real boundary by restricting the dS\({}_{3}\) to the half dS\({}_{3}\) so that it has a genuine boundary given by dS\({}_{2}\) at \(\theta=\theta_{0}\). However, it is natural that there is a holographic duality even before we put the time-like boundary. In this context, our lesson from this present analysis is that in order for the surface/state duality works well we need to require that the external surface \(\Gamma_{A}\) should be within the Wheeler-DeWitt patch of \(\Sigma\). We can decompose a constant \(t=t_{0}>0\) slice into multiple nice time slices as Figure 19: Sketches of Wheeler-DeWitt patches (blue regions) and the geodesics (red curves) in AdS\({}_{3}\) (left), in dS\({}_{3}\) at \(t=0\) (middle) and in dS\({}_{3}\) at \(t>0\) (right). depicted in the right panel of figure 20. As \(t_{0}\) gets larger, we need more nice time slices to cover it as can be seen from that fact that the length of \(t=t_{0}\) slice exponentially grows \(\sim e^{t_{0}}\). 
This shows that the constant \(t\) slice (except \(t=0\) one) overestimates the dimension of Hilbert space. Indeed this is obvious from the fact that to cover the \(t=t_{0}\) slice we need many nice slices whose Hilbert spaces clearly have overlaps as they cover the \(t=0\) slice many times. Thus we cannot expect that such a slice describes a well-defined Hilbert space, which is consistent with our observation of the subadditivity violation. The division of the constant \(t\) slice into subsystems does not correspond to the factorization of a Hilbert space. This behavior is quite different from that in local quantum field theories, where the choice of time slice on a given causal diamond gives the identical subsystem of a Hilbert space. In our de Sitter holography, even though the segment \(A\) on the constant \(t\) slice and \(\Gamma_{A}\) on the nice slice both live on the same causal diamond as in the left panel of figure 17, only the latter has a well-defined meaning as a proper subsystem of \({\cal H}_{\rm dS}\). ### Non-locality and Holographic Entanglement Entropy Then it is natural to ask why only nice slices (constant \(s\) slices) can describe a proper Hilbert space. Even though we do not give a conclusive answer to this question, we Figure 20: Sketches of nice time slices (constant \(s\) slices) in the boundary dS\({}_{2}\) which describe the Hilbert space of dual quantum system. In the left panel we showed these slices in a global dS\({}_{2}\). In the right panel, we showed that a constant \(t\) slice (red curve) is decomposed into multiple constant \(s\) slices (purple, blue and green curves). would like to suggest that the non-local nature of dual quantum system plays a crucial role. First of all, it is obvious that the way we put the boundary of a half dS\({}_{3}\) leads to a finite cut off in the dual field theory. Also the volume law entanglement which we observe for nice slices is typical for vacuum states in highly non-local field theories [67; 68]. Moreover, we can even find that a growth of entanglement entropy, defined by the replica method, can exceed the volume law i.e. \(S_{A}\sim|A|^{p},\ \ p>1\) in highly non-local field theories, which shows the violation of subadditivity. Indeed, as shown in [67] via the replica method, a non-local free scalar field theory in \(d\) dimensions with the action \[S=\int dx^{d}\phi(x)e^{(-\partial_{x}^{2})^{q}}\phi(x)\,, \tag{3.18}\] leads to the entanglement entropy whose UV divergence scales as \[S_{A}\propto\left(\frac{L_{A}}{\epsilon}\right)^{d-2+2q}\,, \tag{3.19}\] where \(L_{A}\) is the linear size of the subsystem \(A\). Thus, \(S_{A}\) grows faster than the volume law if \(q>\frac{1}{2}\). Here note that the entanglement entropy defined from the replica method based on Euclidean path-integral is not guaranteed to satisfy the subadditivity, though the entanglement entropy defined from a quantum state in a Hilbert space automatically satisfies the subadditivity. In our holographic dual of a half dS\({}_{3}\), it is natural to expect that a similar non-local field appears and this may explain the violation of subadditivity on generic time slices. As the value of \(\theta_{0}\) gets smaller, this non-local effects get milder and the range of time slices which satisfy the subadditivity, gets broader as depicted in figure 18. 
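To spell out the arithmetic behind the statement that super-volume growth violates subadditivity: taking the quoted scaling at face value, \(S_{A}=c\,|A|^{p}\) with \(p>1\) for some constant \(c\), two adjacent intervals \(A\) and \(B\) of equal size \(y\) give \[S_{AB}=c\,(2y)^{p}=2^{p}\,c\,y^{p}>2\,c\,y^{p}=S_{A}+S_{B},\] which is precisely the violation of the subadditivity \(S_{A}+S_{B}\geq S_{AB}\).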
In this way, even though the proper Hilbert space interpretation becomes difficult for generic time slices including constant \(t\) slices, we may be able to formally define the entanglement entropy by the replica method, which allows the violation of strong subadditivity. Notice that in such a non-local field theory, Hamiltonian cannot be defined in a standard way as the action involves derivatives with respect to the Euclidean time whose order is higher than two. If this interpretation is true, we should be able to define \(S_{A}\) even when \(A\) is so large that the two end points of \(A\) cannot be connected by a space-like geodesic. If we apply the original holographic entanglement entropy formula (3.11), we find that it is complex valued \[D_{12}=\pi+i\text{arccosh}\left[\sinh^{2}t-\cos(2\theta_{0})\cosh^{2}t_{0} \right]\,. \tag{3.20}\] See [69; 13] for more studies about complex geodesics in dS space. As depicted in the right panel of figure 17, the real part comes from the geodesic in the Euclidean instanton (semi-sphere) and the imaginary part comes from the time-like geodesic in the de Sitter space. However, if we consider the holographic calculation of entanglement entropy via the replica method (for the derivation of the covariant HEE in AdS/CFT see [55]), there are two candidates of extremal surface: one is in the bra geometry and the other is the ket geometry as in figure 21. This prescription was analyzed in [70] for a bra-ket wormhole in AdS/CFT. The correct result is given by the sum of two contributions: \[Z_{\rm tot}^{(n)}=e^{(1-n)S}+e^{(1-n)S^{*}}, \tag{3.21}\] where \(n\) is the replica number. \(S_{A}\) is given by \[S_{A}=\lim_{n\to 1}\frac{1}{1-n}\log Z_{\rm tot}^{(n)}\simeq\frac{S+S^{*}}{2}= \text{Re}[S]. \tag{3.22}\] Thus we find that the actual \(S_{A}\) is given by the real part: \[S_{A}=\frac{\pi}{4G_{N}}=\frac{1}{2}S_{\rm dS}, \tag{3.23}\] where \(S_{\rm dS}=\frac{2\pi}{4G_{N}}\) is the de Sitter entropy. Figure 21: Sketches of calculation of holographic entanglement entropy in the Schwinger-Keldysh geometry. The sum of the contributions associated with two extremal surfaces \(\Gamma_{A}^{(1)},\Gamma_{A}^{(2)}\) gives rise to the real entanglement entropy. In this way, we find that the saturation behavior i.e. \(S_{A}\) grows monotonically for \(0\leq\Delta\phi\leq\Delta\phi_{\rm max}\) and it takes the constant value \(\frac{1}{2}S_{\rm dS}\) for \(\Delta\phi\geq\Delta\phi_{\rm max}\). For \(\Delta\phi>\pi\), the standard holographic prescription guarantees the relation \(S_{A}=S_{A^{c}}\), where \(A^{c}\) is the complement of \(A\). Explicit plots are shown in figure 22. ### Overcounting of de Sitter Hilbert Space In previous analyses regarding the holographic entanglement entropy of a single interval, it was observed that the entropy is bounded by half of the dS entropy, _i.e.,_ \[S_{A}\leq\frac{1}{2}S_{\rm dS}\,, \tag{3.24}\] as illustrated in figure 22. This outcome is expected since the corresponding bulk dual represents only half of the dS spacetime. Generalizing this calculation to cases where subsystem \(A\) comprises multiple disconnected intervals is also possible. Since this calculation follows the standard prescription [71], explicit results will not be presented in detail. However, it is important to note that the holographic entanglement entropy of a subregion consisting of multiple intervals is not bounded by the dS entropy \(S_{\rm dS}\) at late times, as the sum of lengths of disconnected geodesics can be very large. 
Focusing on the case with the boundary dS space located at \(\theta=\theta_{0}=\frac{\pi}{2}\), it has been derived in eq. (A.4) that the entropy of a single interval reaches the maximal value \(\frac{1}{2}S_{\rm dS}\) when its length corresponds to the critical size \(\Delta\phi_{\rm max}\). It should be noted that the maximum size \(\Delta\phi_{\rm max}(t_{0})\) decreases to zero as the boundary time \(t_{0}\) increases. Consequently, let us Figure 22: Plots of holographic entanglement entropy \(S_{A}\) on a constant \(t\) time slice based on the prescription (3.22) for \(d=2\). We plotted \(S_{A}\) as a function of \(\Delta\phi\) in the range \(0\leq\Delta\phi\leq 2\pi\) for \(\theta_{0}=\frac{\pi}{2}\) (left) and \(\theta_{0}=\frac{\pi}{8}\) (right). The blue and orange curves correspond to \(t=0\) and \(t=2\). Note that we have the critical time \(t_{\rm max}=0\) for \(\theta_{0}=\frac{\pi}{2}\) and \(t_{\rm max}\simeq 1.61\) for \(\theta_{0}=\frac{\pi}{8}\). consider a time slice at \(t=t_{0}\) which can be divided into \(2N\) identical intervals, where \(\frac{\pi}{N}\geq\Delta\phi_{\rm max}(t_{0})\). Figure 23 provides an explicit example with \(N=3\). By considering a subsystem with \(N\) intervals, one can find that the holographic entanglement entropy is shown as \[S_{A_{1}\cup A_{2}\cup\cdots\cup A_{N}}=N\times S_{A_{i}}=\frac{N}{2}S_{\rm dS} \geq S_{\rm dS}\,. \tag{3.25}\] This value can even approach infinity as \(N\) tends to infinity, while \(\lim_{t_{0}\to\infty}\Delta\phi_{\rm max}\to 0\). It might be questioned whether this violates the entropy bound of de Sitter space, whose Hilbert space is expected to be finite. However, as observed from the violation of (strong) subadditivity, we expect that a generic time slice does not correspond to a pure state in a single de Sitter Hilbert space. Instead, the unbounded entropy of multiple intervals, as expressed in eq. (3.25), can be interpreted as the result of overcounting the dimension of the de Sitter Hilbert space. As shown in figure 23, each critical interval \(A_{i}\) at \(t=t_{0}\) could be understood as a maximally entangled state in a single de Sitter Hilbert space. Back to the original time at \(t=0\), those single Hilbert spaces are overlapping with each other along this time slice. The recent paper [72] discusses the tensor network representation of dS space with overlapping qubits. However, we note that the studies in [72] focus on bulk dS spacetime which is different from our dS Figure 23: Penrose diagram of dS\({}_{2}\) spacetime which corresponds to the dS boundary located at \(\theta=\theta_{0}=\frac{\pi}{2}\) in dS\({}_{3}\) bulk spacetime. The green dashed lines denote the null limit of geodesics. On the time slice \(T=T_{0}\) or \(t=t_{0}\), we divide the system into six parts. boundary picture as shown in figure 23. ### Higher dimensional half de Sitter space It is straightforward to generalize our analysis for a half dS\({}_{3}\) to a half dS\({}_{d+1}\) for \(d\geq 3\). Below we will provide analysis for \(\theta_{0}=\frac{\pi}{2}\) in the global dS\({}_{d+1}\) described by (1) and (2). The time-like boundary of this space is given by dS\({}_{d}\) described by the global metric \[ds^{2}=-dt^{2}+\cosh^{2}t(d\phi^{2}+\sin^{2}\phi d\Omega_{d-2}^ {2}), \tag{111}\] where \(d\Omega_{d-1}^{2}\) is the metric of the unit sphere S\({}^{d-2}\). As in the previous case of \(d=2\), we would like to argue that gravity on a half dS\({}_{d+1}\) is dual to non-local field theory on the dS\({}_{d}\). 
To examine this duality we would like to calculate the holographic entanglement entropy. This is given by the area of extremal surface \(\Gamma_{A}\), which ends on the boundary of the subsystem \(A\), by (3). For this we choose the subsystem \(A\) to be a disk on the boundary dS\({}_{d}\): \[t=t_{0},\ \ 0\leq\phi\leq\phi_{1}. \tag{112}\] In this case, we can find the profile of the extremal surface \(\Gamma_{A}\) which ends on the boundary \(\phi=\phi_{1}\) of the subsystem \(A\) (112) as \[\frac{\cos\phi}{\tanh t}=\frac{1+L^{2}}{1-L^{2}}. \tag{113}\] This surface can be simply obtained by mapping the extremal surface \[t^{2}-\sum_{i=1}^{d-1}(x_{i})^{2}=L^{2}, \tag{114}\] in the Poincare dS\({}_{d}\) given by the metric \[ds^{2}=\frac{-dt^{2}+\Sigma_{i=1}^{d-1}dx_{i}^{2}}{t^{2}}. \tag{115}\] The constant \(L\) is related to the choice of the subsystem \(A\) via \[\frac{1+L^{2}}{1-L^{2}}=\frac{\cos\phi_{1}}{\tanh t_{0}}. \tag{116}\] The profile of the extreme surface (113) is plotted in figure 24. This \(d-1\) dimensional extremal surface \(\Gamma_{A}\) is wrapped on \({\rm S}^{d-2}\). The induced metric on \(\Gamma_{A}\) reads \[ds^{2}=\frac{4L^{2}(1+L^{2})^{2}}{\left((1+L^{2})^{2}-(1-L^{2})^{2 }\cos^{2}\phi\right)^{2}}d\phi^{2}+\frac{(1+L^{2})^{2}\sin^{2}\phi}{(1+L^{2})^ {2}-(1-L^{2})^{2}\cos^{2}\phi}d\Omega_{d-2}^{2}. \tag{3.32}\] This surface \(\Gamma_{A}\) is space-like when \(0<L^{2}\leq 1\). It becomes light-like when \(L^{2}=0\) and time-like when \(L^{2}<0\). The area of \(\Gamma_{A}\) is computed as \[A(\Gamma_{A})={\rm Vol}({\rm S}^{d-2})\int_{0}^{\phi_{1}}d\phi\frac{2L(1+L^{2} )^{d-1}(\sin\phi)^{d-2}}{\left((1+L^{2})^{2}-(1-L^{2})^{2}\cos^{2}\phi\right)^{ \frac{d}{2}}}. \tag{3.33}\] Since this is a function of \(\phi_{1}\) and \(t_{0}\), we write this as \(A(\phi_{1},t_{0})\). It is straightforward to see that at \(t_{0}=0\) and \(\phi_{1}=\frac{\pi}{2}\) we have \[A\left(\frac{\pi}{2},0\right)=\frac{1}{2}{\rm Vol}(S^{d-1}). \tag{3.34}\] This means that the corresponding holographic entanglement entropy coincides with the half of de Sitter entropy \(\frac{1}{2}S_{dS}\). The behavior of \(A(\phi_{1},t_{0})\) is plotted in figure 25. [FIGURE Figure 24: Profiles of extremal surfaces (3.28) in global dS. The horizontal and vertical coordinate are \(\phi\) and \(T\), respectively. The blue, yellow, green,and red curves describe the extremal surface for \(L^{2}=1,1/4,0\) and \(-1/4\), respectively. general it is a monotonically increasing function of \(t_{0}\) and \(\phi_{1}\) and gets finally saturated to the maximal value \(\frac{1}{2}S_{dS}\). Note that when \(L^{2}\) becomes negative i.e when \(\Gamma_{A}\) gets time-like, we choose the other extremal surface which goes past and which is wrapped on the semi-sphere in the Euclidean part as in right panel of figure17. Thus as in the same argument of (3.23), the holographic entanglement entropy is given by the real part of \(A(\Gamma_{A})\), namely \(\frac{1}{2}S_{\rm dS}\). ## 4 Holography for a half dS with EOW brane (Case 2) Now we would like to study another setup of holography for a half de Sitter space in the presence of EOW brane (case 2). We regard the future boundary \(t=\infty\) of the de Sitter space as an EOW brane where we impose the Neumann boundary condition for the gravitational field. In the dual field theory, we have the final state projection as in the case of AdS/CFT analysis done in section 2.3. 
To obtain the properties of the gravity dual, we would like to analyze the holographic entanglement entropy given by the area of extremal surface as in (3) and (3). However, since the field theory sides has the final state projection, it should properly be regarded as the holographic pseudo entropy [31]. For simplicity focus on the maximal case \(\theta_{0}=\frac{\pi}{2}\) in dS\({}_{d+1}\) for simplicity. We argue gravity in the half dS\({}_{d+1}\) with the EOW brane at \(t=t_{P}\) is dual to a non-local field theory on its boundary dS\({}_{d}\) with a finial state projection at \(t=t_{p}\). The new aspect here is that the space-like extremal surface can end on the boundary of the EOW brane. Figure 25: The behaviors of the holographic entanglement entropy for a half dS\({}_{4}\) (\(d=3\)) in case 1. In the left panel, we plotted \(\frac{1}{2\pi}A(\Gamma_{A})\) as a function of \(\phi_{1}\) at \(t=0\) (blue), \(t=1/2\) (orange) and \(t=1\) (green). In the right panel, we showed \(\frac{1}{2\pi}A(\phi_{1},t_{0})\) as a function of \(t_{0}\) for \(\phi_{1}=\pi/8\) (blue), \(\phi_{1}=\pi/3\) (orange), \(\phi_{1}=\pi/2\) (green). Consider the case where the EOW brane is situated at the future infinity \(t_{p}\to\infty\), which is dual to the final state projection at the future infinity. First let us analyze the holographic pseudo entropy in this setup for \(d=2\). When the interval \(A\) is small such that \(\Delta\phi<\Delta\phi_{\rm max}\), the holographic pseudo entropy \(S_{A}\) is given by the length of connected space-like geodesic \(D_{12}^{\rm con}\) as we had in the case 1. However, for \(\Delta\phi>\Delta\phi_{\rm max}\), when a connected space-like geodesic does not exist, \(S_{A}\) can be computed from the two disconnected geodesics which connect one of the end points of \(A\) with a point on the EOW brane as depicted in figure 26. Note also that such a disconnected geodesic becomes time-like for \(\Delta\phi>\Delta\phi_{\rm max}\), which shows that the holographic pseudo entropy is pure imaginary. This is the crucial difference from the analysis of \(S_{A}\) in the case 1. In the case 2, in the presence of EOW brane, the geodesics \(\Gamma_{A}\) can end on it and the real part of \(S_{A}\) for this contribution (see the right panel of figure 26) becomes smaller than the connected one (see the right panel of figure 17) which has a positive real part. In summary, the behavior of \(S_{A}\) for \(d=2\) is identical to that in section 3.5 for \(\Delta\phi<\Delta\phi_{\rm max}\), while we have \({\rm Re}[S_{A}]=0\) for \(\Delta\phi>\Delta\phi_{\rm max}\): \[{\rm For}\ 0\leq\Delta\phi<\Delta\phi_{\rm max}:\ \ \ \ S_{A}= \frac{D_{12}^{\rm con}}{4G_{N}},\] \[{\rm For}\ \Delta\phi>\Delta\phi_{\rm max}:\ \ \ \ {\rm Re}[S_{A}]=0. \tag{101}\] It is straightforward to generalize this to higher dimensions by employing the results of \(A(\phi_{1},t_{0})\) in case 1 presented in section 3.7. When \(\phi_{1}\) is smaller than the saturation Figure 26: Sketches of geodesics in dS\({}_{2}\) which are used to calculation the holographic pseudo entropy in a half dS\({}_{3}\). When the subsystem \(A\) is small \(\Delta\phi\leq\Delta\phi_{\rm max}\), the geodesic \(\Gamma_{A}\) is space-like and connected shown in the left panel. On the other hand when \(A\) gets larger \(\Delta\phi>\Delta\phi_{\rm max}\), \(\Gamma_{A}\) consists of two disconnected time-like geodesics as in the right panel. value in case 1, \(S_{A}\) takes the same value as that in case 1. 
For large values of \(\phi_{1}\), we have \(\text{Re}[S_{A}]=0\). An explicit plot for \(d=3\) is shown in figure 27. Now let us interpret the behavior of \(S_{A}\) in terms of dual field theory on \(\text{dS}_{d}\). For \(\Delta\phi<\Delta\phi_{\text{max}}\), \(S_{A}\) is identical to that in case 1, its interpretation is the same. Thus, we find the violation of subadditivity and this is expected to be due to the non-local nature of the dual field theory. For \(\Delta\phi>\Delta\phi_{\text{max}}\), we find a behavior special to case 2 that the real part of \(S_{A}\) does vanish. This transition at \(\Delta\phi=\Delta\phi_{\text{max}}\) may be natural from the viewpoint of the dual field theory on \(\text{dS}_{d}\). To see this, let us remember that the pseudo entropy is defined from the transition matrix (5), which depends on the two states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\). In our setup \(|\psi_{1}\rangle\) is the state which is obtained by the forward time evolution of Hartle-Hawking state, while \(|\psi_{2}\rangle\) is the one created by the backward time evolution of the final state. It is also useful to note that the phenomenological observation [32; 33; 59] implies that the real part of pseudo entropy typically measures the amount of entanglement in the intermediate state between the two states, though the interpretation of imaginary part is not well understood at present. Now the final state is defined by imposing a boundary condition at \(t=t_{p}=\infty\) and the analysis of the space-like EOW brane [60] suggests that the state has no real space entanglement as in the standard boundary state [73]. At the time slice \(t=t_{0}\), we can have such an disentangling effect from a point at future infinity when two points are separated more than \(2(t_{p}-t_{0})\) assuming the light like propagation of physical signals on \(\text{dS}_{d}\). Indeed this border is \(\Delta\phi_{\text{max}}\). Therefore it is natural that the real part of pseudo entropy gets vanishing for \(\Delta\phi>\Delta\phi_{\text{max}}\). On the Figure 27: The behaviors of the holographic pseudo entropy for a half \(\text{dS}_{4}\) (\(d=3\)) in case 2. In the left panel, we plotted \(\frac{1}{2\pi}A(\Gamma_{A})\) as a function of \(\phi_{1}\) at \(t=0\) (blue), \(t=1/2\) (orange) and \(t=1\) (green). In the right panel, we showed \(\frac{1}{2\pi}A(\Gamma_{A})\) as a function of \(t_{0}\) for \(\phi_{1}=\pi/8\) (blue), \(\phi_{1}=\pi/3\) (orange), \(\phi_{1}=\pi/2\) (green). other hand, for \(\Delta\phi<\Delta\phi_{\rm max}\), such two points do not feel the existence of the EOW brane and thus the result is the same as that in case 1. In a similar way, we can analyze the holographic pseudo entropy when the final projection is inserted at a finite time \(t=t_{p}\). Obviously at \(t=t_{p}\), we have \(S_{A}=0\). Moreover, the real part of \(S_{A}\) gets vanishing when the geodesic \(\Gamma_{A}\) becomes light-like. This can happen \(t<t_{p}\) if the subsystem \(A\) is large enough, in which case a imaginary part of \(S_{A}\) starts to be non-zero until \(t=t_{p}\) as \(\Gamma_{A}\) becomes time-like, as depicted in figure 28 for plots. It is curious to note that this page curve like behavior of \({\rm Re}S_{A}\) is qualitatively similar to the one found for the AdS/CFT setup in eq. (33). ## 5 Summary and Discussions The primary objective of this paper is to investigate the holographic duality involving gravity in de Sitter spaces. 
Unlike many other approaches, our focus centers on a half de Sitter space, achieved by introducing a timelike boundary in global dS spacetime. Within this framework, we propose that the gravity on a \(d+1\)-dimensional half de Sitter space is dual to a non-local field theory residing on its \(d\)-dimensional boundary. Before delving into our investigation of de Sitter holography, we conducted an analysis of the holographic duality linking gravity in a \(d+1\)-dimensional AdS and a CFT living on a \(d\)-dimensional dS. This particular scenario can be regarded as a special case within the framework of the AdS/CFT correspondence. Of course, its validity and fundamental computational methods are well-established. We examined two distinct setups, referred to as the Case 1 and the Case 2, respectively. In the Case 1, the quantum state in the dual CFT is described by the Schwinger-Keldysh formalism, whereas in the Case 2, we consider the dual CFT incorporating a final state projection, as illustrated in figure 4 and figure 5. In particular, the gravity dual in the Case 2 is given by adding an end-of-the-world brane (EOW brane) on an AdS geometry. In these setups, we evaluated holographic entanglement entropy, which is determined by the area of an extremal surface. In the Case 1, we observed that the holographic entanglement entropy for a subsystem of fixed size consistently increases as the size of de Sitter space is inflating with respect to the global time. Conversely, in the Case 2, it initially presents growth but eventually decreases to zero. At the time \(t=t_{\text{\tiny P}}\), corresponding to the implementation of the final state projection, the entanglement entropy vanishes. It is important to note that the extremal surface area we computed should be more appropriately interpreted as the pseudo entropy due to the presence of the final state projection. Additionally, we also confirmed that independent CFT calculations in CFT\({}_{2}\) reproduce the results which agree with that from gravity dual. With the AdS/CFT results as our foundation, we proceeded to examine holography for gravity in a half de Sitter space. We focused on two setups, namely the Case 1 and the Case 2, and investigated the behaviour of holographic entanglement entropy. Note that we employed the standard calculation of holographic entanglement entropy [24; 25; 26], where we minimize the area, as opposed to the prescription in [38] where the area is maximized. Remarkably, we discovered that the properties of holographic entanglement entropy in a half de Sitter space diverge from those in the AdS/CFT correspondence. Notably, connecting two arbitrary points in a global de Sitter space using a spacelike geodesic is not always possible. Consequently, for a two-dimensional dS space, which serves as the timelike boundary of a three-dimensional half de Sitter space, there is typically no spacelike geodesic linking the endpoints of an interval. As a result, the definition of holographic entanglement entropy, denoted as \(S_{A}\), in a conventional sense is problematic. Similarly, we observed the same limitation for extremal surfaces in higher dimensions (\(d>2\)). However, a resolution to this issue arises when we consider both timelike and spacelike geodesics within a Hartle-Hawking contour, as illustrated in the right panel of figure 17. The joint geodesics allow us to connect two endpoints beyond critical size. Consequently, the holographic entanglement entropy, denoted as \(S_{A}\), acquires a complex value. 
In the Case 1, we contend that the appropriate holographic entanglement entropy can be obtained by taking its real part, which is nothing but half of the de Sitter entropy \(\frac{1}{2}S_{dS}\). In the Case 2, with the presence of the EOW brane, we argue that the holographic pseudo entropy can be computed by utilizing timelike geodesics terminating on the EOW brane, as depicted in the right panel of figure 26. Furthermore, even within parameter regions where a spacelike geodesic exists, the holographic entanglement entropy generally exhibits super-extensive behaviour relative to the subsystem size. Consequently, the corresponding function describing holographic entanglement entropy violates the (strong) subadditivity property. Notably, we have discovered that this issue is resolved solely by focusing on the time slices associated with the static coordinate in the case of the maximal half de Sitter space (\(\theta_{0}=\frac{\pi}{2}\)). On these particular time slices, the holographic entanglement entropy \(S_{A}\) adheres to the volume law. In this regard, we anticipate that the quantum state represented by the half de Sitter space manifests maximal entanglement specifically on these special time slices. This implies that we can establish a well-defined Hilbert space solely for static time slices. Conversely, a generic time slice in the boundary de Sitter space spans the same Hilbert space as the static time slice multiple times, resulting in an overcounting of the genuine degrees of freedom (refer to figure 20). Additionally, it is worth noting that when subadditivity is violated, the extremal surface extends beyond the Wheeler-DeWitt patch, as depicted in figure 19. Although providing a Hilbert space interpretation for generic time slices, including the constant time slice of the global coordinate, presents challenges, we propose a potential understanding of the entanglement entropy by employing the replica method and defining the area of extremal surface \(S_{A}\) as the entropy. Notably, in highly non-local field theories, the realization of super-extensive entanglement entropy can be realized. It is important to note that in such non-local field theories, a standard Hilbert space cannot be defined due to the appearance of infinitely many time derivatives in the action. Applying this perspective, the holographic entanglement entropy in the Case 1 exhibits initial growth, followed by saturation at \(\frac{1}{2}S_{dS}\), as depicted in figure 22 and figure 25. Note that this saturation was absent in the evolution of \(S_{A}\) derived in the Case 1 in the framework of the AdS/CFT correspondence and that it shows that the entropy is bounded in de Sitter space. In the Case 2, we observe that the real part of the holographic pseudo entropy exhibits initial growth and eventually vanishes at the critical time, while the imaginary part remains non-zero, as depicted in figure 27 and figure 28. We identify that the real part of \(S_{A}\) becomes zero when the subsystem \(A\) surpasses the light cone (refer to the right panel of figure 26). This behaviour arises due to the influence of the EOW brane, where the reflection of two points on \(A\) by the EOW brane boundary condition affects \(S_{A}\). The EOW brane boundary state possesses vanishing quantum entanglement, thereby diminishing the overall quantum entanglement. There are several promising avenues for future investigation. 
Firstly, it would be intriguing to explore alternative frameworks that provide a more manageable description of the "overcounting" phenomenon in the Hilbert space dual to a half de Sitter space in global coordinates. One possible approach could involve qubit systems or tensor networks, offering a more controllable perspective. Additionally, it is of great interest to extend our analysis to holography in more generic asymptotically dS spacetimes, such as de Sitter black holes, as well as various cosmological models. Lastly, a pivotal and profound question lies in understanding how the creation and evolution of the universe can be described in terms of the dual field theory, utilizing the insights gained from the holographic duality. ## Acknowledgements We are grateful to Takato Mori, Yusuke Taki and Zixia Wei for useful discussions. This work is supported by the Simons Foundation through the "It from Qubit" collaboration and by MEXT KAKENHI Grant-in-Aid for Transformative Research Areas (A) through the "Extreme Universe" collaboration: Grant Number 21H05187. TT is also supported by Inamori Research Institute for Science, and by JSPS Grant-in-Aid for Scientific Research (A) No. 21H04469. SMR is also supported by JSPS KAKENHI Research Activity Start-up Grant NO. 22K20370. YS is supported by Grant-in-Aid for JSPS Fellows No.23KJ1337. T. K. is supported by Grant-in-Aid for JSPS Fellows No. 23KJ1315. ## Appendix A Explicit Space-like Geodesics in dS\({}_{3}\) Here we study the connected geodesic anchored on the boundaries of an interval \(A\) in dS\({}_{3}\). This is the geodesic which connects \((t,\theta,\phi)=(t_{0},\frac{\pi}{2},-\phi_{0}+\frac{\pi}{2})\) and \((t,\theta,\phi)=(t_{0},\frac{\pi}{2},\phi_{0}+\frac{\pi}{2})\) in the coordinate of dS\({}_{3}\) (3.2). We assume \(t_{0}\geq 0\) without losing generality. The length \(L\) of a curve is \[L=\int_{-\phi_{0}+\frac{\pi}{2}}^{\phi_{0}+\frac{\pi}{2}}d\phi\sqrt{\cosh^{2}t- \left(\frac{dt}{d\phi}\right)^{2}}.\] (A.1) This leads to the differential equation \[\frac{dt}{d\phi}=\cosh t\sqrt{1-\frac{\cosh^{2}t}{\cosh^{2}t_{*}}},\] (A.2) where the middle point \(\phi=\frac{\pi}{2}\) is the turning point and we set \(t=t_{*}\) at this point. By integrating this, we find \[\phi_{0}=\frac{\pi}{2}-\arctan\left[\frac{\cosh t_{*}\sinh t_{0} }{\sqrt{\cosh^{2}t_{*}-\cosh^{2}t_{0}}}\right],\] \[L=\pi-2\arctan\left[\frac{\sinh t_{0}}{\sqrt{\cosh^{2}t_{*}- \cosh^{2}t_{0}}}\right].\] (A.3) Note that assuming \(t_{0}>0\), there is an upper bound of \(\phi_{0}\), reached at \(t_{*}\rightarrow\infty\), which we call \(\phi_{\rm max}\): \[\phi_{\rm max}=\frac{\pi}{2}-\arctan\left[\sinh t_{0}\right].\] (A.4) In this maximal case, we find \(L|_{\phi=\phi_{*}}=\pi\), which is a half of de Sitter horizon length. This maximal value \(\phi=\phi_{\rm max}\) corresponds to the limit where the original space-like geodesic gets light-like as depicted in the left panel of figure 17. Therefore we cannot find an appropriate space-like geodesic when the subsystem \(A\) is larger than this maximal size.
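As a small numerical cross-check of (A.2)-(A.4), the sketch below integrates \(d\phi=dt/(dt/d\phi)\) using (A.2) and compares the result with the closed forms in (A.3); the change of variables \(u=\sinh t\), \(u=\sinh t_{*}\sin\psi\) is introduced by us purely to make the integrand regular at the turning point:

```python
import numpy as np
from scipy.integrate import quad

def phi0_closed(t0, tstar):
    """Half opening angle from (A.3)."""
    return np.pi/2 - np.arctan(np.cosh(tstar) * np.sinh(t0)
                               / np.sqrt(np.cosh(tstar)**2 - np.cosh(t0)**2))

def length_closed(t0, tstar):
    """Geodesic length from (A.3)."""
    return np.pi - 2 * np.arctan(np.sinh(t0)
                                 / np.sqrt(np.cosh(tstar)**2 - np.cosh(t0)**2))

def phi0_numeric(t0, tstar):
    """Integrate dphi = dt / (dt/dphi) with (A.2); after u = sinh(t) and u = sinh(tstar) sin(psi)
    the integrand becomes cosh(tstar) / (1 + sinh(tstar)^2 sin(psi)^2), which is regular."""
    a = np.sinh(tstar)
    psi0 = np.arcsin(np.sinh(t0) / a)
    val, _ = quad(lambda psi: np.cosh(tstar) / (1.0 + (a * np.sin(psi))**2), psi0, np.pi/2)
    return val

t0, tstar = 0.7, 2.0
print(phi0_closed(t0, tstar), phi0_numeric(t0, tstar))        # the two should agree
print(length_closed(t0, tstar))
# t_* -> infinity: phi_0 approaches phi_max of (A.4) and the length approaches pi
print(phi0_closed(t0, 30.0), np.pi/2 - np.arctan(np.sinh(t0)))
print(length_closed(t0, 30.0))                                # ~ pi
```

In particular, the maximal interval is approached only as the turning point \(t_{*}\) is pushed towards the future infinity, as stated above.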
2307.10320
Reproducibility in Machine Learning-Driven Research
Research is facing a reproducibility crisis, in which the results and findings of many studies are difficult or even impossible to reproduce. This is also the case in machine learning (ML) and artificial intelligence (AI) research. Often, this is the case due to unpublished data and/or source-code, and due to sensitivity to ML training conditions. Although different solutions to address this issue are discussed in the research community such as using ML platforms, the level of reproducibility in ML-driven research is not increasing substantially. Therefore, in this mini survey, we review the literature on reproducibility in ML-driven research with three main aims: (i) reflect on the current situation of ML reproducibility in various research fields, (ii) identify reproducibility issues and barriers that exist in these research fields applying ML, and (iii) identify potential drivers such as tools, practices, and interventions that support ML reproducibility. With this, we hope to contribute to decisions on the viability of different solutions for supporting ML reproducibility.
Harald Semmelrock, Simone Kopeinik, Dieter Theiler, Tony Ross-Hellauer, Dominik Kowald
2023-07-19T07:00:22Z
http://arxiv.org/abs/2307.10320v1
# Reproducibility in Machine Learning-Driven Research ###### Abstract Research is facing a reproducibility crisis, in which the results and findings of many studies are difficult or even impossible to reproduce. This is also the case in machine learning (ML) and artificial intelligence (AI) research. Often, this is the case due to unpublished data and/or source-code, and due to sensitivity to ML training conditions. Although different solutions to address this issue are discussed in the research community such as using ML platforms, the level of reproducibility in ML-driven research is not increasing substantially. Therefore, in this mini survey, we review the literature on reproducibility in ML-driven research with three main aims: (i) reflect on the current situation of ML reproducibility in various research fields, (ii) identify reproducibility issues and barriers that exist in these research fields applying ML, and (iii) identify potential drivers such as tools, practices, and interventions that support ML reproducibility. With this, we hope to contribute to decisions on the viability of different solutions for supporting ML reproducibility. Keywords:Machine Learning Artificial Intelligence Reproducibility Replicability ## 1 Introduction Similar to other scientific fields [4][41], research in artificial intelligence (AI) in general, and machine learning (ML) in particular, is facing a reproducibility crisis [24]. Here, especially unpublished source-code and sensitivity to ML training conditions make it nearly impossible to reproduce existing ML publications, which also makes it very hard to verify the claims and findings stated in the publications. One potential solution for enhancing reproducibility in ML is the use of ML platforms such as OpenML, Google Cloud ML, Microsoft Azure ML or Kaggle. However, in a recent study [21] found that the same experiment executed on different platforms leads to different results. This suggests that still a lot of research is needed until out-of-the-box reproducibility can be provided. However, a systematic overview of the literature on ML reproducibility is still missing, especially with respect to the barriers and drivers of reproducibility that can be found in the literature. An example of a driver could be code sharing or hosting reproducibility tracks/challenges at scientific conferences [16]. One example for this is the reproducibility track at the European Conference on Information Retrieval (ECIR) [28, 31] With respect to potential barriers, it is still not clear to what extent the use of ML could even fuel reproducibility issues [17], e.g., via bad ML practices such as data leakage [26]. This work aims to provide an overview of the situation and identify the different drivers and barriers present. This should allow for a better understanding of the following three aspects: * The situation of ML reproducibility in different research fields (see Section 2). * Reproducibility issues that exist in research fields applying ML, and the barriers that cause these issues (see Section 3). * The drivers that support ML reproducibility, including different tools, practices, and interventions (see Section 4). ### Degrees of Reproducibility According to [20], there are three degrees of reproducibility in ML, which can be seen in Table 1. \begin{table} \begin{tabular}{l l} **Type** & **Requirement** \\ (R1) Experiment Reproducibility & The same implementation (including same software versions, hyperparameters, etc.) 
of the ML method (i.e., the algorithm) must produce (exactly) the same results when using the same training and test data. If a rerun produces different results, then only the hardware where the ML experiment was reproduced could be the reason for this difference. \\ (R2) Data Reproducibility & An alternative implementation of the ML method must produce (almost) the same results when executed using the same data. If there are differences in the results, then probably differences in the concrete implementation (e.g., different versions of a library) are the reasons for this. \\ (R3) Method Reproducibility & An alternative implementation of the ML method executed on different data must produce the same results (or at least findings). \\ \end{tabular} \end{table} Table 1: **Different degrees of reproducibility according to [20]** We see that R1 is concerned with the exact reproduction of results, i.e., output, over multiple runs of the same ML method implementation and data. This is also often referred to as computational reproducibility. R2, however, is a bit more general: the ML method may be implemented differently, but it should still be able to produce almost the same results given the same data. This "Data Reproducibility" requirement makes sure that the information drawn from the data is consistent and not too heavily dependent on minor implementation differences. R3 generalizes this even more and is only concerned with the findings. Relying only on the use of the same ML method, the findings should always be consistent, no matter the exact implementation or exact data. Accordingly, R3 leads to the highest form of generalizability but also to the weakest form of reproducibility. This is conversely true for R1, which leads to the highest form of reproducibility and the lowest form of generalizability. This interplay between generalizability and reproducibility of the different degrees can be seen in Figure 1. Furthermore, the structure of Figure 1 gives information about what building blocks the different degrees of reproducibility are concerned with. Thus, R3 is only concerned with the ML method, R2 is concerned with the ML method and data, and R1 requires all three building blocks: the ML method, the data, and the experiment. This is where the names of the degrees come from. Figure 1: **Degrees of reproducibility**. Adapted from [20] ### Reproducibility versus Replicability The two terms Reproducibility and Replicability are often used interchangeably in science, even though the difference is often crucial. For computer science and ML, the Association for Computing Machinery (ACM) distinguishes the terms in the following way: * Reproducibility - the results can be obtained by a different team with the same experimental setup * Replicability - the results can be obtained by a different team with a different experimental setup These definitions harmonize with the definitions seen in the literature [6]. Comparing the definitions to the different degrees of reproducibility coined by [20], we can see that degree R1 is somewhat similar to what is here referred to as Reproducibility, and both R2 and R3 can be seen as similar to what is referred to as Replicability. For the purpose of this work, the definitions concerning the three degrees of reproducibility [20] will be used. ## 2 ML Reproducibility in Different Research Fields We mainly focus on two research fields in which ML reproducibility is discussed: (i) Computer Science, and (ii) health / life science.
Additionally, we mention a few other research fields, which have seen similar trends in connection with the reproducibility crisis. ### Computer Science In Computer Science, reproducibility is mainly discussed in two specific sub-fields of ML i.e., Deep learning and Reinforcement learning, and in two application fields of ML i.e., Natural language processing and Recommender systems. #### 2.1.1 Deep learning Neural networks are widely used in ML and can be applied to both supervised and unsupervised learning tasks. Neural networks, however, are known to be inherently non-deterministic and produce different results on multiple reruns, due to the many sources of randomness during training [2]. Not being able to get the same results using the exact same code and data is a big challenge to reproducibility, specifically w.r.t. degree R1. These different outcomes, however, may not necessarily be statistically different, such that the same conclusions can still be drawn. This is a distinction, researchers in the area of using ML for imaging have come to [37]. Their research showed that while the performance of Convolutional Neural Networks (CNN) on image segmentation shows slightly different results on multiple reruns, the obtained results are not statistically different, and the findings are robust. In order to counteract the effect of different outputs on multiple reruns in general, the idea of controlling the sources of randomness has been proposed [2]. To add to this, [42] found that the concrete versions of deep learning frameworks, such as PyTorch or TensorFlow, can have a big impact on performance, in a way where upgrading the version of a model being used can drastically increase or decrease the performance. #### 2.1.2 Reinforcement learning Reinforcement Learning (RL) is the subfield of ML, where reproducibility issues are seemingly discussed the most. RL is especially susceptible to reproducibility issues, partially because of additional sources of non-determinism in the learning process, which other areas are not subjected to [32]. To overcome this obstacle, frameworks, which provide reproducible environments, are commonly used in reinforcement learning research [10][15][46]. These frameworks act as testbeds where different RL algorithms can be evaluated in a common environment. Additionally, different papers have outlined the implementation of deterministic reinforcement learning algorithms, by controlling all the sources of non-determinism [33][1]. Furthermore, [27] propose to standardize an evaluation pipeline, which can be used for reproducible benchmarking of different reinforcement algorithms, similar to competitions like Pommerman and the Learning to Run Challenge. #### 2.1.1 Natural language processing A review of the status quo in terms of reproducibility in Natural language processing (NLP) by [8] highlights some core issues, which overlap with the ones mentioned for neural networks. Even simple rerunning of code can yield different results, again undermining reproducibility of degree R1. When trying to reproduce NLP research, it was evident that even minor changes to parameter settings, such as rare-word thresholds, can drastically change performance. One striking finding, among others, was that whenever the results of a reproduction deviated from the original reported results, the reproduced version performed worse than the original. 
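To make concrete what "controlling the sources of randomness" [2] can look like in practice for rerun variance of the kind just described, here is a minimal sketch for a PyTorch-based experiment; the choice of framework and the specific flags are illustrative assumptions on our part, not something prescribed by the cited studies:

```python
import os
import random
import numpy as np
import torch

def make_deterministic(seed: int = 0) -> None:
    """Pin the usual sources of randomness for a single-process PyTorch run (degree R1)."""
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some deterministic CUDA kernels
    random.seed(seed)                                  # Python's built-in RNG
    np.random.seed(seed)                               # NumPy's global RNG
    torch.manual_seed(seed)                            # seeds the CPU and all GPU generators
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False             # disable non-deterministic autotuning
    torch.use_deterministic_algorithms(True)           # raise an error on non-deterministic ops

make_deterministic(42)
```

Even with all of this pinned, results can still differ across hardware, drivers and library versions, which is exactly the framework-version effect noted above.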
A different paper by [14] tried to tackle these performance benchmarking issues and proposed a statistically sound replicability analysis framework, where NLP algorithms are benchmarked against each other using multiple different datasets. This is done in a way that not a specific dataset is picked for evaluation, on which the algorithm performs well, but multiple datasets are used to measure performance. #### 2.1.2 Recommender systems Recommender systems research has also been affected by the reproducibility crisis. The research area has observed that a stagnation in progress could happen due to the difficulty of benchmarking new solutions against existing solutions. This has created a form of "phantom progress" where it is not evident whether proposed new methods actually perform better than traditional ones [13]. A study by [7] has evaluated the reasons for this and proposed solutions to the problem, which heavily overlap with the proposals of other research areas, e.g., modern publication practices that foster the use of reproducibility frameworks, and the establishment of best-practice guidelines. ### Health and Life Science There is a debate that the widespread adoption and use of ML has fueled reproducibility issues in health and life science [17]. Accordingly, domain-specific research has been done to address and handle these reproducibility problems. While health and life science may have specific problems and solutions for reproducibility, a review of the state-of-the-art is important to come to a conclusion about reproducibility in ML as a whole. Areas of health / life science making use of ML (often coined ML4H) have had a lot of trouble with missing data when trying to reproduce a research result. Data in the medical area is often private and cannot be publicized nor shared [6][30]. Importantly, however, health and medical science are fields where reproducibility is of critical importance, since the verification of results is important before ML results can be used clinically [6]. To tackle the problem and overcome the reproducibility crisis in health / life science, a solution has been proposed by [30]. This solution addresses three main stakeholders: the data providers, the journals / conferences and the ML4H research community. The idea is that the three stakeholders should interact in rigorous and open collaboration. This solution, however, relies on the collaborative sharing of data, which still has privacy issues connected to it. Possible solutions to this have also been mentioned by [30], such as (i) Privacy-Preserving Analysis techniques or specific data collection regimes, e.g., the Verily Project Baseline. In general, this proposal follows the ideas of the FAIR data principles (findability, accessibility, interoperability, and reusability) and appeals that the scientific data of the area should adhere to these FAIR principles [12]. In addition to this, many domain-specific tools, such as NiLearn, Clinica or OSF have been proposed to help with making the research more reproducible [12]. ### Other Research Fields Apart from computer science and health / life science, many different research fields have benefitted tremendously from ML, but, however, have recently also faced issues regarding reproducibility. Most of these reproducibility issues stem either from data leakage [5] or the lack of computational reproducibility. 
Across different research fields applying ML, the need for reproducibility is discussed in literature, a few of which are highlighted below: #### 2.3.1 Chemistry ML is used very commonly to interpret patterns in data for chemistry. To keep reproducibility issues at a minimum, best practices have been proposed by [3], which introduce guidelines to follow and inform about best practices, concerning, e.g., code and data sharing, and data leakage. #### 2.3.2 Materials science ML models are used for the informed design of new materials. Recently, however, these workflows have increasingly faced reproducibility issues, especially effects of randomness during training and lack of reproducibility across platforms and versions [39]. #### 2.3.3 Genomics A systematic review by [5] has found that reported results are often inflated in recent research and are hard to evaluate, since data leakage is a prevalent issue in the research field. Galaxy is a biomedical research platform, which is used in genomics and biomedical research for running analyses and aims to improve reproducibility [18]. #### 2.0.2 Satellite imaging All kinds of ML techniques, e.g., unsupervised learning techniques, supervised learning techniques and deep learning are used within image segmentation for satellite imaging. The Monte-Carlo cross-validation is often used to verify new methods, however has been found to be very susceptible to data leakage [34], which negatively impacts reproducibility. ## 3 Barriers in ML Reproducibility When it comes to the different barriers associated with reproducibility in ML, it is important to make a distinction between the different degrees of reproducibility. Some barriers only affect certain degrees of reproducibility. Computational problems, as an example, which prevent researchers from reproducing the exact same results using the same code and data is only a concern for reproducibility of degree R1. On the contrary, degrees R2 and R3 are more concerned with the findings and conclusions than with the exact outputs. ### Computational Problems Recent studies have shown, that sharing of code and data alone is not enough to ensure reproducibility even of degree R1. In some cases, not even the assistance of the original author was enough to reproduce the original results [22]. This barrier of computational reproducibility can be attributed to a few factors, which are discussed in the following. #### 3.1.1 Inherent nondeterminism The inherent nondeterminism in ML is a major reason why reproducibility of degree R1 cannot be easily achieved. Even if both the data and code used in an experiment are known, the pseudo-random numbers generated throughout the training of the ML model can heavily alter its results [2]. Because of this, fixed random number seeds can be vital to the reproducibility of ML. #### 3.1.2 Environment differences Studies have shown that both hardware differences, such as different GPUs or CPUs, as well as different compiler settings can result in different computation outcomes [23]. Furthermore, a comparison between the same ML algorithm with fixed random seeds that was executed using PyTorch and TensorFlow also results in different results [39]. Additionally, a survey, which was conducted by [21] to investigate whether different ML platforms, such as OpenML or Kaggle, provide out-of-the-box reproducibility, also uncovered reproducibility issues. ### Missing Data and Code A study by [25] found that published ML research is often not accompanied by available data and code. 
### Missing Data and Code A study by [25] found that published ML research is often not accompanied by available data and code. Only about one third of researchers share their data, and even fewer share their source code. This can have many reasons, such as private data or code that itself builds on unpublished code. Furthermore, the problem may also be attributed to the increasing pressure on researchers to publish quickly, which in turn leaves little time to polish the code and decreases the willingness to release it. ### Methodological Problems When computational problems are handled and code and data are available, reproducibility of degree R1 for concrete experiments should, in theory, be possible. However, in reality, the results are often based on methodological errors made throughout the experiment. One well-known methodological error is data leakage, which can come in different forms [26]. Data leakage has recently become a widespread issue in research applying ML [26]. This can be attributed to the increasing number of non-experts using ML in various research fields [17], which is fueled by the ease of application of auto-ML libraries and no-code off-the-shelf AI tools. In essence, data leakage happens when data that the ML model should not be trained on leaks into the training process. Data leakage can be categorized into three subcategories. #### 3.3.1 L1 - No clean train/test split The subcategory L1 summarizes the most obvious cases of data leakage and is further split into 4 variants: (1) the training data and test data are not split at all, (2) the test data is also used for feature selection, (3) the test data is also used for imputation during preprocessing, and (4) there are duplicates in the dataset, which occur in both the test and training data. #### 3.3.2 L2 - Use of non-legitimate data Subcategory L2 is concerned with training data that contains non-legitimate features, for example, when the use of anti-hypertensive drugs is used to predict hypertension [26]. This data is non-legitimate, since it would not be available in a real-world scenario and cannot realistically be used to predict hypertension for a new patient. Features that result in data leakage of subcategory L2 are often synonymous with the target variable. Generally, deciding whether a feature is non-legitimate for a specific task requires a lot of domain knowledge. #### 3.3.3 L3 - Test set is not drawn from distribution of scientific interest L3 consists of three different types of data leakage. Firstly, temporal leakage concerns ML models that try to predict future outcomes. In that case, temporal leakage happens when some training samples have a later timestamp than samples in the test set. This would mean that the ML model is being trained on information coming "from the future". Accordingly, the ML model makes use of information which it cannot have in a realistic scenario, and therefore performs better than it would in reality. Secondly, the train and test data need to be independent of each other. This means that there cannot be samples in both the train and test data which are, e.g., from the same person. Thirdly, the test set should not be drawn selectively. An example of this is when the model is only evaluated on specific data on which it performs well, and these results are then generalized to all use cases. Doing this would result in a selection bias.
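To illustrate the difference between a leaky and a leakage-free setup, the following is a minimal, self-contained sketch. It is our own illustration, not code from the cited works; the synthetic data and the scikit-learn-based pipeline are assumptions made for the example. The first variant corresponds to L1 variant (3), where preprocessing is fit on data that includes the test set, while the second fits all preprocessing on the training split only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                          # synthetic features (illustrative)
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)    # synthetic labels (illustrative)

# --- Leaky variant (L1, variant (3)): preprocessing sees the test data -------
X_scaled = StandardScaler().fit_transform(X)             # scaler fit on ALL data -> leak
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky_score = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# --- Leakage-free variant: split first, fit preprocessing on training only ---
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)                                    # scaler fit inside the pipeline, on X_tr only
clean_score = model.score(X_te, y_te)

print(f"leaky accuracy: {leaky_score:.3f}, clean accuracy: {clean_score:.3f}")
```

For temporal leakage (L3), the analogous fix is to split by time, training only on samples whose timestamps precede every test sample, rather than splitting randomly.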
### Structural Problems #### 3.4.1 Privacy concerns Availability of data is key for the reproducibility of any supervised or unsupervised ML algorithm. This holds even for reproducibility of degree R3, where different data should yield similar conclusions; in this case, it is important to have some representative dataset which holds comparable information. However, the public release of data is not always an option, since the data can be private, such as health-related patient data or user data collected by companies. To counteract this, privacy-preserving technologies have been proposed [30]. Such techniques can work in a few different ways, e.g., by allowing ML models to be trained without sharing the data, or by working with encrypted or simulated data. #### 3.4.2 Competitive advantage Comparing the status quo of ML reproducibility in academic research versus industrial applications gives interesting insights [19]. While both academia and industry still have a lot of work to do in terms of supporting reproducibility, the main reasons why reproducibility is not ensured differ between the two areas. Academic research may simply lack sufficient incentives or rewards for the increased effort of making sure the results are reproducible. In contrast, in industrial applications the problem is often that providing reproducibility may come with a decreased competitive advantage. ## 4 Drivers for ML Reproducibility Having reviewed the status quo in the different research fields and the identified barriers, we now present an outline of the main proposals / drivers to address reproducibility. These drivers could be of critical help in overcoming the reproducibility crisis. ### Standardized Environments Container software such as Docker could be used to standardize the ML environment and allow for simple sharing of the container, i.e., environment and code, with other researchers [9]. Furthermore, Code Ocean is a computational research platform which has been specifically developed and tailored towards the needs of researchers. Because of this, it allows researchers to focus on the research questions instead of the standardization of environments [11]. ### Checklists and Guidelines The idea of checklists and guidelines has been applied effectively in the past, e.g., for safety-critical systems. Building on that, [38] proposed an ML reproducibility checklist, which should ensure the inclusion of the information necessary for reproducibility. This checklist has been found promising during the NeurIPS 2019 reproducibility challenge, and has been suggested as best practice by researchers in different fields [3]. In addition to using checklists to make sure papers meet specific requirements (such as proper scientific method or the achievement of certain reproducibility standards), [29] propose to use Large Language Models (LLMs) to verify these checklists and to review papers in general. By doing this, both the efficiency and the accuracy of verifying checklists can be improved, thus further enhancing the viability of checklists. [29] found that GPT-4 stood out among other LLMs such as Bard, Alpaca or LLaMa: it is especially capable of this task and achieved an accuracy of 86.6% in verifying the fulfillment of different checklist questions. ### Model Info Sheets Similar to checklists, model info sheets are questionnaires specifically tailored towards handling data leakage, i.e., its detection and prevention.
[26] propose these so-called model info sheets, which should be filled in and published together with the work, in order to allow other researchers to quickly verify the data used to train the ML model. To do this, the model info sheets require the authors to answer detailed questions about the data and the respective train/test splits, which are aimed towards each type of data leakage [26]. Model info sheets seem like a promising solution for many types of data leakage, given that research by non-experts in ML often falls prey to them [26]. Additionally, the model info sheets are a low-effort solution for the problem they are trying to solve. ### Awareness Awareness of the reproducibility crisis, and informing researchers about ways to support reproducibility, can be a very strong factor in overcoming the crisis, as also shown by the reproducibility crisis in psychology [44]. Many different initiatives have been started to increase awareness of the reproducibility crisis, a few of which are highlighted below: * The ReScience journal publishes peer-reviewed papers that specifically attempt to reproduce the results of original publications. These reproductions are then made available on GitHub for other researchers [40]. * ReproducedPapers.org is a website with a similar idea but a focus on teaching. It is an online repository for reproducible papers. The ReproducedPapers initiative also aims to include the reproduction of at least one ML paper in the curriculum of ML students [45]. * Reproducibility challenges, where several researchers try to reproduce many recent publications in parallel, are held yearly. These challenges allow for an analysis of the success rate of reproduction and can be used to evaluate progress over multiple years [38]. ### Journals To increase awareness and enforce minimum reproducibility standards, journals could adjust their requirements for publication. Recently, it has already become standard procedure in many journals to require data and/or code availability for publication [38][36][22]. Furthermore, to address the problem of researchers deliberately tweaking results for a higher likelihood of being published, journals could introduce the possibility of preregistration. Preregistration allows researchers to submit their research plan ahead of time and register it for future publication. The journal decides whether to publish the paper based on the research plan, and researchers then do not have to worry about the outcome of the experiments, which should improve the credibility of the findings [43][35]. The ACM TORS journal (Transactions on Recommender Systems) is a good example of promoting reproducibility in ML. Firstly, it allows for preregistration, and secondly, it also allows for the publication of "reproducibility papers", which are specifically concerned with reproduction studies and tools for enhancing reproducibility. ## 5 Conclusion and Future Work Almost all areas of ML, as well as research fields using ML (e.g., health and life science), have seen signs of a reproducibility crisis. Accordingly, this reproducibility crisis has received increasing attention recently and is being actively discussed in research. It is time to tackle this reproducibility crisis, which first requires a common understanding of the definitions and the different degrees of reproducibility. This work gave an overview of the current state of the art in different research areas.
Additionally, it gave an insight into the different barriers and drivers for ML reproducibility. For the continuation of this work, we will dive deeper into the concepts outlined so far and will compare them across different research fields. With this, we hope that a decision on the viability of different solutions can be supported. #### 5.0.1 Acknowledgements. This research is supported by the Horizon Europe project TIER2 under grant agreement No 101094817.
2305.09008
Criteria for supersolvability of saturated fusion systems
Let $p$ be a prime number. A saturated fusion system $\mathcal{F}$ on a finite $p$-group $S$ is said to be supersolvable if there is a series $1 = S_0 \le S_1 \le \dots \le S_m = S$ of subgroups of $S$ such that $S_i$ is strongly $\mathcal{F}$-closed for all $0 \le i \le m$ and such that $S_{i+1}/S_i$ is cyclic for all $0 \le i < m$. We prove some criteria that ensure that a saturated fusion system $\mathcal{F}$ on a finite $p$-group $S$ is supersolvable provided that certain subgroups of $S$ are abelian and weakly $\mathcal{F}$-closed. Our results can be regarded as generalizations of purely group-theoretic results of Asaad.
Fawaz Aseeri, Julian Kaspczyk
2023-05-15T20:45:02Z
http://arxiv.org/abs/2305.09008v1
# Criteria for supersolvability of saturated fusion systems ###### Abstract Let \(p\) be a prime number. A saturated fusion system \(\mathcal{F}\) on a finite \(p\)-group \(S\) is said to be _supersolvable_ if there is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{m}=S\) of subgroups of \(S\) such that \(S_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq m\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<m\). We prove some criteria that ensure that a saturated fusion system \(\mathcal{F}\) on a finite \(p\)-group \(S\) is supersolvable provided that certain subgroups of \(S\) are abelian and weakly \(\mathcal{F}\)-closed. Our results can be regarded as generalizations of purely group-theoretic results of Asaad [3]. 0 Footnote 0: Mathematics Subject Classification (2020): 20D10, 20D20. 0 Footnote 0: Mathematics Subject Classification (2020): 20D10, 20D20. ## 1 Introduction In finite group theory, the term "fusion" refers to conjugacy relations between \(p\)-elements and \(p\)-subgroups. The study of fusion in finite groups has a long history, and many results concerning fusion in finite groups had a significant impact on finite group theory. Some well-known examples are Burnside's fusion theorem [15, Lemma 5.12], Frobenius' \(p\)-nilpotency criterion [15, Theorem 5.26], Alperin's fusion theorem [1] and Glauberman's \(Z^{*}\)-theorem [11]. A modern approach to study problems concerning fusion in finite groups is the theory of fusion systems. The standard examples of fusion systems are the fusion categories of finite groups over \(p\)-subgroups. Given a prime number \(p\), a finite group \(G\) and a \(p\)-subgroup \(S\) of \(G\), the _fusion category_ of \(G\) over \(S\) is defined to be the category \(\mathcal{F}_{S}(G)\) given as follows: The objects of \(\mathcal{F}_{S}(G)\) are the subgroups of \(S\), the morphisms in \(\mathcal{F}_{S}(G)\) are the group homomorphisms between subgroups of \(S\) induced by conjugation in \(G\), and the composition of morphisms in \(\mathcal{F}_{S}(G)\) is the usual composition of group homomorphisms. Abstract fusion systems can be regarded as a generalization of this concept. Given a prime number \(p\) and a finite \(p\)-group \(S\), a _fusion system_ on \(S\) is a category \(\mathcal{F}\) whose objects are the subgroups of \(S\) and whose morphisms behave as if they are induced by conjugation inside a finite group containing \(S\) as a \(p\)-subgroup (see [4, Part I, Definition 2.1]). A fusion system is said to be _saturated_ if it satisfies certain additional axioms (see [4, Part I, Definition 2.2]). Any fusion category of a finite group over a Sylow subgroup is saturated (see [4, Part I, Theorem 2.3]), but not every saturated fusion system appears as the fusion category of a finite group over a Sylow subgroup (see [4, Part III, Section 6]). If \(S\) is a Sylow \(p\)-subgroup of a finite group \(G\) for some prime \(p\), then we refer to \(\mathcal{F}_{S}(G)\) as the \(p\)_-fusion system_ of \(G\). The book [4] provides a detailed introduction to the theory of fusion systems, and the reader is asked to consult that book for any definitions on fusion systems we do not explain here. For unfamiliar definitions on groups, the reader is referred to [14, 15]. This paper is concerned with the concept of a supersolvable saturated fusion system introduced in [24]. 
Given a prime \(p\) and a finite \(p\)-group \(S\), a saturated fusion system \(\mathcal{F}\) on \(S\) is said to be _supersolvable_ if there is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{m}=S\) of subgroups of \(S\) such that \(S_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq m\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<m\) (see [24, Definition 1.2]). By [24, Proposition 1.3], for any prime \(p\), the supersolvable saturated fusion systems on finite \(p\)-groups are precisely the \(p\)-fusion systems of supersolvable finite groups, and they are also precisely the \(p\)-fusion systems of \(p\)-supersolvable finite groups. After their introduction in [24], supersolvable saturated fusion systems (and related concepts) were further studied in the papers [6, 22]. The goal of this paper is to obtain criteria for supersolvability of saturated fusion systems. Our results are inspired from a current line of research in finite group theory. Namely, there is currently very active research on subgroup embedding properties, and a problem of particular interest is to study the structure of a finite group \(G\) under the assumption that some given subgroups of \(G\) satisfy a given embedding property. In this context, many results of the following form were obtained: Given a fixed prime \(p\) and a finite group \(G\), it is assumed that all (or at least sufficiently many) \(p\)-subgroups of \(G\) with some fixed order satisfy a certain embedding property. Sometimes, a number of additional conditions are assumed, and the conclusion usually is that \(G\) is \(p\)-supersolvable or \(p\)-nilpotent. As an example, we mention a result of Berkovich and Isaacs. Given a prime \(p\), a natural number \(e\geq 3\) and a finite group \(G\) with a noncyclic Sylow \(p\)-subgroup of order exceeding \(p^{e}\), they proved that \(G\) is \(p\)-supersolvable if any noncyclic subgroup of \(G\) with order \(p^{e}\) is normal in \(G\) (see [8, Theorem C]). Other results of this kind were obtained, for example, in the papers [2, 3, 12, 17, 19, 26]. Our principal motivation is to prove supersolvability criteria of a similar spirit for saturated fusion systems. Indeed, given a prime \(p\) and a saturated fusion system \(\mathcal{F}\) on a finite \(p\)-group \(S\), we will prove criteria ensuring that \(\mathcal{F}\) is supersolvable provided that all subgroups of \(S\) with some fixed order are "suitably embedded" in \(S\) with respect to \(\mathcal{F}\). More precisely, we will assume that all subgroups of \(S\) with some fixed order are weakly \(\mathcal{F}\)-closed and that sufficiently many of them are abelian. Our results generalize some purely group-theoretic results of Asaad [3], and before stating our results, we consider the corresponding results from [3]. First, let us recall some definitions. A subgroup \(H\) of a group \(G\) is said to be _pronormal_ in \(G\) if \(H\) and \(H^{g}\) are conjugate in \(\langle H,H^{g}\rangle\) for each \(g\in G\). By [19, Theorem 4.3], if \(S\) is a Sylow \(p\)-subgroup of a finite group \(G\) for some prime \(p\), then a subgroup \(Q\) of \(S\) is pronormal in \(G\) if and only if \(Q\) is weakly \(\mathcal{F}_{S}(G)\)-closed. Following Asaad [3], we say that a subgroup \(H\) of a group \(G\) is _weakly pronormal_ in \(G\) if there is a subgroup \(K\) of \(G\) such that \(G=HK\) and such that \(H\cap K\) is pronormal in \(G\). 
Note that, if \(S\) is a Sylow \(p\)-subgroup of a finite group \(G\) for some prime \(p\), a subgroup \(Q\) of \(S\) is weakly pronormal in \(G\) if and only if there is a subgroup \(K\) of \(G\) such that \(G=QK\) and such that \(Q\cap K\) is weakly \(\mathcal{F}_{S}(G)\)-closed. The following theorem is one of the main results of [3]. **Theorem 1.1**.: _([3, Theorem 1.3]) Let \(G\) be a nontrivial finite group, \(p\) be the smallest prime dividing \(|G|\) and \(S\) be a noncyclic Sylow \(p\)-subgroup of \(G\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) or \(p|D|\) is abelian and weakly pronormal in \(G\). Then \(G\) is \(p\)-nilpotent._ Note that, because of Burnside's \(p\)-nilpotency criterion [15, Corollary 5.14], the condition in Theorem 1.2 that \(S\) is noncyclic is not necessary. We will prove the following generalization of Theorem 1.1. **Theorem A**.: _Let \(G\) be a finite group, \(p\) be a prime and \(S\) be a Sylow \(p\)-subgroup of \(G\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) or \(p|D|\) is abelian and weakly pronormal in \(G\). Then \(\mathcal{F}_{S}(G)\) is supersolvable._ By [24, Theorem 1.9 (a)], if \(G\) is a finite group and if \(S\) is a Sylow \(p\)-subgroup of \(G\) for some prime \(p\) with \((|G|,p-1)=1\), then \(G\) is \(p\)-nilpotent provided that \(\mathcal{F}_{S}(G)\) is supersolvable. Of course, the condition \((|G|,p-1)=1\) is satisfied when \(p\) is the smallest prime divisor of \(|G|\), and hence, Theorem 1.1 is covered by Theorem A. When the hypotheses of Theorem A are satisfied, it is not necessarily true that \(G\) is \(p\)-supersolvable. For example, let \(G:=A_{5}\times A_{5}\) and \(S\) be a Sylow \(5\)-subgroup of \(G\). Then \(S\cong C_{5}\times C_{5}\) is abelian. Also, any subgroup of \(S\) with order \(5\) and \(S\) itself are complemented in \(G\) and hence weakly pronormal in \(G\). However, \(G\) is not \(5\)-supersolvable. Our further results are supersolvability criteria for abstract saturated fusion systems, and they are motivated by the following result of Asaad [3]. **Theorem 1.2**.: _([3, Corollary 3.1]) Let \(G\) be a nontrivial finite group, \(p\) be the smallest prime dividing \(|G|\) and \(S\) be a noncyclic Sylow \(p\)-subgroup of \(G\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and pronormal in \(G\). If \(S\) is a nonabelian \(2\)-group, suppose moreover that any subgroup of \(S\) with order \(2|D|\) is abelian and pronormal in \(G\). Then \(G\) is \(p\)-nilpotent._ Note that, because of Burnside's \(p\)-nilpotency criterion [15, Corollary 5.14], the condition in Theorem 1.2 that \(S\) is noncyclic is not necessary. Let \(G\) be a nontrivial finite group, \(p\) be the smallest prime divisor of \(|G|\) and \(S\) be a Sylow \(p\)-subgroup of \(G\). As remarked before Theorem 1.1, a subgroup \(Q\) of \(S\) is pronormal in \(G\) if and only if \(Q\) is weakly \(\mathcal{F}_{S}(G)\)-closed. Also, \(G\) is \(p\)-nilpotent if and only if \(\mathcal{F}_{S}(G)\) is supersolvable (indeed, if \(\mathcal{F}_{S}(G)\) is supersolvable, then \(G\) is \(p\)-nilpotent by the remark following Theorem A, and conversely, if \(G\) is \(p\)-nilpotent, then \(\mathcal{F}_{S}(G)=\mathcal{F}_{S}(S)\) is supersolvable). 
Consequently, we can reformulate Theorem 1.2 as follows: _Let \(G\) be a nontrivial finite group, \(p\) be the smallest prime dividing \(|G|\) and \(S\) be a noncyclic Sylow \(p\)-subgroup of \(G\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and weakly \(\mathcal{F}_{S}(G)\)-closed. If \(S\) is a nonabelian \(2\)-group, suppose moreover that any subgroup of \(S\) with order \(2|D|\) is abelian and weakly \(\mathcal{F}_{S}(G)\)-closed. Then \(\mathcal{F}_{S}(G)\) is supersolvable._ In view of this reformulation of Theorem 1.2, the following question naturally arises. **Question 1.3**.: _Let \(p\) be a prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and weakly \(\mathcal{F}\)-closed. If \(S\) is a nonabelian \(2\)-group, suppose moreover that any subgroup of \(S\) with order \(2|D|\) is abelian and weakly \(\mathcal{F}\)-closed. Is it true in general that \(\mathcal{F}\) is supersolvable?_ Our next result positively answers Question 1.3 for the case \(p=2\). **Theorem B**.: _Let \(\mathcal{F}\) be a saturated fusion system on a finite \(2\)-group \(S\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and weakly \(\mathcal{F}\)-closed. If \(S\) is nonabelian, suppose moreover that any subgroup of \(S\) with order \(2|D|\) is abelian and weakly \(\mathcal{F}\)-closed. Then \(\mathcal{F}\) is supersolvable._ From [24, Proposition 1.3], we see that a saturated fusion system \(\mathcal{F}\) on a finite \(2\)-group \(S\) is supersolvable if and only if \(\mathcal{F}\) is nilpotent, i.e. if and only if \(\mathcal{F}=\mathcal{F}_{S}(S)\). Therefore, we could replace the word "supersolvable" by the word "nilpotent" in Theorem B. The following result gives a positive answer to Question 1.3 for the case that \(p\) is odd and that \(D\) is maximal in \(S\). In fact, it shows that even more is true. **Theorem C**.: _Let \(p\) be an odd prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). Suppose that any maximal subgroup of \(S\) is weakly \(\mathcal{F}\)-closed. If \(S\) is not cyclic, suppose moreover that \(S\) has more than one abelian maximal subgroup. Then \(\mathcal{F}\) is supersolvable._ In the statement of Theorem C, it would not be enough to assume that \(S\) has _at least_ one abelian maximal subgroup when \(S\) is not cyclic. This can be seen from the following example. **Example 1.4**.: Let \(G\) be the group indexed in GAP [10] as SmallGroup(324,160), \(S\) be a Sylow \(3\)-subgroup of \(G\) and \(\mathcal{F}:=\mathcal{F}_{S}(G)\). Then \(S\) has precisely four maximal subgroups, precisely one of them is abelian, and all of them are weakly \(\mathcal{F}\)-closed. However, \(\mathcal{F}\) is not supersolvable. Indeed, since \(G\) is solvable, the supersolvability of \(\mathcal{F}\) would imply that \(G\) is \(3\)-supersolvable (see [24, Theorem 1.9 (b)]), but one can check by using GAP [10] that \(G\) is not \(3\)-supersolvable. The next result gives a complete positive answer to Question 1.3 for the case that \(p\) is odd. **Theorem D**.: _Let \(p\) be an odd prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). 
Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and weakly \(\mathcal{F}\)-closed. Then \(\mathcal{F}\) is supersolvable._ We remark that our proof of Theorem D indirectly relies on the classification of finite simple groups. More precisely, in our proof of Theorem D, we will apply [16, Theorem 3.5], and the classification of finite simple groups was used in the proof of that result. We finish the introduction with the following open question that naturally arises in view of Theorems C and D. **Question 1.5**.: _Let \(p\) be an odd prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is weakly \(\mathcal{F}\)-closed. If \(S\) is not cyclic, suppose moreover that \(S\) has more than one abelian subgroup with order \(|D|\). Is it true in general that \(\mathcal{F}\) is supersolvable?_ ## 2 Preliminaries In this section, we collect some lemmas needed for the proofs of our main results. **Lemma 2.1**.: _Let \(p\) be a prime number, \(\mathcal{F}\) be a fusion system on a finite \(p\)-group \(S\) and \(Q\leq R\leq S\). Suppose that \(Q\) is strongly \(\mathcal{F}\)-closed and that \(R\) is weakly \(\mathcal{F}\)-closed. Then \(R/Q\) is weakly \(\mathcal{F}/Q\)-closed._ Proof.: Let \(\psi:R/Q\to S/Q\) be a morphism in \(\mathcal{F}/Q\). We have to show that \(\psi(R/Q)=R/Q\). As an \(\mathcal{F}/Q\)-morphism, \(\psi\) is induced by an \(\mathcal{F}\)-morphism, i.e. there is a morphism \(\varphi\in\operatorname{Hom}_{\mathcal{F}}(R,S)\) such that \(\psi(rQ)=\varphi(r)Q\) for all \(r\in R\). Since \(R\) is weakly \(\mathcal{F}\)-closed, we have \(\psi(R/Q)=\varphi(R)Q/Q=RQ/Q=R/Q\), as required. **Lemma 2.2**.: _Let \(p\) be a prime number, \(\mathcal{F}\) be a fusion system on a finite \(p\)-group \(S\) and \(Q\leq R\leq S\). Suppose that \(Q\trianglelefteq\mathcal{F}\) and that \(R/Q\) is strongly \(\mathcal{F}/Q\)-closed. Then \(R\) is strongly \(\mathcal{F}\)-closed._ Proof.: Let \(P\) be a subgroup of \(R\) and \(\varphi:P\to S\) be a morphism in \(\mathcal{F}\). We have to show that \(\varphi(P)\leq R\). Since \(Q\) is normal in \(\mathcal{F}\) by hypothesis, \(\varphi\) extends to a morphism \(\psi\in\operatorname{Hom}_{\mathcal{F}}(PQ,S)\) with \(\psi(Q)=Q\). Let \[\overline{\psi}:PQ/Q\to S/Q,xQ\mapsto\psi(x)Q.\] Then \(\overline{\psi}\) is a morphism in \(\mathcal{F}/Q\). Since \(R/Q\) is strongly \(\mathcal{F}/Q\)-closed and \(PQ/Q\leq R/Q\), it follows that \(\overline{\psi}(PQ/Q)\leq R/Q\). So, we have \(\varphi(P)=\psi(P)\leq R\), as required. **Lemma 2.3**.: _Let \(p\) be a prime number, \(\mathcal{F}\) be a supersolvable saturated fusion system on a finite \(p\)-group \(S\) and \(Q\) be a weakly \(\mathcal{F}\)-closed subgroup of \(S\). Then \(Q\) is strongly \(\mathcal{F}\)-closed._ Proof.: Let \(P\) be a subgroup of \(Q\) and \(\varphi:P\to S\) be a morphism in \(\mathcal{F}\). We have to show that \(\varphi(P)\leq Q\). By [24, Proposition 2.3], \(S\) is normal in \(\mathcal{F}\). Hence, \(\varphi\) extends to an automorphism \(\psi\in\operatorname{Aut}_{\mathcal{F}}(S)\). Since \(Q\) is weakly \(\mathcal{F}\)-closed, we have \(\varphi(P)=\psi(P)\leq\psi(Q)=Q\), as required. 
**Lemma 2.4**.: _Let \(p\) be a prime number, \(\mathcal{F}\) be a supersolvable saturated fusion system on a finite \(p\)-group \(S\) and \(Q\) be a strongly \(\mathcal{F}\)-closed subgroup of \(S\). Then the following hold:_ 1. _There is a series_ \[1=S_{0}\leq S_{1}\leq\cdots\leq S_{m}=S\] _of subgroups of_ \(S\) _such that_ \(S_{i}\) _is strongly_ \(\mathcal{F}\)_-closed for all_ \(0\leq i\leq m\)_, such that_ \(S_{i+1}/S_{i}\) _is cyclic for all_ \(0\leq i<m\) _and such that_ \(S_{j}=Q\) _for some_ \(0\leq j\leq m\)_._ 2. \(\mathcal{F}/Q\) _is supersolvable._ Proof.: By [24, Proposition 1.3], there is a \(p\)-supersolvable finite group \(G\) with \(S\in\operatorname{Syl}_{p}(G)\) and \(\mathcal{F}=\mathcal{F}_{S}(G)\). Since \(S\) is normal in \(\mathcal{F}\) by [24, Proposition 2.3], we have \(\mathcal{F}=N_{\mathcal{F}}(S)=\mathcal{F}_{S}(N_{G}(S))\). Therefore, upon replacing \(G\) by \(N_{G}(S)\), we may (and will) assume that \(S\trianglelefteq G\). Since \(Q\) is strongly \(\mathcal{F}\)-closed and \(S\trianglelefteq G\), we have \(Q\trianglelefteq G\). Let \(1=H_{0}\leq H_{1}\leq\cdots\leq H_{n}=G\) be a chief series of \(G\) through \(Q\) and \(S\). Then \(S=H_{m}\) for some \(0\leq m\leq n\) and \(Q=H_{j}\) for some \(0\leq j\leq m\). Set \(S_{i}:=H_{i}\) for all \(0\leq i\leq m\). Then we have \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{m}=S\). For each \(0\leq i\leq m\), the subgroup \(S_{i}\) is strongly \(\mathcal{F}\)-closed since \(S_{i}\trianglelefteq G\). For each \(0\leq i<m\), the group \(S_{i+1}/S_{i}\) is cyclic of order \(p\) since it is a chief factor of the \(p\)-supersolvable group \(G\). Moreover, \(S_{j}=H_{j}=Q\). The proof of (1) is now complete. It remains to prove (2). We have \[1=S_{j}/Q\leq S_{j+1}/Q\leq\cdots\leq S_{m}/Q=S/Q.\] For each \(j\leq k\leq m\), we have \(S_{k}/Q\trianglelefteq G/Q\), and since \(\mathcal{F}/Q=\mathcal{F}_{S/Q}(G/Q)\), it follows that \(S_{k}/Q\) is strongly \(\mathcal{F}/Q\)-closed. Moreover, for each \(j\leq k<m\), the quotient \((S_{k+1}/Q)/(S_{k}/Q)\cong S_{k+1}/S_{k}\) is cyclic of order \(p\). It follows that \(\mathcal{F}/Q\) is supersolvable, and so the proof of (2) is complete. To state the next lemma, which will play a key role in the proofs of all our main results, we introduce the following notation. **Notation 2.5**.: Let \(p\) be a prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). We set \[\mathcal{E}_{\mathcal{F}}^{*}:=\{Q\leq S\mid Q\text{ is $\mathcal{F}$- essential, or $Q=S$}\}.\] **Lemma 2.6**.: _Let \(p\) be a prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). Suppose that, for each \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\), the fusion system \(N_{\mathcal{F}}(Q)\) is supersolvable. Then \(\mathcal{F}\) is supersolvable._ Proof.: Let \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\). Since \(N_{\mathcal{F}}(Q)\) is supersolvable, \(\operatorname{Aut}_{N_{\mathcal{F}}(Q)}(Q)=\operatorname{Aut}_{\mathcal{F}}(Q)\) is \(p\)-closed by [24, Proposition 1.3]. Hence, \(\operatorname{Out}_{\mathcal{F}}(Q)\) is \(p\)-closed. From [4, Proposition A.7 (c)], we see that a \(p\)-closed finite group cannot possess a strongly \(p\)-embedded subgroup. Consequently, \(\operatorname{Out}_{\mathcal{F}}(Q)\) does not possess a strongly \(p\)-embedded subgroup, and so \(Q\) is not \(\mathcal{F}\)-essential. It follows that \(\mathcal{E}_{\mathcal{F}}^{*}=\{S\}\). Applying [4, Part I, Proposition 4.5], we conclude that \(S\trianglelefteq\mathcal{F}\). 
The proof is now complete since \(N_{\mathcal{F}}(S)=\mathcal{F}\) is supersolvable by hypothesis. **Lemma 2.7**.: _Let \(p\) be a prime number, \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\) and \(Q\trianglelefteq\mathcal{F}\). Suppose that there is a series \(1=Q_{0}\leq Q_{1}\leq\cdots\leq Q_{n}=Q\) of subgroups of \(Q\) such that \(Q_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq n\) and such that \(Q_{i+1}/Q_{i}\) is cyclic for all \(0\leq i<n\). Suppose moreover that \(\mathcal{F}/Q\) is supersolvable. Then \(\mathcal{F}\) is supersolvable._ Proof.: Since \(\mathcal{F}/Q\) is supersolvable, there is a series \(Q=V_{0}\leq V_{1}\leq\cdots\leq V_{m}=S\) of subgroups of \(S\) such that \(V_{i}/Q\) is strongly \(\mathcal{F}/Q\)-closed for all \(0\leq i\leq m\) and such that \((V_{i+1}/Q)/(V_{i}/Q)\cong V_{i+1}/V_{i}\) is cyclic for all \(0\leq i<m\). By Lemma 2.2, \(V_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq m\). Set \(S_{i}:=Q_{i}\) for all \(0\leq i\leq n\) and \(S_{n+\ell}:=V_{\ell}\) for all \(1\leq\ell\leq m\). Then \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{n+m}=S\). Also, \(S_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq n+m\), and \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<n+m\). Consequently, \(\mathcal{F}\) is supersolvable. For the next lemma, we recall that the _supersolvable hypercentre_ of a finite group \(G\), denoted by \(Z_{\mathcal{U}}(G)\), is the largest normal subgroup of \(G\) such that every \(G\)-chief factor below it is cyclic. **Lemma 2.8**.: _Let \(p\) be a prime number and \(P\) be a nontrivial normal \(p\)-subgroup of a finite group \(G\). Suppose that \(P\) has a subgroup \(D\) with \(1<D<P\) such that every subgroup of \(P\) of order \(|D|\) is normal in \(G\). If \(P\) is a nonabelian \(2\)-group and \(|D|=2\), suppose moreover that every cyclic subgroup of \(P\) of order \(4\) is normal in \(G\). Then \(P\leq Z_{\mathcal{U}}(G)\)._ Proof.: This follows from [23]. **Lemma 2.9**.: _Let \(G\) be a finite group, \(p\) be a prime and \(S\) be a Sylow \(p\)-subgroup of \(G\). Suppose that, for any proper subgroup \(H\) of \(G\) with \(O_{p}(G)<S\cap H\) and \(S\cap H\in\operatorname{Syl}_{p}(H)\), the fusion system \(\mathcal{F}_{S\cap H}(H)\) is supersolvable. Suppose moreover that \(O_{p}(G)\leq Z_{\mathcal{U}}(G)\). Then \(\mathcal{F}_{S}(G)\) is supersolvable._ Proof.: Set \(\mathcal{F}:=\mathcal{F}_{S}(G)\), \(\overline{G}:=G/O_{p}(G)\) and \(\overline{\mathcal{F}}:=\mathcal{F}_{\overline{S}}(\overline{G})\). Since \(O_{p}(G)\leq Z_{\mathcal{U}}(G)\), there is a series \(1=U_{0}\leq U_{1}\leq\cdots\leq U_{n}=O_{p}(G)\) of subgroups of \(O_{p}(G)\) such that \(U_{i}\trianglelefteq G\) for all \(0\leq i\leq n\) and such that \(U_{i+1}/U_{i}\) is cyclic for all \(0\leq i<n\). In particular, \(U_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq n\). So, by Lemma 2.7, it suffices to show that \(\mathcal{F}/O_{p}(G)=\overline{\mathcal{F}}\) is supersolvable. This is clear when \(S=O_{p}(G)\), and so we assume that \(O_{p}(G)\neq S\). Let \(O_{p}(G)<Q\leq S\) such that \(\overline{Q}\) is fully \(\overline{\mathcal{F}}\)-normalized. Then \(N_{\overline{S}}(\overline{Q})=\overline{N_{S}(Q)}\) is a Sylow \(p\)-subgroup of \(N_{\overline{G}}(\overline{Q})=\overline{N_{G}(\overline{Q})}\). Hence, \(N_{S}(Q)\) is a Sylow \(p\)-subgroup of \(N_{G}(Q)\). Since \(O_{p}(G)<Q\), we have \(N_{G}(Q)<G\). 
Moreover, \(O_{p}(G)<Q\leq S\cap N_{G}(Q)\), and \(S\cap N_{G}(Q)=N_{S}(Q)\) is a Sylow \(p\)-subgroup of \(N_{G}(Q)\). So, by hypothesis, the fusion system \(N_{\mathcal{F}}(Q)=\mathcal{F}_{N_{S}(Q)}(N_{G}(Q))\) is supersolvable. Lemma 2.4 (2) implies that \(N_{\mathcal{F}}(Q)/O_{p}(G)=\mathcal{F}_{\overline{N_{S}(Q)}}(\overline{N_{G}( \overline{Q})})=N_{\overline{\mathcal{F}}}(\overline{Q})\) is supersolvable. Now, let \(O_{p}(G)\leq R\leq S\) such that \(\overline{R}\in\mathcal{E}_{\overline{\mathcal{F}}}^{*}\). Then \(\overline{R}\) is \(\overline{\mathcal{F}}\)-centric and fully \(\overline{\mathcal{F}}\)-normalized. Since \(\overline{S}\neq 1\), we have \(\overline{R}\neq 1\) and thus \(O_{p}(G)<R\), and the preceding paragraph implies that \(N_{\overline{\mathcal{F}}}(\overline{R})\) is supersolvable. Lemma 2.6 yields that \(\overline{\mathcal{F}}\) is supersolvable, as required. **Lemma 2.10**.: _Let \(p\) be a prime number and \(\mathcal{F}\) be a saturated fusion system on a finite \(p\)-group \(S\). Suppose that, whenever \(R\) is a subgroup of \(S\) with \(O_{p}(\mathcal{F})<R\), any proper saturated subsystem of \(\mathcal{F}\) on \(R\) is supersolvable. Suppose moreover that there is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{n}=O_{p}(\mathcal{F})\) of subgroups of \(O_{p}(\mathcal{F})\) such that \(S_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq n\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<n\). Then \(\mathcal{F}\) is supersolvable._ Proof.: Set \(\overline{S}:=S/O_{p}(\mathcal{F})\) and \(\overline{\mathcal{F}}:=\mathcal{F}/O_{p}(\mathcal{F})\). By Lemma 2.7, it suffices to show that \(\overline{\mathcal{F}}\) is supersolvable. This is clear when \(O_{p}(\mathcal{F})=S\), and so we assume that \(O_{p}(\mathcal{F})\neq S\). Let \(O_{p}(\mathcal{F})<Q\leq S\) such that \(\overline{Q}\) is fully \(\overline{\mathcal{F}}\)-normalized. Then, by [9, Proposition 5.58 (iii)], \(Q\) is fully \(\mathcal{F}\)-normalized. Hence, \(N_{\mathcal{F}}(Q)\) is a saturated subsystem of \(\mathcal{F}\) on \(N_{S}(Q)\). Since \(O_{p}(\mathcal{F})<Q\), we have \(O_{p}(\mathcal{F})<N_{S}(Q)\) and \(N_{\mathcal{F}}(Q)\neq\mathcal{F}\). So, by hypothesis, the fusion system \(N_{\mathcal{F}}(Q)\) is supersolvable. Lemma 2.4 (2) implies that \(N_{\mathcal{F}}(Q)/O_{p}(\mathcal{F})\) is supersolvable. By [9, Exercise 5.11], we have \(N_{\mathcal{F}}(Q)/O_{p}(\mathcal{F})=N_{\overline{\mathcal{F}}}(\overline{Q})\), and so \(N_{\overline{\mathcal{F}}}(\overline{Q})\) is supersolvable. Let \(O_{p}(\mathcal{F})\leq R\leq S\) such that \(\overline{R}\in\mathcal{E}_{\overline{\mathcal{F}}}^{*}\). Then \(\overline{R}\) is fully \(\overline{\mathcal{F}}\)-normalized and \(\overline{\mathcal{F}}\)-centric. Since \(\overline{S}\neq 1\), we have \(\overline{R}\neq 1\) and thus \(O_{p}(\mathcal{F})<R\), and the preceding paragraph implies that \(N_{\overline{\mathcal{F}}}(\overline{R})\) is supersolvable. Lemma 2.6 yields that \(\overline{\mathcal{F}}\) is supersolvable, as required. **Lemma 2.11**.: _Let \(p\) be an odd prime number and \(SL_{2}(p)\leq H\leq GL_{2}(p)\). Let \(V\) denote the group consisting of all row vectors \(\begin{pmatrix}x&y\end{pmatrix}\), where \(x,y\in\mathbb{F}_{p}\), together with the componentwise addition. Moreover, let \(G=V\rtimes H\) be the outer semidirect product of \(V\) and \(H\) with respect to the natural action of \(H\) on \(V\), i.e. 
\(G\) is the group consisting of all pairs \((h,v)\) with \(h\in H\) and \(v\in V\), together with the multiplication given by_ \[(h_{1},v_{1})\cdot(h_{2},v_{2}):=(h_{1}h_{2},v_{1}h_{2}+v_{2})\] _for all \(h_{1},h_{2}\in H\), \(v_{1},v_{2}\in V\). Let_ \[U:=\left\{\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\ :\ a\in\mathbb{F}_{p}\right\}\leq SL_{2}(p)\leq H\] _and_ \[S:=\{(u,v)\ :\ u\in U,v\in V\}.\] _Then \(S\) is a Sylow \(p\)-subgroup of \(G\), and there is a maximal subgroup of \(S\) which is not weakly \(\mathcal{F}_{S}(G)\)-closed._ Proof.: Since \(|SL_{2}(p)|=(p^{2}-1)p\) by [14, Kapitel II, Hilfssatz 6.2 (2)] and since \(|H:SL_{2}(p)|\) is not divisible by \(p\), the largest power of \(p\) dividing \(|H|\) is \(p\). Therefore, \(p^{3}\) is the largest power of \(p\) dividing \(|G|\). Clearly, \(S\) is a subgroup of \(G\) of order \(p^{3}\). Thus \(S\in\operatorname{Syl}_{p}(G)\). Let \[A:=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\in U,\quad\ B:=I_{2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\in U,\] and \[a:=\begin{pmatrix}1&0\end{pmatrix}\in V,\quad b:=\begin{pmatrix}0&1\end{pmatrix} \in V.\] Set \(r:=(A,a)\in S\) and \(s:=(B,b)\in S\). A direct calculation shows that \(|\langle r\rangle|=|\langle s\rangle|=p\), \(rs=sr\) and \(\langle r\rangle\cap\langle s\rangle=1\). Therefore, \(S_{1}:=\langle r,s\rangle=\langle r\rangle\langle s\rangle\) has order \(p^{2}\), whence \(S_{1}\) is maximal in \(S\). We show now that \(S_{1}\) is not weakly \(\mathcal{F}_{S}(G)\)-closed. Let \[M:=\begin{pmatrix}-1&1\\ 0&-1\end{pmatrix}\in SL_{2}(p)\leq H\] and \(g:=(M,0_{V})\in G\). Then \[g^{-1}rg=(A,b-a)\in S\] and \[g^{-1}sg=(B,-b)\in S.\] Thus \(g^{-1}S_{1}g=\langle g^{-1}rg,g^{-1}sg\rangle\leq S\). Assume that \(S_{1}=g^{-1}S_{1}g\). Then \(g^{-1}rg\in S_{1}\), and so there exist \(0\leq i,j<p\) with \(g^{-1}rg=r^{i}s^{j}\). Then \(A=A^{i}\) and hence \(i=1\). It follows that \((A,b-a)=g^{-1}rg=rs^{j}=(A,a+jb)\), whence \(\begin{pmatrix}-1&1\end{pmatrix}=b-a=a+jb=\begin{pmatrix}1&j\end{pmatrix}\) and thus \(-1=1\). This is a contradiction since \(p\) is odd. So we have \(S_{1}\neq g^{-1}S_{1}g\leq S\), and this shows that \(S_{1}\) is not weakly \(\mathcal{F}_{S}(G)\)-closed. The next lemma is certainly well-known, but we include a proof for the convenience of the reader. **Lemma 2.12**.: _Let \(p\) be an odd prime number and \(n\in\{2,3\}\). Then the following hold:_ 1. _The Sylow_ \(p\)_-subgroups of_ \(GL_{n}(p)\) _have exponent_ \(p\)_._ 2. _If_ \(H\) _is a nontrivial cyclic_ \(p\)_-subgroup of_ \(GL_{n}(p)\)_, then_ \(H\cong C_{p}\)_._ Proof.: By [14, Kapitel II, Hilfssatz 6.2 (1)], we have \(|GL_{2}(p)|=(p^{2}-1)(p^{2}-p)=p(p^{2}-1)(p-1)\). Hence, the Sylow \(p\)-subgroups of \(GL_{2}(p)\) have order \(p\). In particular, they have exponent \(p\), and so (1) holds for \(n=2\). Again by [14, Kapitel II, Hilfssatz 6.2 (1)], we have \(|GL_{3}(p)|=(p^{3}-1)(p^{3}-p)(p^{3}-p^{2})=p^{3}(p^{3}-1)(p^{2}-1)(p-1)\). Hence, the Sylow \(p\)-subgroups of \(GL_{3}(p)\) have order \(p^{3}\). Set \[U:=\left\{\begin{pmatrix}1&a&b\\ 0&1&c\\ 0&0&1\end{pmatrix}\ :\ a,b,c\in\mathbb{F}_{p}\right\}.\] Then \(U\) is easily seen to be a subgroup of \(GL_{3}(p)\) with order \(p^{3}\). In other words, \(U\) is a Sylow \(p\)-subgroup of \(GL_{3}(p)\). Let \(a,b,c\in\mathbb{F}_{p}\) and \[x:=\begin{pmatrix}1&a&b\\ 0&1&c\\ 0&0&1\end{pmatrix}\in U.\] One can show by induction that \[x^{k}=\begin{pmatrix}1&ka&kb+\frac{(k-1)k}{2}ac\\ 0&1&kc\\ 0&0&1\end{pmatrix}\] for every positive integer \(k\). 
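Indeed (this verification is ours and not part of the original argument), the induction step amounts to the matrix identity \[x^{k}\,x=\begin{pmatrix}1&ka&kb+\frac{(k-1)k}{2}ac\\ 0&1&kc\\ 0&0&1\end{pmatrix}\begin{pmatrix}1&a&b\\ 0&1&c\\ 0&0&1\end{pmatrix}=\begin{pmatrix}1&(k+1)a&(k+1)b+\frac{k(k+1)}{2}ac\\ 0&1&(k+1)c\\ 0&0&1\end{pmatrix}=x^{k+1},\] where the \((1,3)\)-entry of the product is \(b+kac+kb+\frac{(k-1)k}{2}ac=(k+1)b+\frac{k(k+1)}{2}ac\).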
In particular, \(x^{p}\) is the identity matrix \(I_{3}\). Since \(x\) was an arbitrarily chosen element of \(U\), it follows that \(U\) has exponent \(p\). Hence, any Sylow \(p\)-subgroup of \(GL_{3}(p)\) has exponent \(p\), and so (1) holds for \(n=3\). Now, let \(n\in\{2,3\}\) and \(H\) be a nontrivial cyclic \(p\)-subgroup of \(GL_{n}(p)\). Then \(H=\langle y\rangle\) for some nontrivial \(p\)-element \(y\) of \(GL_{n}(p)\). By (1), \(y\) has order \(p\). Thus \(H=\langle y\rangle\cong C_{p}\), and so (2) holds. Recall that a group \(G\) is called _minimal nonnilpotent_ if any proper subgroup of \(G\) is nilpotent, while \(G\) itself is not nilpotent. **Lemma 2.13**.: _Let \(G\) be a minimal nonnilpotent finite group and \(S\) be a nonnormal Sylow subgroup of \(G\). Then \(S\) is cyclic._ Proof.: By [14, Kapitel III, Satz 5.2], we have \(|G|=p^{a}q^{b}\) with distinct prime numbers \(p\), \(q\) and positive integers \(a\), \(b\), where \(G\) has a normal Sylow \(p\)-subgroup and cyclic Sylow \(q\)-subgroups. Since \(S\) is not normal in \(G\), we have that \(S\) is a Sylow \(q\)-subgroup of \(G\). Consequently, \(S\) is cyclic. The next lemma is due to Oliver [20]. It will play an important role in the proof of Theorem C. **Lemma 2.14**.: _([20, Lemma 1.11]) Let \(p\) be a prime number, \(A\) be finite abelian \(p\)-group and \(G\) be a subgroup of \(\operatorname{Aut}(A)\). Suppose that the following hold:_ 1. _The Sylow_ \(p\)_-subgroups of_ \(G\) _have order_ \(p\) _and are not normal in_ \(G\)_._ 2. _For each_ \(x\in G\) _of order_ \(p\)_, the group_ \([x,A]\) _has order_ \(p\)_._ _Set \(H:=O^{p^{\prime}}(G)\), \(A_{1}:=C_{A}(H)\) and \(A_{2}:=[H,A]\). Then \(G\) normalizes \(A_{1}\) and \(A_{2}\), \(A=A_{1}\times A_{2}\), and \(A_{2}\cong C_{p}\times C_{p}\)._ Let \(p\) be a prime number and \(S\) be a finite \(p\)-group. Following [7, SS65], we say that \(S\) is an \(A_{2}\)_-group_ if \(S\) contains a nonabelian maximal subgroup, while any subgroup of \(S\) with index \(p^{2}\) is abelian. **Lemma 2.15**.: _Let \(p\) be a prime number and \(S\) be a finite \(p\)-group. Suppose that \(S\) is a nonmetacyclic \(A_{2}\)-group and that \(|S|>p^{4}\). Then the following hold:_ 1. _([_7_, Proposition 71.4]) If_ \(S\) _possesses precisely one abelian maximal subgroup and if_ \(S^{\prime}\leq Z(S)\)_, then_ \(Z(S)=\Phi(S)\)_._ 2. _([_7_, Proposition 71.5]) If any maximal subgroup of_ \(S\) _is nonabelian and if_ \(p\) _is odd, then_ \(|S|=p^{5}\) Proofs of the main results Proof of Theorem A.: Suppose that the theorem is false, and let \(G\) be a minimal counterexample. We will derive a contradiction in several steps. Set \(\mathcal{F}:=\mathcal{F}_{S}(G)\). (1) _Let \(H<G\) with \(S\cap H\in\operatorname{Syl}_{p}(H)\) and \(|S\cap H|\geq p|D|\). Then \(\mathcal{F}_{S\cap H}(H)\) is supersolvable._ By hypothesis, any subgroup of \(S\cap H\) with order \(|D|\) or \(p|D|\) is abelian and weakly pronormal in \(G\). Applying [3, Lemma 2.2 (2)], we conclude that any subgroup of \(S\cap H\) with order \(|D|\) or \(p|D|\) is abelian and weakly pronormal in \(H\). The minimality of \(G\) implies that \(\mathcal{F}_{S\cap H}(H)\) is supersolvable. (2) _Let \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\). Then \(|Q|\geq p|D|\), and if \(Q\) is not normal in \(G\), then \(N_{\mathcal{F}}(Q)\) is supersolvable._ Suppose that \(|Q|<p|D|\). Then there is a subgroup \(R\) of \(S\) such that \(|R|=p|D|\) and \(Q<R\). By hypothesis, \(R\) is abelian, and so we have \(R\leq C_{S}(Q)\). 
As a member of \(\mathcal{E}_{\mathcal{F}}^{*}\), the subgroup \(Q\) is \(\mathcal{F}\)-centric. It follows that \(R\leq C_{S}(Q)\leq Q\), a contradiction. Thus \(|Q|\geq p|D|\). Suppose now that \(Q\) is not normal in \(G\). Hence, \(N_{G}(Q)\) is a proper subgroup of \(G\). As \(Q\) is fully \(\mathcal{F}\)-normalized, we have \(S\cap N_{G}(Q)=N_{S}(Q)\in\operatorname{Syl}_{p}(N_{G}(Q))\). Also, \(|N_{S}(Q)|\geq|Q|\geq p|D|\). Now, (1) implies that \(N_{\mathcal{F}}(Q)=\mathcal{F}_{N_{S}(Q)}(N_{G}(Q))\) is supersolvable. (3) \(|O_{p}(G)|\geq p|D|\). Assume that there is no \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\) with \(Q\trianglelefteq G\). Then, for each \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\), the fusion system \(N_{\mathcal{F}}(Q)\) is supersolvable by (2), and Lemma 2.6 implies that \(\mathcal{F}\) is supersolvable. This contradiction shows that there is some \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\) with \(Q\trianglelefteq G\). Applying (2), we conclude that \(|O_{p}(G)|\geq|Q|\geq p|D|\). (4) \(O_{p}(G)\leq Z_{\mathcal{U}}(G)\). Let \(U\) be a subgroup of \(O_{p}(G)\) with order \(|D|\) or \(p|D|\). By hypothesis, \(U\) is weakly pronormal in \(G\). Hence, there is a subgroup \(K\) of \(G\) such that \(G=UK\) and such that \(U\cap K\) is pronormal in \(G\). By [19, Theorem 4.3], \(U\cap K\) is weakly \(\mathcal{F}\)-closed. For each \(g\in G\), we have \((U\cap K)^{g}\leq O_{p}(G)\leq S\), and so it follows that \((U\cap K)^{g}=U\cap K\). Consequently, \(U\cap K\) is normal in \(G\), and therefore, \(U\) is \(c\)-supplemented in \(G\) in the sense of [2, 5]. Since \(U\) was arbitrarily chosen, any subgroup of \(O_{p}(G)\) with order \(|D|\) or \(p|D|\) is \(c\)-supplemented in \(G\). Applying [2, Theorem 3.2 and Corollary 3.4], we conclude that \(O_{p}(G)\leq Z_{\mathcal{U}}(G)\). (5) _The final contradiction._ If \(H\) is a proper subgroup of \(G\) with \(O_{p}(G)<S\cap H\) and \(S\cap H\in\operatorname{Syl}_{p}(H)\), then \(\mathcal{F}_{S\cap H}(H)\) is supersolvable by (1) and (3). Also, \(O_{p}(G)\leq Z_{\mathcal{U}}(G)\) by (4). So \(\mathcal{F}\) is supersolvable by Lemma 2.9. This contradiction completes the proof. Proof of Theorem B.: Suppose that the theorem is false, and let \(\mathcal{F}\) be a counterexample such that \(|\mathcal{F}|\), the number of morphisms in \(\mathcal{F}\), is minimal. We will derive a contradiction in several steps. In some parts of the proof, we argue similarly as in the proof of Theorem A. (1) _Let \(R\) be a subgroup of \(S\) with \(|R|\geq 2|D|\) and \(\mathcal{F}_{0}\) be a proper saturated subsystem of \(\mathcal{F}\) on \(R\). Then \(\mathcal{F}_{0}\) is supersolvable._ By hypothesis, any subgroup of \(R\) with order \(|D|\) is abelian and weakly \({\cal F}\)-closed, and if \(R\) is nonabelian, then we moreover have that any subgroup of \(R\) with order \(2|D|\) is abelian and weakly \({\cal F}\)-closed. Clearly, any weakly \({\cal F}\)-closed subgroup of \(R\) is weakly \({\cal F}_{0}\)-closed. Consequently, the fusion system \({\cal F}_{0}\) satisfies the hypotheses of the theorem, and the minimality of \({\cal F}\) implies that \({\cal F}_{0}\) is supersolvable. (2) _Let \(Q\in{\cal E}_{\cal F}^{*}\). Then \(|Q|\geq 2|D|\), and if \(Q\) is not normal in \({\cal F}\), then \(N_{\cal F}(Q)\) is supersolvable._ Suppose that \(|Q|<2|D|\). Then there is a subgroup \(R\) of \(S\) such that \(|R|=2|D|\) and \(Q<R\). By hypothesis, \(R\) is abelian, and so we have \(R\leq C_{S}(Q)\). 
As a member of \({\cal E}_{\cal F}^{*}\), the subgroup \(Q\) is \({\cal F}\)-centric. It follows that \(R\leq C_{S}(Q)\leq Q\), a contradiction. Thus \(|Q|\geq 2|D|\). Suppose now that \(Q\) is not normal in \({\cal F}\). Then \(N_{\cal F}(Q)\) is a proper saturated subsystem of \({\cal F}\) on \(N_{S}(Q)\). Since \(|N_{S}(Q)|\geq|Q|\geq 2|D|\), it follows from (1) that \(N_{\cal F}(Q)\) is supersolvable. (3) \(|O_{2}({\cal F})|\geq 2|D|\). Assume that there is no \(Q\in{\cal E}_{\cal F}^{*}\) with \(Q\unlhd{\cal F}\). Then, for each \(Q\in{\cal E}_{\cal F}^{*}\), the fusion system \(N_{\cal F}(Q)\) is supersolvable by (2), and Lemma 2.6 implies that \({\cal F}\) is supersolvable. This contradiction shows that there is some \(Q\in{\cal E}_{\cal F}^{*}\) with \(Q\unlhd{\cal F}\). Applying (2), we conclude that \(|O_{2}({\cal F})|\geq|Q|\geq 2|D|\). (4) _There is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{n}=O_{2}({\cal F})\) of subgroups of \(O_{2}({\cal F})\) such that \(S_{i}\) is strongly \({\cal F}\)-closed for all \(0\leq i\leq n\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<n\)._ Let \(N:=O_{2}({\cal F})\), \(A:={\rm Aut}_{\cal F}(N)\) and \(G:=N\rtimes A\) be the outer semidirect product of \(N\) and \(A\) with respect to the natural action of \(A\) on \(N\). Identifying each element of \(N\) with its canonical image in \(G\), we may regard \(N\) as a normal subgroup of \(G\). Likewise, identifying each element of \(A\) with its canonical image in \(G\), we may regard \(A\) as a subgroup of \(G\). Then \(G=NA\), \(N\cap A=1\), and for all \(x\in N\) and \(\alpha\in A\), we have \(\alpha^{-1}x\alpha=\alpha(x)\). Let \(P\leq N\) such that \(P\) is weakly \({\cal F}\)-closed. We show that \(P\unlhd G\). Since \(P\) is weakly \({\cal F}\)-closed, we have \(P\unlhd S\) and thus \(N\leq N_{G}(P)\). For all \(\alpha\in A\), we have \(\alpha^{-1}P\alpha=\alpha(P)=P\), where the latter equality holds since \(P\) is weakly \({\cal F}\)-closed. Thus \(A\leq N_{G}(P)\), and it follows that \(G=NA\leq N_{G}(P)\). So we have \(P\unlhd G\), as wanted. By the preceding paragraph and by hypothesis, any subgroup of \(N\) with order \(|D|\) is normal in \(G\), and if \(N\) is nonabelian, then we moreover have that any subgroup of \(N\) with order \(2|D|\) is normal in \(G\). Lemma 2.8 implies that \(N\leq Z_{\cal U}(G)\). Hence, there is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{n}=N\) of subgroups of \(N\) such that \(S_{i}\unlhd G\) for all \(0\leq i\leq n\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<n\). To complete the proof of (4), we show that \(S_{i}\) is strongly \({\cal F}\)-closed for all \(0\leq i\leq n\). Let \(P\) be a subgroup of \(S_{i}\) for some \(0\leq i\leq n\), and let \(P_{1}\) be a subgroup of \(S\) such that there is an \({\cal F}\)-isomorphism \(\varphi:P\to P_{1}\). Clearly, \(N\) is strongly \({\cal F}\)-closed, and therefore, we have \(P_{1}\leq N\). Since \(N\) is normal in \({\cal F}\), the isomorphism \(\varphi\) extends to an automorphism \(\alpha\in{\rm Aut}_{\cal F}(N)=A\). We have \(P_{1}=\varphi(P)=\alpha(P)\leq\alpha(S_{i})=\alpha^{-1}S_{i}\alpha=S_{i}\), where the last equality holds since \(S_{i}\) is normal in \(G\). We have shown that each \({\cal F}\)-conjugate of \(P\) lies in \(S_{i}\). Therefore, \(S_{i}\) is strongly \({\cal F}\)-closed, as required. (5) _The final contradiction._ Applying Lemma 2.10, we deduce from (1), (3) and (4) that \(\mathcal{F}\) is supersolvable. This contradiction completes the proof. 
**Remark 3.1**.: Another proof of Theorem B works as follows: One assumes that Theorem B is false and considers a counterexample \(\mathcal{F}\) such that \(|\mathcal{F}|\) is minimal. Arguing as in the above proof of Theorem B, one shows that there is some \(Q\in\mathcal{E}_{\mathcal{F}}^{*}\) with \(Q\trianglelefteq\mathcal{F}\). Then \(\mathcal{F}\) is constrained, and the model theorem [4, Part III, Theorem 5.10] implies that \(\mathcal{F}\) is the \(2\)-fusion system of a finite group \(G\). Theorem 1.2 shows that \(G\) is \(2\)-nilpotent, and so it follows that \(\mathcal{F}\) is nilpotent, a contradiction. This proof of Theorem B has the advantage of being shorter than the above proof of Theorem B, but it has the disadvantage of being less elementary since it depends on the model theorem, which is a quite deep result. Proof of Theorem C.: Suppose that the theorem is false, and let \(\mathcal{F}\) be a counterexample such that \(|\mathcal{F}|\) is minimal. We will derive a contradiction in several steps. (1) \(S\) _is not normal in \(\mathcal{F}\)_. Assume that \(S\trianglelefteq\mathcal{F}\). We claim that there is a \(p\)-closed finite group \(G\) with \(S\in\mathrm{Syl}_{p}(G)\) and \(\mathcal{F}=\mathcal{F}_{S}(G)\). Indeed, the normality of \(S\) in \(\mathcal{F}\) implies that \(\mathcal{F}\) is constrained, and so the existence of such a group \(G\) follows from the model theorem [4, Part III, Theorem 5.10]. Alternatively, without using the model theorem, one can argue as follows: Let \(A:=\mathrm{Aut}_{\mathcal{F}}(S)\). Since \(\mathcal{F}\) is saturated, \(\mathrm{Inn}(S)\) is a Sylow \(p\)-subgroup of \(A\), whence \(|\mathrm{Inn}(S)|\) and \(|A:\mathrm{Inn}(S)|\) are coprime. Since \(\mathrm{Inn}(S)\trianglelefteq A\), the Schur-Zassenhaus theorem [15, Theorem 3.8] implies that \(\mathrm{Inn}(S)\) has a complement \(H\) in \(A\). Let \(G=S\rtimes H\) be the outer semidirect product of \(S\) and \(H\) with respect to the natural action of \(H\) on \(S\). Identifying each element of \(S\) with its canonical image in \(G\), we may regard \(S\) as a normal subgroup of \(G\). Then we have \(S\in\mathrm{Syl}_{p}(G)\) since \(|H|\) is not divisible by \(p\), and it is not hard to show that \(\mathcal{F}=\mathcal{F}_{S}(G)\). Since \(S\) is normal in \(G\) and since any maximal subgroup of \(S\) is weakly \(\mathcal{F}\)-closed, we have that any maximal subgroup of \(S\) is normal in \(G\). Lemma 2.8 implies that \(S\leq Z_{\mathcal{U}}(G)\). Hence, there is a series \(1=S_{0}\leq S_{1}\leq\cdots\leq S_{m}=S\) of subgroups of \(S\) such that \(S_{i}\trianglelefteq G\) for all \(0\leq i\leq m\) and such that \(S_{i+1}/S_{i}\) is cyclic for all \(0\leq i<m\). In particular, \(S_{i}\) is strongly \(\mathcal{F}\)-closed for all \(0\leq i\leq m\), and so it follows that \(\mathcal{F}\) is supersolvable. This contradiction shows that \(S\) is not normal in \(\mathcal{F}\). (2) \(S\) _possesses more than one abelian maximal subgroup, and any abelian maximal subgroup of \(S\) contains \(Z(S)\). Moreover, we have \(|S^{\prime}|=p\) and \(|S:Z(S)|=p^{2}\)._ Assume that \(S\) is abelian. Then \(S\trianglelefteq\mathcal{F}\) by [4, Part I, Corollary 4.7 (a)], which contradicts (1). So \(S\) must be nonabelian. In particular, \(S\) is not cyclic, and so \(S\) has more than one abelian maximal subgroup by hypothesis. If \(R\) is an abelian maximal subgroup of \(S\), then \(Z(S)\leq R\) because otherwise \(S=RZ(S)\) would be abelian. 
We see from [20, Lemma 1.9] that \(|S^{\prime}|=p\) and \(|S:Z(S)|=p^{2}\). (3) \(O_{p}(\mathcal{F})\) _is an abelian maximal subgroup of \(S\), we have \(\mathcal{E}_{\mathcal{F}}^{*}=\{O_{p}(\mathcal{F}),S\}\), and the subgroups \(O_{p}(\mathcal{F})\) and \(S\) are precisely the \(\mathcal{F}\)-centric, \(\mathcal{F}\)-radical subgroups of \(S\)._ First, we argue that any member of \(\mathcal{E}_{\mathcal{T}}^{*}\setminus\{S\}\) is an abelian maximal subgroup of \(S\). Let \(Q\in\mathcal{E}_{\mathcal{T}}^{*}\setminus\{S\}\). Then, since \(Q\) is \(\mathcal{T}\)-centric, we have \(Z(S)<Q<S\). As \(|S:Z(S)|=p^{2}\) by (2), it follows that \(Q\) is a maximal subgroup of \(S\). Also, \(Q\) is abelian since \(|Q:Z(S)|=p\). Let \(Q\in\mathcal{E}_{\mathcal{T}}^{*}\) such that \(Q\) is not normal in \(\mathcal{F}\). Then \(N_{\mathcal{F}}(Q)\) is a proper saturated subsystem of \(\mathcal{F}\) on \(S\), and it is easy to see that \(N_{\mathcal{F}}(Q)\) satisfies the hypotheses of the theorem. So \(N_{\mathcal{F}}(Q)\) is supersolvable by the minimality of \(\mathcal{F}\). If no member of \(\mathcal{E}_{\mathcal{T}}^{*}\) is normal in \(\mathcal{F}\), then \(N_{\mathcal{F}}(Q)\) is supersolvable for each \(Q\in\mathcal{E}_{\mathcal{T}}^{*}\), whence \(\mathcal{F}\) is supersolvable by Lemma 2.6. Since this is not the case, there must exist some \(Q\in\mathcal{E}_{\mathcal{T}}^{*}\) with \(Q\trianglelefteq\mathcal{F}\). We have \(Q\neq S\) by (1), and so the preceding paragraph implies that \(Q\) is an abelian maximal subgroup of \(S\). Since \(Q\leq O_{p}(\mathcal{F})\) and \(O_{p}(\mathcal{F})\neq S\), we have \(O_{p}(\mathcal{F})=Q\). Hence, we have shown that \(O_{p}(\mathcal{F})\) is an abelian maximal subgroup of \(S\). Clearly, \(S\) is \(\mathcal{F}\)-centric and \(\mathcal{F}\)-radical. Since \(O_{p}(\mathcal{F})\) is \(\mathcal{F}\)-essential, we also have that \(O_{p}(\mathcal{F})\) is \(\mathcal{F}\)-centric and \(\mathcal{F}\)-radical (see [4, Part I, Proposition 3.3 (a)]). Conversely, if \(R\) is an \(\mathcal{F}\)-centric, \(\mathcal{F}\)-radical subgroup of \(S\), then \(O_{p}(\mathcal{F})\leq R\) by [4, Part I, Proposition 4.5], and the maximality of \(O_{p}(\mathcal{F})\) in \(S\) implies that \(R=O_{p}(\mathcal{F})\) or \(R=S\). Consequently, the subgroups \(O_{p}(\mathcal{F})\) and \(S\) are precisely the \(\mathcal{F}\)-centric, \(\mathcal{F}\)-radical subgroups of \(S\). Since any member of \(\mathcal{E}_{\mathcal{T}}^{*}\) is \(\mathcal{F}\)-centric and \(\mathcal{F}\)-radical, it follows that \(\mathcal{E}_{\mathcal{T}}^{*}\subseteq\{O_{p}(\mathcal{F}),S\}\). The other inclusion also holds, and so we have \(\mathcal{E}_{\mathcal{T}}^{*}=\{O_{p}(\mathcal{F}),S\}\). (4) _There is no nontrivial subgroup of \(Z(S)\) which is normal in \(\mathcal{F}\)._ Assume that there is a subgroup \(1\neq Z\leq Z(S)\) with \(Z\trianglelefteq\mathcal{F}\). From Lemma 2.1, we see that any maximal subgroup of \(S/Z\) is weakly \(\mathcal{F}/Z\)-closed. By (2), \(S\) possesses more than one abelian maximal subgroup, and each of them contains \(Z\). Hence, \(S/Z\) has more than one abelian maximal subgroup. Consequently, the fusion system \(\mathcal{F}/Z\) satisfies the hypotheses of the theorem, and so \(\mathcal{F}/Z\) is supersolvable by the minimality of \(\mathcal{F}\). Let \(R_{1}\) and \(R_{2}\) be two distinct abelian maximal subgroups of \(S\). 
Since \(R_{1}/Z\) and \(R_{2}/Z\) are weakly \(\mathcal{F}/Z\)-closed and since \(\mathcal{F}/Z\) is supersolvable, we see from Lemma 2.3 that \(R_{1}/Z\) and \(R_{2}/Z\) are in fact strongly \(\mathcal{F}/Z\)-closed. Lemma 2.2 implies that \(R_{1}\) and \(R_{2}\) are strongly \(\mathcal{F}\)-closed. Since \(R_{1}\) and \(R_{2}\) are abelian, it follows from [4, Part I, Corollary 4.7 (a)] that \(R_{1}\) and \(R_{2}\) are normal in \(\mathcal{F}\). Hence, \(S=R_{1}R_{2}\) is normal in \(\mathcal{F}\), which contradicts (1). Therefore, there is no subgroup \(1\neq Z\leq Z(S)\) with \(Z\trianglelefteq\mathcal{F}\). (5) _The Sylow \(p\)-subgroups of \(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\) have order \(p\) and are not normal in \(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\)._ As \(\mathcal{F}\) is saturated and \(O_{p}(\mathcal{F})\) is fully \(\mathcal{F}\)-normalized, \(\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\) is a Sylow \(p\)-subgroup of \(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\). Since \(O_{p}(\mathcal{F})\) is \(\mathcal{F}\)-centric, abelian and maximal in \(S\) by (3), we have \(\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\cong S/O_{p}(\mathcal{F})\cong C_{p}\). Since \(O_{p}(\mathcal{F})\) is \(\mathcal{F}\)-radical by (3), we also have \(1=\operatorname{Inn}(O_{p}(\mathcal{F}))=O_{p}(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F})))\). Hence, \(\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\) is not normal in \(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\). (6) _Assume that \(\alpha\in\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\) has order \(p\). Then \([\alpha,O_{p}(\mathcal{F})]\) has order \(p\)._ Since \(\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\) is a Sylow \(p\)-subgroup of \(\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\), there is some \(\beta\in\operatorname{Aut}_{\mathcal{F}}(O_{p}(\mathcal{F}))\) such that \(\gamma:=\beta^{-1}\alpha\beta\in\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\). A direct calculation shows that \([\gamma,O_{p}(\mathcal{F})]=\beta([\alpha,O_{p}(\mathcal{F})])\), whence \(|[\gamma,O_{p}(\mathcal{F})]|=|[\alpha,O_{p}(\mathcal{F})]|\). Therefore, we may assume without loss of generality that \(\alpha\in\operatorname{Aut}_{S}(O_{p}(\mathcal{F}))\). 
Then there is some \(s\in S\) with \(\alpha(x)=x^{s}\) for all \(x\in O_{p}(\mathcal{F})\), and we have \([\alpha,O_{p}(\mathcal{F})]=[s,O_{p}(\mathcal{F})]\leq S^{\prime}\). Since \(S^{\prime}\) has order \(p\) by (2) and since \(\alpha\) acts nontrivially on \(O_{p}(\mathcal{F})\), it follows that \([\alpha,O_{p}(\mathcal{F})]\) has order \(p\), as wanted. (7) \(S\) _is extraspecial of order \(p^{3}\) and exponent \(p\)_. Set \(A:=O_{p}(\mathcal{F})\), \(G:=\operatorname{Aut}_{\mathcal{F}}(A)\), \(H:=O^{p^{\prime}}(G)\), \(A_{1}:=C_{A}(H)\) and \(A_{2}:=[H,A]\). By (3), (5), (6) and Lemma 2.14, \(G\) normalizes \(A_{1}\) and \(A_{2}\), \(A=A_{1}\times A_{2}\) and \(A_{2}\cong C_{p}\times C_{p}\). Since \(\operatorname{Aut}_{S}(A)\) is a Sylow \(p\)-subgroup of \(G\), we have \(\operatorname{Aut}_{S}(A)\leq H\). Consequently, \(\operatorname{Aut}_{S}(A)\) centralizes \(A_{1}\). In other words, we have \(A_{1}\leq Z(S)\). We show now that \(A_{1}\trianglelefteq\mathcal{F}\). Since \(G\) normalizes \(A_{1}\) and since \(\mathcal{E}_{\mathcal{F}}^{*}=\{A,S\}\) by (3), it suffices to show that \(A_{1}\) is \(\operatorname{Aut}_{\mathcal{F}}(S)\)-invariant (see [4, Part I, Proposition 4.5]). But this follows from the fact that \(G\) normalizes \(A_{1}\) because any \(\mathcal{F}\)-automorphism of \(S\) restricts to an \(\mathcal{F}\)-automorphism of \(A\). We have shown that \(A_{1}\) is a subgroup of \(Z(S)\) which is normal in \(\mathcal{F}\). Therefore, by (4), \(A_{1}\) must be trivial. It follows that \(A=A_{2}\cong C_{p}\times C_{p}\). Since \(A\) is maximal in \(S\) by (3), it further follows that \(|S|=p^{3}\). By (2), \(S\) is nonabelian, and since any nonabelian \(p\)-group of order \(p^{3}\) is extraspecial, it follows that \(S\) is extraspecial. 
It remains to show that \(S\) has exponent \(p\). Assume that this is not the case. Then \(S\) has exponent \(p^{2}\), and hence, \(S\) has a cyclic maximal subgroup. So \(S\) is metacyclic, and [6, Theorem C] implies that \(\mathcal{F}\) is supersolvable. This contradiction shows that \(S\) has exponent \(p\). (8) _The final contradiction._ By (7), \(S\) is extraspecial of order \(p^{3}\) and exponent \(p\). As a consequence of (3), \(S\) has precisely one elementary abelian \(\mathcal{F}\)-centric and \(\mathcal{F}\)-radical subgroup. Applying [21, Theorem 1.1], or only [21, Lemma 4.7], we conclude that \(\mathcal{F}\) is isomorphic to one of the fusion systems considered in Lemma 2.11. Alternatively, this can be seen from [9, Lemma 9.2]. Therefore, by Lemma 2.11, there is a maximal subgroup of \(S\) which is not weakly \(\mathcal{F}\)-closed. On the other hand, we have by hypothesis that every maximal subgroup of \(S\) is weakly \(\mathcal{F}\)-closed. With this contradiction, the proof is complete. For the proof of Theorem D, we need the following lemma. It verifies Theorem D for the case that the fusion system \(\mathcal{F}\) in the statement of the theorem is realized by a finite group. **Lemma 3.2**.: _Let \(G\) be a finite group, \(p\) be an odd prime number, \(S\) be a Sylow \(p\)-subgroup of \(G\) and \(\mathcal{F}:=\mathcal{F}_{S}(G)\). Suppose that there is a subgroup \(D\) of \(S\) with \(1<D<S\) such that any subgroup of \(S\) with order \(|D|\) is abelian and weakly \(\mathcal{F}\)-closed. Then \(\mathcal{F}\) is supersolvable._ Proof.: Suppose that the lemma is false, and let \(G\) be a minimal counterexample. We will derive a contradiction in several steps. (1) _Let \(H<G\) with \(S\cap H\in\operatorname{Syl}_{p}(H)\) and \(|S\cap H|>|D|\). Then \(\mathcal{F}_{S\cap H}(H)\) is supersolvable._ By hypothesis, any subgroup of \(S\cap H\) with order \(|D|\) is abelian and weakly \(\mathcal{F}\)-closed. Clearly, any weakly \(\mathcal{F}\)-closed subgroup of \(S\cap H\) is weakly \(\mathcal{F}_{S\cap H}(H)\)-closed. Hence, any subgroup of \(S\cap H\) with order \(|D|\) is abelian and weakly \(\mathcal{F}_{S\cap H}(H)\)-closed. The minimality of \(G\) implies that \(\mathcal{F}_{S\cap H}(H)\) is supersolvable. (2) \(C_{G}(O_{p}(G))\leq O_{p}(G)\). Let \(Q\in{\cal E}_{\cal F}^{*}\). Since \(C_{S}(Q)\leq Q\) and since any subgroup of \(S\) with order \(|D|\) is abelian, we have \(|Q|\geq|D|\). This implies that \(|N_{S}(Q)|>|D|\). Indeed, if \(Q\neq S\), then \(|N_{S}(Q)|>|Q|\geq|D|\), and if \(Q=S\), then \(|N_{S}(Q)|=|S|>|D|\). Assume that \(Q\) is not normal in \(G\). Hence \(N_{G}(Q)<G\). We have \(S\cap N_{G}(Q)=N_{S}(Q)\in{\rm Syl}_{p}(N_{G}(Q))\) since \(Q\) is fully \({\cal F}\)-normalized and \(|S\cap N_{G}(Q)|=|N_{S}(Q)|>|D|\) by the preceding paragraph. So, by (1), the fusion system \(N_{\cal F}(Q)={\cal F}_{N_{S}(Q)}(N_{G}(Q))\) is supersolvable. If no member of \({\cal E}_{\cal F}^{*}\) is normal in \(G\), then \(N_{\cal F}(Q)\) is supersolvable for each \(Q\in{\cal E}_{\cal F}^{*}\), whence \({\cal F}\) is supersolvable by Lemma 2.6. Since this is not the case, there must exist some \(Q\in{\cal E}_{\cal F}^{*}\) with \(Q\trianglelefteq G\). Hence, \({\cal F}\) is constrained, and [13, Proposition 8.8] shows that the model of \({\cal F}\) is isomorphic to \(G/O_{p^{\prime}}(G)\). If \(O_{p^{\prime}}(G)\neq 1\), then \({\cal F}_{SO_{p^{\prime}}(G)/O_{p^{\prime}}(G)}(G/O_{p^{\prime}}(G))\) is supersolvable by the minimality of \(G\), whence \({\cal F}\) is supersolvable. 
Thus \(O_{p^{\prime}}(G)=1\). Consequently, \(G\) is the model of \({\cal F}\), and so we have \(C_{G}(O_{p}(G))\leq O_{p}(G)\), as wanted. (3) \(|O_{p}(G)|=|D|\). Assume that \(|O_{p}(G)|>|D|\). By hypothesis, any subgroup of \(O_{p}(G)\) with order \(|D|\) is weakly \({\cal F}\)-closed. Clearly, any weakly \({\cal F}\)-closed subgroup of \(O_{p}(G)\) is normal in \(G\). Therefore, any subgroup of \(O_{p}(G)\) with order \(|D|\) is normal in \(G\). So we have \(O_{p}(G)\leq Z_{\mathbb{U}}(G)\) by Lemma 2.8. If \(H\) is a proper subgroup of \(G\) with \(O_{p}(G)<S\cap H\) and \(S\cap H\in{\rm Syl}_{p}(H)\), then \({\cal F}_{S\cap H}(H)\) is supersolvable by (1). So, by Lemma 2.9, \({\cal F}\) is supersolvable. This contradiction shows that \(|O_{p}(G)|\leq|D|\). Because of (2), we also have \(|O_{p}(G)|\geq|D|\). Thus \(|O_{p}(G)|=|D|\), as desired. (4) \(O_{p}(G)\) _is elementary abelian._ Assume that \(O_{p}(G)\) is not elementary abelian. Then \(\Phi(O_{p}(G))\neq 1\) by [15, Lemma 4.5]. Set \(\overline{G}:=G/\Phi(O_{p}(G))\) and \(\overline{\cal F}:={\cal F}/\Phi(O_{p}(G))={\cal F}_{\overline{S}}(\overline{G})\). By hypothesis and by Lemma 2.1, any subgroup of \(\overline{S}\) with order \(\frac{|D|}{|\Phi(O_{p}(G))|}\) is abelian and weakly \(\overline{\cal F}\)-closed. The minimality of \(G\) implies that \(\overline{\cal F}\) is supersolvable. By Lemma 2.4 (1), there is a series \(\Phi(O_{p}(G))=V_{0}\leq V_{1}\leq\cdots\leq V_{m}=S\) of subgroups of \(S\) such that \(\overline{V_{i}}\) is strongly \(\overline{\cal F}\)-closed for all \(0\leq i\leq m\), such that the quotient \(\overline{V_{i+1}}/\overline{V_{i}}\) is cyclic for all \(0\leq i<m\) and such that \(V_{j}=O_{p}(G)\) for some \(0<j<m\). Since \(\overline{V_{i}}\trianglelefteq\overline{G}\) for all \(0\leq i\leq j\), it follows that \(\overline{O_{p}(G)}\leq Z_{\mathbb{U}}(\overline{G})\). Applying [25, Chapter 1, Theorem 7.19], we conclude that \(O_{p}(G)\leq Z_{\mathbb{U}}(G)\). By (1) and (3), for any proper subgroup \(H\) of \(G\) with \(O_{p}(G)<S\cap H\) and \(S\cap H\in{\rm Syl}_{p}(H)\), the fusion system \({\cal F}_{S\cap H}(H)\) is supersolvable. Lemma 2.9 implies that \({\cal F}\) is supersolvable. This contradiction shows that \(O_{p}(G)\) is elementary abelian. (5) _If \(O_{p}(G)\leq H<G\), then \(H\) is \(p\)-closed._ Let \(O_{p}(G)\leq H<G\). Without loss of generality, we assume that \(S\cap H\in{\rm Syl}_{p}(H)\). Clearly, \(O_{p}(G)\leq S\cap H\), and if \(O_{p}(G)=S\cap H\), then \(H\) is \(p\)-closed. Assume now that \(O_{p}(G)<S\cap H\). Then \({\cal E}:={\cal F}_{S\cap H}(H)\) is supersolvable by (1) and (3). So, by [24, Proposition 2.3], \(S\cap H\) is normal in \({\cal E}\). Thus \({\cal E}=N_{\cal E}(S\cap H)={\cal F}_{S\cap H}(N_{H}(S\cap H))\). For each \(h\in H\), let \(c_{h}\) denote the automorphism of \(O_{p}(G)\) induced by conjugation with \(h\), i.e. \[c_{h}:O_{p}(G)\to O_{p}(G),x\mapsto x^{h}.\] Let \(h\in H\). Then \(c_{h}\) is a morphism in \({\cal E}={\cal F}_{S\cap H}(N_{H}(S\cap H))\), and so we have \(c_{h}=c_{u}\) for some \(u\in N_{H}(S\cap H)\). Then \(hu^{-1}\leq C_{G}(O_{p}(G))\leq O_{p}(G)\leq N_{H}(S\cap H)\) by (2). It follows that \(h\in N_{H}(S\cap H)\), and since \(h\) was an arbitrarily chosen element of \(H\), we can conclude that \(S\cap H\trianglelefteq H\). Hence, \(H\) is \(p\)-closed, as desired. (6) \(S/O_{p}(G)\) _is cyclic._ Set \(\overline{G}:=G/O_{p}(G)\). Let \(O_{p}(G)\leq H<G\). Then \(H\) is \(p\)-closed by (5). Hence, \(\overline{H}\) is \(p\)-closed. 
The group \(\overline{G}\) is not \(p\)-closed because otherwise \(S\) would be normal in \(G\), which is not true because of (3). We have shown that \(\overline{G}\) is a non-\(p\)-closed group all of whose proper subgroups are \(p\)-closed. In other words, \(\overline{G}\) is minimal non-\(p\)-closed in the sense of [18]. Applying [18, Lemma 1], we conclude that \(\overline{G}\) is minimal nonnilpotent or that \(\overline{G}/\Phi(\overline{G})\) is nonabelian simple. If \(\overline{G}\) is minimal nonnilpotent, then \(\overline{S}\) is cyclic by Lemma 2.13. We assume now that \(\widehat{G}:=\overline{G}/\Phi(\overline{G})\) is nonabelian simple, and we show that \(\overline{S}\) is cyclic also in this case. Since \(\Phi(\overline{G})\) is nilpotent and \(O_{p}(\overline{G})=1\), we have that \(\Phi(\overline{G})\) is a \(p^{\prime}\)-group. Thus, \(\overline{S}\) is isomorphic to the Sylow \(p\)-subgroups of \(\widehat{G}\), and therefore, it suffices to show that \(\widehat{G}\) has cyclic Sylow \(p\)-subgroups. Let \(O_{p}(G)\leq L<G\) such that \(\overline{L}=\Phi(\overline{G})\). If \(L\leq H<G\), then \(H\) is \(p\)-closed by (5), whence \(H/L\) is \(p\)-closed. Consequently, any proper subgroup of \(G/L\cong\widehat{G}\) is \(p\)-closed. Since \(\widehat{G}\) is simple, the only proper quotient of \(\widehat{G}\) is \(\widehat{G}/\widehat{G}\), which is of course \(p\)-closed. The simplicity of \(\widehat{G}\) also implies that \(\widehat{G}\) is not \(p\)-closed. We have shown that \(\widehat{G}\) is a non-\(p\)-closed group all of whose proper subgroups and proper quotients are \(p\)-closed. Hence, \(\widehat{G}\) is minimal non-\(p\)-closed in the sense of [16] (note that the definition of a minimal non-\(p\)-closed group used in [16] does not coincide with the one used in [18]). Applying [16, Theorem 3.5], we conclude that \(\widehat{G}\) has cyclic Sylow \(p\)-subgroups, as required. (7) \(O_{p}(G)\) _is not maximal in \(S\)._ Assume that \(O_{p}(G)\) is maximal in \(S\). Then, by (3) and by hypothesis, any maximal subgroup of \(S\) is abelian and weakly \({\cal F}\)-closed. Theorem C implies that \({\cal F}\) is supersolvable, which is a contradiction. Hence, \(O_{p}(G)\) is not maximal in \(S\). (8) \(|O_{p}(G)|\geq p^{4}\). By (2) and (4), we have \(C_{S}(O_{p}(G))=O_{p}(G)\). Hence, \(S/O_{p}(G)\) is isomorphic to a \(p\)-subgroup of \({\rm Aut}(O_{p}(G))\). By (6), \(S/O_{p}(G)\) is cyclic. We have \(|S/O_{p}(G)|\geq p^{2}\) since \(O_{p}(G)\neq S\) by (3) and since \(O_{p}(G)\) is not maximal in \(S\) by (7). Now, let \(e\in\mathbb{N}\) such that \(|O_{p}(G)|=p^{e}\). We have to show that \(e\geq 4\). As \(|O_{p}(G)|=|D|>1\) by (3), we have \(e\neq 0\). Since \(O_{p}(G)\) is elementary abelian by (4), we have \({\rm Aut}(O_{p}(G))\cong GL_{e}(p)\). If \(e=1\), then \({\rm Aut}(O_{p}(G))\cong GL_{1}(p)\cong C_{p-1}\) does not have any nontrivial \(p\)-subgroups, and if \(e\in\{2,3\}\), then we see from Lemma 2.12 (2) that \({\rm Aut}(O_{p}(G))\cong GL_{e}(p)\) does not have any cyclic \(p\)-subgroups of order greater than \(p\). Since \(S/O_{p}(G)\) is isomorphic to a cyclic \(p\)-subgroup of \(\operatorname{Aut}(O_{p}(G))\) of order at least \(p^{2}\), it follows that \(e\geq 4\), as required. (9) _The final contradiction._ Since \(|S/O_{p}(G)|\geq p^{2}\) by (3) and (7), there exists \(O_{p}(G)<T\leq S\) with \(|T/O_{p}(G)|=p^{2}\). 
Then \(O_{p}(G)\) is properly contained in a maximal subgroup \(T_{1}\) of \(T\), and since \(C_{T_{1}}(O_{p}(G))\leq C_{G}(O_{p}(G))\leq O_{p}(G)\) by (2), we have that \(T_{1}\) is nonabelian. By (3) and by hypothesis, any subgroup of \(T\) with index \(p^{2}\) is abelian. Hence, \(T\) is an \(\mathcal{A}_{2}\)-group in the sense of the definition given before Lemma 2.15. We have \(|T|>|O_{p}(G)|\geq p^{4}\) by (8). Assume that \(T\) is metacyclic. Then \(O_{p}(G)\) is metacyclic as well. At the same time, \(O_{p}(G)\) is elementary abelian by (4). It is rather easy to see that an elementary abelian finite \(p\)-group can only be metacyclic when its order is at most \(p^{2}\). So it follows that \(|O_{p}(G)|\leq p^{2}\). This contradicts (8), and therefore, \(T\) is not metacyclic. Assume that \(T\) has no abelian maximal subgroups. Then \(|T|=p^{5}\) by Lemma 2.15 (2) and hence \(|O_{p}(G)|=p^{3}\), which is a contradiction to (8). Thus, there exists an abelian maximal subgroup of \(T\). Assume that there is only one abelian maximal subgroup of \(T\), say \(U\). We claim that \(T^{\prime}\leq Z(T)\). Because of (2), we have \(O_{p}(G)\not\leq U\) and hence \(T=O_{p}(G)U\). Set \(Z:=O_{p}(G)\cap U\). Since \(O_{p}(G)\) and \(U\) are abelian, the subgroup \(Z\) is centralized by both \(O_{p}(G)\) and \(U\). This implies that \(Z\leq Z(T)\). We have \(O_{p}(G)/Z\cong O_{p}(G)U/U=T/U\cong C_{p}\), and so \(Z\) is maximal in \(O_{p}(G)\). As a consequence of (2), \(Z(T)\) is a proper subgroup of \(O_{p}(G)\). So we have \(Z\leq Z(T)<O_{p}(G)\), and the maximality of \(Z\) in \(O_{p}(G)\) implies that \(Z=Z(T)\). Therefore, \(O_{p}(G)/Z(T)\) is a normal subgroup of \(T/Z(T)\) with order \(p\) and hence a central subgroup of \(T/Z(T)\). The corresponding factor group \((T/Z(T))/(O_{p}(G)/Z(T))\cong T/O_{p}(G)\) is cyclic by (6). It follows that \(T/Z(T)\) is abelian. Thus \(T^{\prime}\leq Z(T)\), as claimed above. Applying Lemma 2.15 (1), we conclude that \(\Phi(T)=Z(T)<O_{p}(G)\). So \(T/O_{p}(G)\) is elementary abelian by [15, Lemma 4.5], and since \(|T/O_{p}(G)|=p^{2}\), it follows that \(T/O_{p}(G)\) is not cyclic. This is a contradiction to (6). Therefore, \(T\) has more than one abelian maximal subgroup. Now, applying [20, Lemma 1.9], we conclude that \(Z(T)\) has index \(p^{2}\) in \(T\). Since \(O_{p}(G)\) also has index \(p^{2}\) in \(T\) and \(Z(T)\leq C_{G}(O_{p}(G))\leq O_{p}(G)\) by (2), it follows that \(O_{p}(G)=Z(T)\). In particular, we have \(C_{G}(O_{p}(G))\not\leq O_{p}(G)\). This contradicts (2), and the proof is complete with this contradiction. With Lemma 3.2 at hand, we are able to prove Theorem D in few lines. Our proof strongly relies on the model theorem [4, Part III, Theorem 5.10]. Proof of Theorem D.: Suppose that the theorem is false, and let \(\mathcal{F}\) be a counterexample such that \(|\mathcal{F}|\) is minimal. Let \(Q\) be a member of \(\mathcal{E}_{\mathcal{F}}^{*}\) which is not normal in \(\mathcal{F}\). Since \(C_{S}(Q)\leq Q\) and since any subgroup of \(S\) with order \(|D|\) is abelian, we have \(|Q|\geq|D|\). This implies that \(|N_{S}(Q)|>|D|\). Hence, \(N_{\mathcal{F}}(Q)\) is a proper saturated subsystem of \(\mathcal{F}\) on \(N_{S}(Q)\) satisfying the hypotheses of the theorem. Thus, \(N_{\mathcal{F}}(Q)\) is supersolvable by the minimality of \(\mathcal{F}\). If no member of \(\mathcal{E}_{\mathcal{F}}^{*}\) is normal in \(\mathcal{F}\), then \(\mathcal{F}\) is supersolvable by the preceding paragraph and Lemma 2.6. 
Therefore, there must be a member of \(\mathcal{E}_{\mathcal{F}}^{*}\) which is normal in \(\mathcal{F}\). Hence, \(\mathcal{F}\) is constrained. The model theorem [4, Part III, Theorem 5.10] implies that there is a finite group \(G\) with \(S\in\operatorname{Syl}_{p}(G)\) and \(\mathcal{F}=\mathcal{F}_{S}(G)\). Then Lemma 3.2 implies that \(\mathcal{F}\) is supersolvable. This contradiction completes the proof. **Declarations of interest:** none.
2305.11647
Waveguide QED with Moessbauer Nuclei
Thin-film nanostructures with embedded M\"ossbauer nuclei have been successfully used for x-ray quantum optical applications with hard x-rays coupling in grazing incidence. Here we address theoretically a new geometry, in which hard x-rays are coupled in forward incidence (front coupling), setting the stage for waveguide QED with nuclear x-ray resonances. We present in a self-contained manner a general model based on the Green's function formalism of the field-nucleus interaction in one dimensional waveguides, and show that it combines aspects of both nuclear forward scattering, visible as dynamical beating in the spatio-temporal response, and the resonance structure from grazing incidence, visible in the spectrum of guided modes. The interference of multiple modes is shown to play an important role, resulting in beats with wavelengths on the order of tens of microns, on the scale of practical photolithography. This allows for the design of special sample geometries to explore the resonant response or micro-striped waveguides, opening a new toolbox of geometrical design for hard X-ray quantum optics.
Petar Andrejic, Leon Merten Lohse, Adriana Palffy
2023-05-19T12:53:00Z
http://arxiv.org/abs/2305.11647v3
# Waveguide QED with Mossbauer Nuclei ###### Abstract Thin-film nanostructures with embedded Mossbauer nuclei have been successfully used for x-ray quantum optical applications with hard x-rays coupling in grazing incidence. Here we address theoretically a new geometry, in which hard x-rays are coupled in forward incidence (front coupling), setting the stage for waveguide QED with nuclear x-ray resonances. We develop a general model based on the Green's function formalism of the field-nucleus interaction in one dimensional waveguides, and show that it combines aspects of both nuclear forward scattering, visible as dynamical beating in the spatio-temporal response, and the resonance structure from grazing incidence, visible in the spectrum of guided modes. The interference of multiple modes is shown to play an important role, resulting in beats with wavelengths on the order of tens of microns, on the scale of practical photolithography. This allows for the design of special sample geometries to explore the resonant response or micro-striped waveguides, opening a new toolbox of geometrical design for hard X-ray quantum optics. ## I Introduction The interaction of Mossbauer transitions with coherent light from third generation synchrotron and XFEL sources has been shown to be an excellent platform for quantum optics in the X-ray energy scales [1]. The recoil-free emission due to the Mossbauer effect means that scattering is highly elastic, and free from Doppler broadening even at room temperature, while the exceptionally narrow line-width of nuclear transitions means that the temporal response of the nuclei can easily be experimentally resolved [2]. So far, experiments in these systems have largely been restricted to two scattering geometries: forward scattering, and grazing incidence reflection. In nuclear forward scattering, the target consists of a homogeneous bulk foil containing the resonant nuclei, on the order of urad thickness. The propagation characteristics are that of a homogeneous dielectric medium. The phase difference between the scattered field at the back of the foil compared with the front results in a characteristic spatio-temporal interference pattern known as the 'dynamical beat' [2]. The analogous visible optical system is the collective emission of a 'pencil geometry' of identical atoms, where the dynamical beat is known as a 'collective Rabi oscillation' [3]. Several interesting quantum optical effects with x-rays have been demonstrated in this geometry, such as magnetic switching [4], coherent pulse shaping [5], electromagnetically induced transparency (EIT) [6], optical control of the nuclear hyperfine spectrum [7], pulse shaping [8; 9], as well as direct observation of the multi-photon dynamics of superradiance [10]. For control over the optical environment, it has been shown that placing the Mossbauer nuclei in thin film nanostructures provides the analogue of atoms in an optical cavity. The system can thus be analysed in terms of a single Fourier mode, and acts analogously to a system of atoms placed between the mirrors of a Fabry-Perot resonator. In grazing incidence setup, these nanostructures have proven to be an excellent platform for X-ray quantum optics, with a diverse range of quantum optical phenomena demonstrated, such as superradiance [11], EIT [12], spontaneously generated coherences [13], Fano resonances and interferometry [14], subluminal pulse propagation [15], collective strong coupling [16] or Rabi oscillations between two nuclear ensembles [17]. 
In this work, we investigate theoretically a different setup, namely Mossbauer nuclei embedded in a waveguide environment. Compared to grazing incidence, this geometry is not limited to the single wave-vector regime and a single spatial dimension, offering greater flexibility, and also direct control over the interacting waveguide modes. X-ray quantum optics in the waveguide regime has been so far less well studied than the grazing incidence and forward scattering geometries, with the majority of works considering non-resonant propagation of the X-ray field, i.e. waveguides in the absence of Mossbauer nuclei. In a general X-ray optical context, tapered and channelled X-rays have been studied, and demonstrated experimentally to be powerful options for focusing and guiding down to the nanometre scale [18; 19; 20; 21; 22; 23; 24]. Used as guides for synchrotron radiation [25; 26; 27; 28; 29], they have been successfully used as point-like hard x-ray sources for imaging, in particular holographic imaging [30]. In the setting of Mossbauer waveguide optics, a recent proposal for embedding Mossbauer nuclei in tapered waveguides has shown the potential for reaching inversion of the resonant transition [31], while another proposal has considered using slab waveguides with a core filled with Mossbauer nuclei as a gravitational sensor [32]. Here, we consider a fairly general scenario: a thin layer of Mossbauer coupled to a one dimensional waveguide. Due to the fact that they are an experimentally well established platform in the grazing incidence geometry, we will consider slab waveguides as our explicit example. However, the theoretical description we develop is fairly general, and applies to any system with one dimensional propagation, and a resonant nuclear ensemble that is thin in the transverse extent compared to the mode widths. Our theoretical model is based on the Gruner-Welsch quantization of the macroscopic Maxwell's equations [33; 34; 35], which allows us to describe the electromagnetic field in a fully quantum way, in terms of the classical dyadic Green's function of the medium. This approach has been used in the description of collective light-matter interaction, with a wide variety of applications. Asenjo-Garcia _et al_, Chang _et al_ and others have used this form to derive an effective dipole-dipole coupling model for atomic lattices, with applications in one dimensional waveguides [36], as well as free space lattices [37; 38]. Svidzinsky _et al_ have derived an equivalent description _ab initio_ in their studies of single photon superradiance, in both isotropic atomic clouds [39; 40], and one dimensional geometries [3], while Ma and Yelin have used the Green's function as part of a self-consistency approach to study the collective Lamb shift and modified decay rates of atomic clouds [41]. In the non-linear regime, Ruostekoski _et al_[42; 43; 44; 45] have used this approach to study slab geometries of atomic clouds, developing a hierarchy of equations for the correlation functions of the atomic cloud, coupled via the classical Green's function of the background medium. In the X-ray regime, it has been adapted to describe grazing incidence scattering [46; 47], which takes advantage of the fact that the Green's function for planar layered media are analytically known. 
Equivalent expressions appear in the standard treatments of nuclear forward scattering by Kagan _et al_[48] and Shvyd'ko [49], as well as the general formalism for X-ray resonant scattering of Hannon and Trammel [2; 50; 51; 52]. Our formalism allows us to derive a system of multimode Maxwell-Bloch equations, which in the linear response regime can be rearranged into a matrix differential equation analogous to the equation of motion of ordinary nuclear forward scattering. We are able to obtain an analytic series solution for the spatio-temporal response in the case of two-level nuclei, and demonstrate that this shows the characteristic dynamical beat of forward scattering, but with additional interference beats due to the coupling to multiple guided mode. We additionally consider the case of a non-uniform resonant layer; specifically we consider dividing the layer into microscopic sub-ensembles along the propagation direction. In the regime where the ensemble spacing is a similar order of magnitude to the mode interference length, we show that a phenomenon similar to selective sub-radiance occurs, with the waveguide mediated dipole-dipole interaction between sub-ensembles being sensitive to their spacing. In the limit of a half wavelength spacing, we show that the sub-ensembles are at the nodes of the scattered field of their neighbours, and thus the system splits into two non-interacting sub-ensembles, displaying a sensitivity to the even-odd parity of the number of sub-ensembles. This opens an entire new direction of geometrical control of the x-ray scattering, which could be potentially exploited for quantum fluorescence imaging [53], implementation of mesoscopic models for the investigation of topological edge states [54; 55; 56], as well as the investigation of geometrical radiation phenomena such as selective sub-radiance [37]. This paper is structured as follows. Section II introduces our theoretical formalism modelling the waveguide field and its interaction with the resonant nuclei. This is continued in Section III which gives the solutions of the equations of motion for single as well as multiple modes. The theoretical approach for spatial patterning of the resonant layer is presented in Sec. IV, while derivations for analytic solutions for structured and unstructured layers of Mossbauer nuclei are given in Appendix C and D. Finally, in Section VI we then give explicit numeric examples, and a detailed qualitative study of these solutions for a realistically implementable two-mode waveguide. ## II Theoretical description In this section, we introduce the theoretical model for describing the waveguide field and its interaction with the resonant nuclei. We begin with a brief overview of the Gruner-Welsch quantization of the electromagnetic field in terms of the Green's functions of the macroscopic Maxwell's equations. ### Macroscopic QED The prototypical example of a Mossbauer nucleus is \({}^{57}\)Fe. The metastable internal states of nuclei are characterized primarily by their spin quantum number \(I\), and \({}^{57}\)Fe has a relatively low-lying magnetic dipole transition between the \(I_{g}=1/2\) ground states and the \(I_{e}=3/2\) ground states, with an energy of \(\hbar\omega_{0}=14.4\,\mathrm{keV}\) and an incredibly narrow width of \(\hbar\gamma=4.7\,\mathrm{neV}\). 
Due to this large energy, the nuclear transition lies well above the largest electronic resonances in the layer materials, and as such the electronic scattering is both weak, and well described as a linear dielectric. In this regime, to describe the electromagnetic propagation through the medium, we will use the Gruner-Welsch quantization of the macroscopic Maxwell's equations [33]. In this scheme, the polariton-like electromagnetic fields in the medium are quantized via Bosonic noise currents \(\hat{f}\), obey ing \[[\hat{f}_{\lambda}(\vec{r},\nu),\hat{f}^{\dagger}_{\lambda^{\prime}}( \vec{r}^{\prime},\nu^{\prime})]= \delta_{\lambda\lambda^{\prime}}\delta^{3}(\vec{r}-\vec{r}^{\prime}) \delta(\nu-\nu^{\prime}) \tag{1}\] \[[\hat{f}_{\lambda}(\vec{r},\nu),\hat{f}_{\lambda^{\prime}}(\vec{r} ^{\prime},\nu^{\prime})]= 0. \tag{2}\] Here, \(\nu\) is a formal frequency parameter, and \(\lambda=e,m\) labels the electric and magnetic polarization of the noise currents respectively. The free field Hamiltonian is then given by \[H_{F}=\sum_{\lambda=e,m}\int_{0}^{\infty}\mathrm{d}\nu\int\mathrm{d}^{3}r\, \hbar\nu\hat{f}^{\dagger}_{\lambda}(\vec{r},\nu)\hat{f}_{\lambda}(\vec{r}, \nu), \tag{3}\] ensuring that the formal frequency parameter corresponds to a Fourier frequency for the free field, \[\partial_{t}\hat{f}_{\lambda}(\vec{r},\nu,t)=-\frac{i}{\hbar}[f_{\lambda}( \vec{r},\nu,t),H_{F}]=-i\nu\hat{f}_{\lambda}(\vec{r},\nu,t). \tag{4}\] The electric and magnetic fields are then obtained from the noise currents using the dyadic Green's functions of the macroscopic Maxwell's equations of the material [57], \[\hat{E}_{+}(\vec{r},\nu) =\sum_{\lambda=e,m}\int\mathrm{d}^{3}s\stackrel{{ \longleftrightarrow}}{{\xi}}_{\lambda}(\vec{r},\vec{s},\nu)\cdot\hat{f}_{ \lambda}(\vec{s},\nu), \tag{5}\] \[\hat{E}_{-}(\vec{r},\nu) =\hat{E}_{+}(\vec{r},\nu)^{\dagger},\] (6) \[\hat{E}(\vec{r}) =\int_{0}^{\infty}\mathrm{d}\nu\left(\hat{E}_{+}(\vec{r},\nu)+ \hat{E}_{-}(\vec{r},\nu)\right),\] (7) \[\hat{B}_{+}(\vec{r},\nu) =\frac{1}{i\nu}\sum_{\lambda=e,m}\int\mathrm{d}^{3}s\,\nabla\times \stackrel{{\longleftrightarrow}}{{\xi}}_{\lambda}(\vec{r},\vec{s },\nu)\cdot\hat{f}_{\lambda}(\vec{s},\nu),\] (8) \[\hat{B}_{-}(\vec{r},\nu) =\hat{B}_{+}(\vec{r},\nu)^{\dagger},\] (9) \[\hat{B}(\vec{r}) =\int_{0}^{\infty}\mathrm{d}\nu\left(\hat{B}_{-}(\vec{r},\nu)+ \hat{B}_{+}(\vec{r},\nu)\right),\] (10) \[\stackrel{{\longleftrightarrow}}{{\xi}}_{e}(\vec{r},\vec{r}^{\prime},\nu) =i\frac{\nu^{2}}{c^{2}}\sqrt{\frac{\hbar}{\pi\varepsilon_{0}}\, \mathrm{Im}\,\varepsilon(\vec{r}^{\prime},\nu)}\overleftrightarrow{G}(\vec{r},\vec{r}^{\prime},\nu),\] (11) \[\stackrel{{\longleftrightarrow}}{{\xi}}_{m}(\vec{r},\vec{r}^{\prime},\nu) =i\frac{\nu}{c}\sqrt{\frac{\hbar}{\pi\varepsilon_{0}}\,\frac{ \mathrm{Im}\,\mu(\vec{r}^{\prime},\nu)}{|\mu(\vec{r}^{\prime},\nu)|^{2}}} \overleftrightarrow{G}(\vec{r},\vec{r}^{\prime},\nu)\times\nabla^{\prime}. \tag{12}\] In these definitions, \(\epsilon_{0},\mu_{0}\) refer to the vacuum permittivity and permeability, \(c=\frac{1}{\sqrt{\mu\epsilon_{0}}}\) the speed of light, while \(\varepsilon(\vec{r},\nu),\mu(\vec{r},\nu)\) refer to the dimensionless relative permittivity and permeability of the medium, respectively. 
We have introduced the notation \(E_{\pm}\) to denote positive and negative frequency field components, and \(\stackrel{{\longleftrightarrow}}{{G}}\) denotes the dyadic electric Green's function of the material, obeying \[\left(\nabla\times\mu^{-1}\nabla\times\quad-\frac{\nu^{2}}{c^{2}}\varepsilon \right)\stackrel{{\longleftrightarrow}}{{G}}(\vec{r},\vec{r}^{ \prime},\nu)=\stackrel{{\longleftrightarrow}}{{\delta}}(\vec{r} -\vec{r}^{\prime}). \tag{13}\] ### Nuclear Hamiltonian and Lindblad super-operators We will model the nuclei using transition operators, \[\hat{\Pi}^{(i)}_{ab}=|a\rangle\!\langle b|\,, \tag{14}\] where the bra and ket are implied to act only on the Hilbert space of the \(i\)th nucleus, and \(a,b\) are arbitrary internal states of the nucleus. These obey the commutation relations \[[\hat{\Pi}^{(i)}_{ab},\hat{\Pi}^{(j)}_{cd}]=\delta_{ij}\left(\delta_{bc}\hat{ \Pi}^{(i)}_{ad}-\delta_{da}\hat{\Pi}^{(i)}_{cb}\right). \tag{15}\] For polycrystalline ensembles of nuclei, Bragg scattering is insignificant, and we can model the nuclear layer as a continuum, with number density \(\rho(\vec{r})\). We then substitute the transition operators with an operator field, \[\hat{\Pi}^{(i)}_{ab}\rightarrow\hat{\Pi}_{ab}(\vec{r}), \tag{16}\] and (15) becomes \[[\hat{\Pi}_{ab}(\vec{r}),\hat{\Pi}_{cd}(\vec{r}^{\prime})]=\frac{1}{\rho(\vec{r })}\delta(\vec{r}-\vec{r}^{\prime})\left(\delta_{bc}\hat{\Pi}_{ad}(\vec{r})- \delta_{da}\hat{\Pi}_{cb}(\vec{r})\right). \tag{17}\] The internal nuclear Hamiltonian models the hyperfine interactions of the nucleus, such as the isomer shift, magnetic hyperfine field, and quadrupole splitting [58]. For the purposes of this article however, it is sufficient to express it in terms of the excited and ground eigenstates, \[H_{N}=\sum_{\mu\in I_{e}}\int\mathrm{d}^{3}r\,\rho(\vec{r})\hbar( \omega_{0}+\Delta_{\mu})\hat{\Pi}_{\mu\mu}(\vec{r})\\ +\sum_{j\in I_{y}}\int\mathrm{d}^{3}r\,\rho(\vec{r})\hbar\Delta_{ j}\hat{\Pi}_{jj}(\vec{r}), \tag{18}\] where we are using Greek indices such as \(\mu\) to denote excited eigenstates, and Latin indices such as \(j\) to denote ground eigenstates. Here, \(\omega_{0}\) is the reference transition frequency, while \(\Delta_{\mu},\Delta_{j}\) are the hyperfine-induced splittings. The nuclear excited states decay via both radiative (rad) and electron internal conversion (IC) channels, which can be modelled via Lindblad super-operators, \[L[\varrho] =L_{IC}[\varrho]+L_{rad.}[\varrho], \tag{19}\] \[L_{IC}[\varrho] =\sum_{\lambda,l}\Gamma_{IC}(\lambda l,I_{e}\to I_{g})\mathcal{L}_{ \lambda l}[\varrho],\] (20) \[L_{IC}[\varrho] =\sum_{\lambda,l}\Gamma_{rad}(\lambda l,I_{e}\to I_{g}) \mathcal{L}_{\lambda l}[\varrho],\] (21) \[\mathcal{L}_{\lambda l}[\varrho] =\int\mathrm{d}^{3}\vec{r}\,\rho(\vec{r})\sum_{\mu,j}\mathcal{R }(\lambda l,\mu\to j)\left(\hat{\Pi}_{j\mu}(\vec{r})\varrho\hat{\Pi}_{\mu j}( \vec{r})-\frac{1}{2}\{\varrho,\hat{\Pi}_{\mu\mu}(\vec{r})\}\right),\] (22) \[\mathcal{L}_{\lambda l}^{H}[\hat{O}] =\int\mathrm{d}^{3}\vec{r}\,\rho(\vec{r})\sum_{\mu,j}\mathcal{R }(\lambda l,\mu\to j)\left(\hat{\Pi}_{\mu j}(\vec{r})\hat{O}\hat{\Pi}_{j\mu}( \vec{r})-\frac{1}{2}\{\hat{O},\hat{\Pi}_{\mu\mu}(\vec{r})\}\right). \tag{23}\] Here, \(\lambda=\mathcal{E},\mathcal{M}\) denotes the electric or magnetic multipole character of a decay channel, while \(l\) denotes the multipole order of the decay. 
The notation \(\mathcal{L}^{H}\) denotes the Heisenberg form of the super-operator, which acts on operators \(\hat{O}\) in the Heisenberg picture as opposed to density matrices \(\varrho\) in the Schrodinger picture. The sum of the partial rates can be expressed in terms of commonly tabulated quantities, \[\sum_{\lambda,l}\Gamma_{IC}(\lambda l,I_{e}\to I_{g})= \frac{\alpha}{1+\alpha}\gamma, \tag{24}\] \[\sum_{\lambda,l}\Gamma_{rad}(\lambda l,I_{e}\to I_{g})= \frac{1}{1+\alpha}\gamma, \tag{25}\] where \(\gamma\) is the total decay rate, and \(\alpha\) the internal conversion coefficient. The rate fractions \(\mathcal{R}(\lambda l,\mu\to j)\) can be obtained in terms of the Wigner 3j symbols via [59, sec. 5.3] \[\mathcal{R}(\lambda l,\mu\to j)= \sum_{q}|C(lq,\mu\to j)|^{2}, \tag{26}\] \[C(kq,\mu\to j)= \sqrt{2I_{e}+1}\sum_{m_{e},m_{g}}\bigg{[}(-1)^{I_{e}-m_{e}}\] (27) \[\times\left\langle\mu|I_{e},m_{e}\right\rangle\left\langle I_{g},m_{g}|j\right\rangle\begin{pmatrix}I_{e}&k&I_{g}\\ -m_{e}&q&m_{g}\end{pmatrix}\bigg{]}.\] The dominant multipolarity of the resonant transition is \(\mathcal{M}1\), i.e. magnetic dipole. The coupling to the field is therefore through the magnetic transition dipole field \(\hat{m}(\vec{r})\), which can be expressed in terms of the transition operators as \[\hat{m}(\vec{r})= \hat{m}_{+}(\vec{r})+\hat{m}_{-}(\vec{r}), \tag{28}\] \[\hat{m}_{+}(\vec{r})= m_{0}\sum_{\mu,j}\vec{d}^{*}_{\mu j}\hat{\Pi}_{j\mu}(\vec{r})\] (29) \[\hat{m}_{-}(\vec{r})= \hat{m}_{+}(\vec{r})^{\dagger}. \tag{30}\] Here, we have used a generalization of the Wigner-Eckart decomposition; the prefactor \(m_{0}\) is the usual reduced matrix element of the transition dipole vectors, with magnitude \[m_{0}=\sqrt{f_{LM}\mathcal{B}(\mathcal{M}1,3/2\to 1/2)}, \tag{31}\] where \(f_{LM}\) is the Lamb-Mossbauer factor, giving the fraction of scattering in the elastic channel, while \(\mathcal{B}(\mathcal{M}1,3/2\to 1/2)\) is the reduced transition probability in Weisskopf units. The expansion vectors \(\vec{d}_{\mu j}\) are dimensionless, and given by \[\vec{d}_{\mu j}=\sum_{q=-1}^{1}\hat{e}_{q}C(kq,\mu\to j) \tag{32}\] with \(C(kq,\mu\to j)\) as defined in (27), and \(\hat{e}_{q}\) the spherical unit vectors, \[\hat{e}_{-1} =\frac{1}{\sqrt{2}}(\hat{x}-i\hat{y}) \tag{33}\] \[\hat{e}_{0} =\hat{z}\] (34) \[\hat{e}_{1} =\frac{1}{\sqrt{2}}(\hat{x}+i\hat{y}). \tag{35}\] ### Interaction Hamiltonian and Maxwell-Bloch equations For the field-nuclei coupling, as we have discussed in the previous section, the dominant multipolarity is magnetic dipole, and we therefore take the interaction Hamiltonian to be \[H_{I}=-\int\mathrm{d}^{3}r\,\rho(r)\hat{B}(\vec{r})\cdot\hat{m}(\vec{r}). \tag{36}\] We will work in the rotating frame of the nuclei, using the following interaction picture transformation \[H_{T}= H_{T,F}+H_{T,N}, \tag{37}\] \[H_{T,N}= \hbar\omega_{0}\sum_{\mu}\int\mathrm{d}^{3}r\,\rho(\vec{r})\hat{ \Pi}_{\mu\mu}(\vec{r}),\] (38) \[H_{T,F}= \hbar\omega_{0}\sum_{\lambda=e,m}\int_{0}^{\infty}\mathrm{d}\nu \int\mathrm{d}^{3}r\,f_{\lambda}^{\dagger}(\vec{r},\nu)f_{\lambda}(\vec{r}, \nu). 
\tag{39}\] The field transformations are then given by \[\hat{B}_{+}(\vec{r},\nu)\rightarrow e^{-i\omega_{0}t}\hat{B}_{+}(\vec{r},\nu), \tag{40}\] \[\hat{B}(\vec{r})\rightarrow \hat{B}(\vec{r},t)=\int_{0}^{\infty}\mathrm{d}\nu\,e^{-i\omega_{ 0}t}\hat{B}_{+}(\vec{r},\nu)+\mathrm{h.c.}\] (41) \[\hat{\Pi}_{j\mu}(\vec{r})\rightarrow e^{-i\omega_{0}t}\hat{\Pi}_{j\mu}(\vec{r}),\] (42) \[\hat{\Pi}_{\mu\nu}(\vec{r})\rightarrow \hat{\Pi}_{\mu\nu}(\vec{r}),\] (43) \[\hat{\Pi}_{jk}(\vec{r})\rightarrow \hat{\Pi}_{jk}(\vec{r}),\] (44) \[\hat{m}_{+}(\vec{r})\rightarrow e^{-i\omega_{0}t}\hat{m}_{+}(\vec{r}),\] (45) \[\hat{m}(\vec{r})\rightarrow \hat{m}(\vec{r},t)=e^{-i\omega_{0}t}\hat{m}_{+}(\vec{r})+\mathrm{ h.c.} \tag{46}\] The field equation of motion is then derived via the Heisenberg equations of motion for the field, \[\partial_{t}\hat{B}(\vec{r},t)=-\frac{i}{\hbar}[\hat{B}(\vec{r},t),H_{F}-H_{F,T }+H_{I}(t)]. \tag{47}\] In Appendix A, we show that for the Fourier transformed field, \[\hat{B}(\vec{r},\omega)=\int_{-\infty}^{\infty}\mathrm{d}t\,e^{i\omega t}\hat{ B}(\vec{r},t), \tag{48}\] applying the Kramers-Kronig relations leads one to obtain \[\hat{B}(\vec{r},\omega)=\hat{B}_{in}(\vec{r},\omega)-\\ \mu_{0}\int\mathrm{d}^{3}r^{\prime}\,\rho(r^{\prime}) \overleftrightarrow{G}_{mm}(\vec{r},\vec{r}^{\prime},\omega)\cdot\hat{m}( \vec{r}^{\prime},\omega), \tag{49}\] where, \(\hat{B}_{in}\) is the homogeneous solution in the absence of resonant nuclei, while \[\overleftrightarrow{G}_{mm}(\vec{r},\vec{r}^{\prime},\omega)=\nabla\times \overleftrightarrow{G}(\vec{r},\vec{r}^{\prime},\omega)\times\nabla^{\prime} \tag{50}\] is the Green's function of the macroscopic Maxwell's equations giving the magnetic field response of a magnetic source. This therefore demonstrates that the macroscopic Maxwell equations hold in the operator sense for the fully quantized field-nucleus interaction. We note at this stage that for the interacting system, the Fourier frequency \(\omega\) is not the same as the noise current frequency \(\nu\) defined in equations (5) through (10). Nevertheless, we may still divide the field into positive and negative frequency components corresponding to annihilation and creation operators of the noise field respectively. 
Evaluating the equation of motion of the nuclear transition operators results in the following Bloch equations (see Appendix B for details), \[\partial_{t}\hat{\Pi}_{\mu\nu}(\vec{r},t)=\left(i(\Delta_{\mu}- \Delta_{k})-\gamma\right)\hat{\Pi}_{\mu\nu}(\vec{r},t) \tag{51}\] \[\qquad\qquad\qquad+\frac{im_{0}}{\hbar}\sum_{j}\left(\hat{\Pi}_{ \mu j}(\vec{r},t)\vec{d}_{\nu j}e^{i\omega_{0}t}-\hat{\Pi}_{j\nu}(\vec{r},t) \vec{d}_{\mu j}^{*}e^{-i\omega_{0}t}\right)\cdot\hat{B}(\vec{r},t),\] \[\partial_{t}\hat{\Pi}_{jk}(\vec{r},t)=i(\Delta_{j}-\Delta_{k}) \hat{\Pi}_{jk}(\vec{r},t)+\delta_{jk}\sum_{\mu}\Gamma(\mu\to j) \hat{\Pi}_{\mu\mu}(\vec{r},t)\] (52) \[\qquad\qquad\qquad-\frac{im_{0}}{\hbar}\sum_{\mu}\left(\hat{\Pi} _{\mu k}(\vec{r},t)\vec{d}_{\mu j}e^{i\omega_{0}t}-\hat{\Pi}_{j\mu}(\vec{r},t) \vec{d}_{\mu k}^{*}e^{-i\omega_{0}t}\right)\cdot\hat{B}(\vec{r},t),\] \[\partial_{t}\hat{\Pi}_{\mu j}(\vec{r},t)=\left(i(\Delta_{\mu}- \Delta_{j})-\frac{\gamma}{2}\right)\hat{\Pi}_{\mu j}(\vec{r},t)\] (53) \[\qquad\qquad\qquad\qquad+\frac{im_{0}}{\hbar}\left(\sum_{\nu} \hat{\Pi}_{\mu\nu}(\vec{r},t)\vec{d}_{\nu j}^{*}e^{-i\omega_{0}t}-\sum_{k} \hat{\Pi}_{kj}\vec{d}_{\mu k}^{*}e^{-i\omega_{0}t}\right)\cdot\hat{B}(\vec{r},t),\] \[\partial_{t}\hat{\Pi}_{j\mu}(\vec{r},t)=\left(-i(\Delta_{\mu}- \Delta_{j})-\frac{\gamma}{2}\right)\hat{\Pi}_{j\mu}(\vec{r},t)\] (54) \[\qquad\qquad\qquad\qquad-\frac{im_{0}}{\hbar}\left(\sum_{\nu} \hat{\Pi}_{\nu\mu}(\vec{r},t)\vec{d}_{\nu j}e^{i\omega_{0}t}-\sum_{k}\hat{\Pi} _{jk}\vec{d}_{\mu k}e^{i\omega_{0}t}\right)\cdot\hat{B}(\vec{r},t).\] Here, \(\Gamma(\mu\to j)\) is the total decay rate over all channels from excited state \(\mu\) to ground state \(j\). In current experi ments, there are few resonant photons per incident pulse, and thus we can consider the linear response for the magnetization, \(\hat{\Pi}_{\mu\nu}\approx 0\), \(\hat{\Pi}_{jk}\approx\frac{\delta_{jk}}{2I_{g}+1}\). In addition, due to the very large nuclear transition frequency, the rotating wave approximation holds very well, and the positive and negative frequency components of the magnetization can be described with via a linear susceptibility tensor \(\overleftrightarrow{\chi_{m}^{\prime}}\) (see Appendix B) \[\hat{m}_{+}(\vec{r},\omega)= \frac{1}{\mu_{0}}\overleftrightarrow{\chi_{m}}(\omega)\cdot\hat{ B}_{+}(\vec{r},\omega), \tag{55}\] \[\overleftrightarrow{\chi_{m}}(\omega)= -\frac{\sigma_{res}}{k_{0}}\overleftrightarrow{F}(\omega),\] (56) \[\sigma_{res}= \frac{2\pi}{k_{0}^{2}}\frac{f_{LM}}{1+\alpha}\frac{2I_{e}+1}{2I_ {g}+1}. \tag{57}\] Here, \(\sigma_{res}\) is the cross-section of resonant scattering, \(f_{LM}\) is the Lamb-Mossbauer factor, \(\alpha\) the internal conversion coefficient, and \(k_{0}=\omega_{0}c^{-1}\) the overall transition wave-number. We note that \(\overleftrightarrow{\chi_{m}}\) has the overall dimension of volume, as we have defined \(\hat{m}(\vec{r},\omega)\) via the nuclear transition dipole moments, rather than their density. The dimensionless response tensor \(\overleftrightarrow{F}(\omega)\) is given by the sum of the Lorentzian responses of the available transitions \[\overleftrightarrow{F}(\omega)=\frac{3}{2I_{e}+1}\sum_{\mu,j}\frac{\gamma/2}{ \omega-\Delta_{\mu}+\Delta_{j}+i\gamma/2}\vec{d}^{*}_{\mu j}\otimes\vec{d}_{ \mu j}. \tag{58}\] In the case of an inhomogeneous ensemble, the response is averaged over the probability distribution of the inhomogeneous hyperfine environment of the nuclei [58]. 
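The quantities above are straightforward to evaluate numerically. The following minimal Python sketch, which is not part of the original derivation, checks that the rate fractions built from (26)-(27) sum to unity over the ground substates and polarizations for each excited substate of the unsplit \(3/2\to 1/2\) transition, and evaluates the resonant cross-section (57); the Lamb-Mossbauer factor and internal conversion coefficient used here are representative assumed values for \({}^{57}\)Fe rather than numbers quoted in the text.

```python
import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j

Ie, Ig = Rational(3, 2), Rational(1, 2)

# Branching normalization implied by eqs. (26)-(27): for each excited substate m_e,
# the sum over ground substates m_g and polarizations q of |C(1q, m_e -> m_g)|^2
# should equal 1 (the phase (-1)^(I_e - m_e) drops out of the modulus squared).
for me in [Rational(m, 2) for m in (-3, -1, 1, 3)]:
    total = 0.0
    for mg in [Rational(m, 2) for m in (-1, 1)]:
        q = me - mg                      # selection rule -m_e + q + m_g = 0
        if abs(q) > 1:
            continue
        c = np.sqrt(float(2 * Ie + 1)) * float(wigner_3j(Ie, 1, Ig, -me, q, mg))
        total += c ** 2
    print(f"m_e = {str(me):>4}: sum |C|^2 = {total:.6f}")

# Resonant cross-section of eq. (57) with representative (assumed) 57Fe parameters.
k0    = 14.4e3 / 1973.27   # transition wavenumber in 1/Angstrom
f_LM  = 0.8                # assumed Lamb-Mossbauer factor
alpha = 8.2                # assumed internal conversion coefficient
sigma = (2 * np.pi / k0**2) * (f_LM / (1 + alpha)) * float((2 * Ie + 1) / (2 * Ig + 1))
print(f"sigma_res ~ {sigma * 1e-16:.2e} cm^2")   # of order 1e-18 cm^2, i.e. a few megabarn
```

The cross-section comes out at a few megabarn, which sets the scale of the resonant attenuation coefficient \(\zeta\) introduced below.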
### Green's function for slab waveguides In Figure 1 we give a schematic view of the scattering geometry used to create a slab waveguide for resonant X-rays. The field propagates in the \(x\) direction, with refractive index gradients in the \(z\) direction used to create the waveguide structure. The waveguide bulk is translationally symmetric in the \(x\) and \(y\) directions, and since synchrotron sources are well collimated, we can take the incident field to be uniform in the \(y\) direction, making the problem effectively two-dimensional. Figure 1: Overview of scattering geometry. The waveguide is formed by a stack of dielectric layers in the \(z\) axis, while the incident beam propagates along the \(x\) axis. The field is sufficiently well collimated that it is uniform along \(y\), allowing us to consider a two dimensional problem in the \(xz\) plane. For a slab waveguide, the Green's functions are analytically known. The Green's functions can be divided into transverse electric and transverse magnetic polarization, with these components in turn being decomposed into a sum of discrete modes, and a continuum of radiative modes, \[\overleftrightarrow{G}^{s}(\vec{r},\vec{r}^{\prime},\omega)= \sum_{\lambda}\overleftrightarrow{g}^{s}_{\lambda}(z,z^{\prime},\omega)e^{iq^{s}_{\lambda}(\omega)|x-x^{\prime}|}\\ +\overleftrightarrow{G}^{s}_{rad}(z,z^{\prime},x-x^{\prime}, \omega). \tag{59}\] Here, \(s=TE,TM\) labels the polarization while \(\lambda\) labels the guided modes, which propagate with complex wavenumbers \(q^{s}_{\lambda}\), the positive imaginary parts of which give the attenuation of the guided mode. In particular, compared to the guided mode contributions, the radiative contribution is small in magnitude and very short range, and due to the correspondingly large bandwidth in momentum space results in an overall Purcell factor and Lamb shift. Thus, we will absorb it into our definition of the transition frequency and decay rate. Due to the very weak backscattering of X-rays outside the Bragg condition, we can neglect the backward propagating scattered field, with the substitution \[e^{iq^{s}_{\lambda}|x-x^{\prime}|}\rightarrow\Theta(x-x^{\prime})e^{iq^{s}_{\lambda}(x-x^{\prime})}, \tag{60}\] where \(\Theta(x)\) is the Heaviside theta distribution. Over the resonant bandwidth of the Mossbauer nuclei, the envelopes of the guided mode components of the Green's function vary very little as functions of frequency, while the wave-numbers have a dispersion of approximately \[\frac{\partial q_{\lambda}}{\partial\omega}\approx\frac{1}{c}. \tag{61}\] This linear dispersion can be eliminated by transforming operators with \[\hat{O}(x,\omega)\to e^{-i\omega x/c}\hat{O}(x,\omega), \tag{62}\] which has the effect of substituting time in the Fourier inversion with the retarded time, \[t\to t_{r}=t-\frac{x}{c}. \tag{63}\] Thus, we can simply solve for the absence of the linear dispersion, and substitute out ordinary time for the retarded time in our solution. For \({}^{57}\)Fe, with a lifetime of approximately 142 ns, the retardation is on the order of \(10^{-5}\) lifetimes per millimetre, and is thus negligible for our purposes, and we will simply use the ordinary time from this point forward. 
Within this regime, we can then approximate the Green's function as \[g_{\lambda}^{s}(z,z^{\prime},\omega) \approx g_{\lambda}^{s}(z,z^{\prime},\omega_{0}), \tag{64}\] \[q_{\lambda}^{s}(\omega) \approx q_{\lambda}^{s}(\omega_{0}), \tag{65}\] where \(\omega_{0}\) is the mean transition frequency of the nuclei. In the geometry and energy scale we have considered, the difference in reflectivity for TE and TM polarizations is negligible, and additionally the longitudinal component of the TM fields are small. Thus, we can approximate the TM components as having the same magnitude but orthogonal polarization dependence to the TE, as well as the same wave-numbers. Therefore, we can express the Green's function in the approximate form \[\overleftrightarrow{G}(\vec{r},\vec{r}^{\prime},\omega)\approx \big{(}\overleftrightarrow{1}-\hat{x}\otimes\hat{x}\big{)}\sum_{ \lambda}g_{\lambda}(z,z^{\prime})e^{iq_{\lambda}(x-x^{\prime})}, \tag{66}\] \[\overleftrightarrow{G}_{mm}(\vec{r},\vec{r}^{\prime},\omega)\approx k_{0}^{2}\overleftrightarrow{G}(\vec{r},\vec{r}^{\prime},\omega), \tag{67}\] where the guided mode envelope \(g_{\lambda}\) is given in terms of the eigenfunctions \(u_{\lambda}\) of the associated Sturm-Liouville problem for TE modes, \[g_{\lambda}(z,z^{\prime})=\frac{i}{2q_{\lambda}}u_{\lambda}(z)u _{\lambda}(z^{\prime}), \tag{68}\] \[\bigg{(}\mu(z)\partial_{z}\mu(z)^{-1}\partial_{z}+k_{0}^{2}n(z)^ {2}-q_{\lambda}^{2}\bigg{)}u_{\lambda}(z)=0, \tag{69}\] where \(n(z)=\sqrt{\mu(z)\varepsilon(z)}\) is the refractive index. The normalizable TE eigenfunctions obey the bi-orthogonality relation \[\delta_{\lambda\lambda^{\prime}}=\int_{-\infty}^{\infty}\mathrm{d}z\,\frac{1}{ \mu(z)}u_{\lambda}(z)u_{\lambda^{\prime}}(z) \tag{70}\] A generalization to the non-normalizable leaky modes is also possible, using an analogous regularization method to that of Leung _et al._[60, 61]. For determining the incident field at the air-waveguide interface, the negligible backscattering means that the field normal is approximately equal on either side of the boundary, and therefore we can take the boundary condition to simply be continuity of the field. The incident field at the interface can then be decomposed into the guided mode basis and propagated, \[\hat{B}_{in}(x,z,\omega)= \sum_{\lambda}\hat{B}_{in,\lambda}(x,z,\omega), \tag{71}\] \[\hat{B}_{in,\lambda}(x,z,\omega)= u_{\lambda}(z)e^{iq_{\lambda}x}\int_{-\infty}^{\infty} \mathrm{d}z\,\frac{1}{\mu(z)}u_{\lambda}(z)\hat{B}_{in}(0,z,\omega). \tag{72}\] For a thin resonant nuclear layer, such that the guided mode envelopes are uniform across the layer coordinate, we can take the nuclear density to be a delta function, \[\rho(\vec{r})=L\delta(z-z_{0})\rho_{N}, \tag{73}\] where \(\rho_{N}\) is the number density of the bulk material, \(L\) is the layer thickness, and \(z_{0}\) the \(z\) coordinate of the layer centre. The one-dimensional equation of motion then becomes \[\hat{B}(x,z_{0},\omega)=\hat{B}_{in}(x,z_{0},\omega)\\ -i\frac{\zeta}{2}F(\omega)\int_{0}^{x}\mathrm{d}x^{\prime}\sum_{ \lambda}\xi_{\lambda}e^{iq_{\lambda}(x-x^{\prime})}\hat{B}(x^{\prime},z_{0}, \omega), \tag{74}\] where \[\zeta=\rho_{N}\sigma_{res}, \tag{75}\] is the on-resonance attenuation coefficient for ordinary nuclear forward scattering, while \[\xi_{\lambda}=k_{0}L\frac{u_{\lambda}(z_{0})^{2}}{q_{\lambda}} \tag{76}\] is the dimensionless coupling strength for each mode \(\lambda\), relative to ordinary nuclear forward scattering. 
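To make the mode quantities in (68)-(76) concrete, one can solve the Sturm-Liouville problem (69) numerically. The sketch below, which is only an illustration and not part of the original text, discretizes (69) with \(\mu=1\) on a grid for a hypothetical symmetric slab, roughly a low-density carbon guiding layer between platinum claddings at 14.4 keV; all optical constants and thicknesses are assumed, order-of-magnitude values. For these numbers the slab supports two guided TE modes, and the script prints their effective indices \(q_{\lambda}/k_{0}\) together with the coupling strengths \(\xi_{\lambda}\) of (76) for a 1 nm resonant layer placed slightly off the symmetry plane.

```python
import numpy as np

# Illustrative optical constants n = 1 - delta + i*beta at 14.4 keV (assumed values).
delta_core, beta_core = 1.9e-6, 5e-10     # guiding layer (e.g. carbon)
delta_clad, beta_clad = 1.7e-5, 1.5e-6    # cladding (e.g. platinum)
k0 = 14.4e3 / 1973.27 * 1e10              # vacuum wavenumber, 1/m
D  = 15e-9                                # guiding-layer thickness (assumed)
L  = 1e-9                                 # resonant 57Fe layer thickness (assumed)
z0 = 3e-9                                 # layer position, slightly off-centre

Nz = 1200
z  = np.linspace(-50e-9, 50e-9, Nz)
dz = z[1] - z[0]
n  = np.where(np.abs(z) < D / 2,
              1 - delta_core + 1j * beta_core,
              1 - delta_clad + 1j * beta_clad)

# Discretized eq. (69) with mu = 1:  u'' + k0^2 n(z)^2 u = q^2 u  (Dirichlet far in the cladding)
lap = (np.diag(-2.0 * np.ones(Nz)) + np.diag(np.ones(Nz - 1), 1)
       + np.diag(np.ones(Nz - 1), -1)) / dz**2
vals, vecs = np.linalg.eig(lap + np.diag((k0 * n) ** 2))
q = np.sqrt(vals)

# Guided modes have an effective index between the cladding and the core values.
n_eff = q.real / k0
guided = np.where((n_eff > 1 - delta_clad) & (n_eff < 1 - delta_core))[0]
i0 = np.argmin(np.abs(z - z0))
for idx in guided[np.argsort(-n_eff[guided])]:
    u = vecs[:, idx]
    u = u / np.sqrt(np.sum(u * u) * dz)   # bi-orthogonal normalization, eq. (70)
    xi = k0 * L * u[i0] ** 2 / q[idx]     # coupling strength, eq. (76)
    print(f"n_eff = {n_eff[idx]:.7f}   xi = {xi.real:.3e}{xi.imag:+.3e}j")
```

Placing the resonant layer off the symmetry plane lets it couple to both the symmetric and the antisymmetric mode; a layer exactly at the core centre would not couple to the antisymmetric mode at all, since it sits at that mode's node.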
In particular, we can see that the equation of motion is similar in form to ordinary nuclear forward scattering, with the NFS equation of motion given by [48, 49] \[\hat{B}(x,\omega) =\hat{B}_{in}(x,\omega)\] \[-i\frac{n\zeta}{2}F(\omega)\int_{0}^{x}\mathrm{d}x^{\prime}\,e^{ ink_{0}(x-x^{\prime})}\hat{B}(x^{\prime},\omega). \tag{77}\] Here, we have included both the bulk medium refractive index \(n\), and the overall linear dispersion in the bulk medium wave-vector, which are usually neglected in the literature. ### Matrix form of equations of motion For many purposes it is convenient to work with the decomposition of the waveguide field into the guided modes directly. This can be expressed in a matrix-vector notation. To begin with, we define the following vector, comprising the field components of each participating mode, evaluated at the layer position, \[\vec{\beta}(x,\omega)=\begin{pmatrix}B_{1}(x,z_{0},\omega)\\ \vdots\\ B_{n}(x,z_{0},\omega)\end{pmatrix}. \tag{78}\] The total field at any \(x,z\) coordinate can then be evaluated as \[\hat{B}(x,z,\omega)=\vec{w}(z)^{\top}\cdot\vec{\beta}(x,\omega), \tag{79}\] where \[\vec{w}(z)=\begin{pmatrix}\frac{u_{1}(z)}{u_{1}(z_{0})}\\ \vdots\\ \frac{u_{n}(z)}{u_{n}(z_{0})}\end{pmatrix}. \tag{80}\] In this notation, the equations of motion read \[\vec{\beta}(x,\omega)= \vec{\beta}_{in}(x,\omega)-i\frac{\zeta}{2}F(\omega)\times \tag{81}\] \[\int_{0}^{x}\mathrm{d}x^{\prime}\exp(i(Q-\omega/c)(x-x^{\prime})) \cdot\Lambda\cdot\vec{\beta}(x^{\prime},\omega),\] where \(Q\) is the diagonal matrix of wave-numbers, \[Q_{\lambda\lambda^{\prime}}=q_{\lambda}\delta_{\lambda\lambda^{\prime}}, \tag{82}\] while \(\Lambda\) is the dimensionless rank-1 matrix describing the resonant scattering, \[\Lambda= \vec{\xi}\otimes\vec{w}(z_{0})^{\top}, \tag{83}\] \[= \vec{\xi}\otimes 1^{\top},\] \[\vec{1}= \begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}, \tag{84}\] where \(\vec{\xi}\) is the column vector of dimensionless relative coupling strengths for each mode, and we note that \(w(z)\) becomes a uniform vector when evaluated at \(z_{0}\). Compared with (77), we see that the bulk medium wave-number \(nk_{0}\) is replaced with the mode wave-number matrix \(Q\), while the effective coupling strength in the bulk medium \(n\) is replaced with the matrix \(\Lambda\). We can then take a spatial derivative to obtain \[\partial_{x}\vec{\beta}(x,\omega)=iQ\cdot\vec{\beta}(x,\omega)-i\frac{\zeta}{ 2}F(\omega)\Lambda\cdot\vec{\beta}(x,\omega). \tag{85}\] We note that in transforming to the differential form of the equations of motion, since \(\hat{B}_{in}\) is the homogeneous solution, we have \[\partial_{x}\vec{\beta}_{in}-iQ\cdot\vec{\beta}_{in}=0. \tag{86}\] ## III Solution of the equations of motion In this section, we will solve the equations of motion (85), first for a single mode waveguide, and then for the general case of multiple modes. ### Single mode solution For realistic layer materials, the leaky modes lying above the cut-off wave-number have a very small amplitude in the waveguide core compared with the guided modes, which lie below the cut-off. Thus, we can adjust the waveguide thickness appropriately, such that the desired number of modes are supported, and neglect the rest due to their small amplitudes. In this section, we will consider the simplest system, which consists of a single mode waveguide with a thin layer of resonant nuclei placed in the centre. 
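Since (85) is a linear system with coefficients that are constant in \(x\), it can also be integrated at each frequency with a matrix exponential, which provides a convenient numerical cross-check on the analytic solutions derived next. The sketch below does this for a hypothetical two-mode waveguide; the effective mode indices, mode attenuations, coupling strengths and the attenuation coefficient \(\zeta\) are assumed, order-of-magnitude values rather than quantities from the text.

```python
import numpy as np
from scipy.linalg import expm

# Two-mode numerical sketch of eq. (85): d beta/dx = i (Q - zeta/2 F(omega) Lambda) beta.
# All numbers are illustrative assumptions. The common phase exp(i k0 x) is dropped.
k0    = 14.4e3 / 1973.27 * 1e10                       # vacuum wavenumber, 1/m
n_eff = np.array([1 - 3.0e-6, 1 - 8.0e-6])            # assumed effective mode indices
dq    = k0 * (n_eff - 1) + 1j * np.array([1e3, 5e3])  # q_lambda - k0, with mode attenuation
Q     = np.diag(dq)
xi    = np.array([0.10, 0.05])                        # assumed coupling strengths, cf. eq. (76)
Lam   = np.outer(xi, np.ones(2))                      # Lambda = xi (x) 1^T, eq. (83)
zeta  = 2.0e7                                         # assumed on-resonance attenuation, 1/m

def field_at_layer(x, detuning_in_gamma):
    F = 0.5 / (detuning_in_gamma + 0.5j)              # scalar Lorentzian response, eq. (58)
    beta = expm(1j * (Q - 0.5 * zeta * F * Lam) * x) @ np.array([1.0, 1.0])
    return beta.sum()                                 # eq. (79), with w(z0) = (1, ..., 1)

# Mode interference at a fixed detuning of 2 gamma; the beat period 2 pi / (q_1 - q_2)
# is roughly 17 micron for these numbers.
for x_um in (1, 5, 9, 13, 17, 25):
    print(f"x = {x_um:2d} um : |B(x, z0)| = {abs(field_at_layer(x_um * 1e-6, 2.0)):.3f} (arb. units)")
```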
For simplicity, we will neglect hyperfine interactions, such that [58] \[\overleftrightarrow{F}(\omega)=\overleftrightarrow{1}\frac{\gamma/2}{\omega+i \gamma/2}. \tag{87}\] In this regime, the incident beam polarization is preserved, and we can treat the problem as scalar. The equation of motion for the single supported mode \(\hat{B}_{1}\) is then given by \[\hat{B}_{1}(x,\omega) =\hat{B}_{in}(x,\omega)\] \[-i\xi_{1}\frac{\zeta}{2}F(\omega)\int_{0}^{x}\mathrm{d}x^{\prime} \,e^{iq_{1}(x-x^{\prime})}\hat{B}_{1}(x^{\prime},\omega), \tag{88}\] we can see that this is of the same form as the equation for ordinary nuclear forward scattering, (77), with the attenuation length scaled by \(\xi_{1}\) and the bulk material wave-vector \(nk_{0}\) replaced by the mode wave-vector \(q_{1}\). As in ordinary nuclear forward scattering, the driving pulse is far shorter in duration than the lifetime of the nuclear transition. We can therefore approximate the driving pulse as \[\langle B_{in}(x=0,t)\rangle\rightarrow\frac{B_{0}}{\Gamma_{0}}\delta(t), \tag{89}\] where \(\Gamma_{0}\) is the bandwidth of the driving pulse, \(\delta(t)\) the Dirac delta distribution, and \(B_{0}\) the peak amplitude of the pulse. Therefore, (88) can be solved in the same manner as in ordinary nuclear forward scattering. Applying the Kagan Fourier transform method [48], we then obtain \[\langle B_{1}(x,t)\rangle= \frac{B_{0}}{\Gamma_{0}}e^{iq_{1}x}\left(\delta(t)-\Theta(t)e^{- \gamma t/2}\frac{\gamma\tau_{1}}{2}\frac{J_{1}(\sqrt{\tau_{1}\gamma t})}{ \sqrt{\tau_{1}\gamma t}}\right), \tag{90}\] \[\xi_{1}= \frac{k_{0}Lu_{1}(z_{0})^{2}}{q_{1}},\] (91) \[\tau_{1}= \xi_{1}\zeta x. \tag{92}\] Here, \(\tau_{1}\) is the effective optical depth, which differs from the bulk material optical depth \(\tau=\zeta x\) by the relative coupling strength \(\xi_{1}\). In the limit \(q_{1}\to nk_{0}\), \(\xi_{1}\to n\) we recover the ordinary nuclear forward scattering solution for a bulk material. ### Multimode solution Next, we will consider the case of multiple guided modes. This can be done using the matrix equation (81). As a first order vector differential equation, the formal solution is the following matrix exponential, \[\vec{\beta}(x,\omega)=\exp(iQx-i\frac{\zeta}{2}F(\omega)\Lambda x)\cdot\vec{ \beta}(0,\omega). \tag{93}\] For analytic Fourier inversion purposes, this solution has the drawback that each term in the series expansion of the matrix exponential is not homogeneous in powers of \(F(\omega)\). For these purposes, we will proceed to instead express the solution as a path ordered exponential. To begin, we eliminate the wave-vectors from the equation of motion by taking the exponential of \(Q\), giving the diagonal propagation matrix \(S\), that accounts for the mode attenuation and phase as \(x\) is varied, \[S(x_{f}-x_{i})=\exp(iQ(x_{f}-x_{i})). \tag{94}\] We then make the substitution \[\vec{\beta}(x,\omega)=S(x)\cdot\tilde{\beta}(x,\omega). \tag{95}\] Note that we have taken the input face of the waveguide to be at the position \(x=0\). Under this substitution, the transformed equation of motion is \[\partial_{x}\tilde{\beta}(x,\omega)=-i\frac{\zeta}{2}F(\omega)\tilde{\Lambda}( x)\cdot\tilde{\beta}(x,\omega), \tag{96}\] where \[\tilde{\Lambda}(x)=S^{-1}(x)\cdot\Lambda\cdot S(x), \tag{97}\] which has the matrix elements \[\tilde{\Lambda}_{\lambda\lambda^{\prime}}(x)=e^{i(q_{\lambda}-q_{\lambda^{ \prime}})x}\xi_{\lambda}. 
\tag{98}\] The formal solution to (96) is then the path ordered exponential \[\tilde{\beta}(x,\omega)=\mathcal{P}\exp\left(-i\frac{\zeta}{2}F(\omega)\int_{0 }^{x}\mathrm{d}x^{\prime}\,\tilde{\Lambda}(x^{\prime})\right)\tilde{\beta}(0,\omega). \tag{99}\] The full solution can then be obtained via \[B(x,\omega)=\vec{1}^{\top}\cdot S(x)\cdot\mathcal{P}\exp\left(-i\frac{\zeta}{2 }F(\omega)\int_{0}^{x}\mathrm{d}x^{\prime}\,\tilde{\Lambda}(x^{\prime}) \right)\cdot\vec{\beta}(0,\omega), \tag{100}\] where we note that since \(S(0)=\mathds{1}\), the initial condition for both the transformed and original mode vector are the same, \[\tilde{\beta}(0,\omega)=\vec{\beta}(0,\omega). \tag{101}\] This can be further simplified by defining the geometric factors \[U(x,x^{\prime})= \frac{1}{\mathrm{tr}\{\Lambda\}}\,\mathrm{tr}\{\Lambda\cdot S(x) \cdot S^{-1}(x^{\prime})\} \tag{102}\] \[= \frac{1}{\mathrm{tr}\{\Lambda\}}\,\mathrm{tr}\{\Lambda\cdot\exp( iQ(x-x^{\prime}))\}\] (103) \[= U(x-x^{\prime}), \tag{104}\] where in the last line we have noted that again due to translation symmetry the geometric factor \(U\) depends only on the difference of its arguments. These can be interpreted as the field envelope of the scattered field from position \(x^{\prime}\), evaluated at position \(x\), normalized to unit magnitude at \(x^{\prime}\). The solution can then be expressed as the following Dyson series, \[B(x,\omega)= B_{in}(x,\omega) \tag{105}\] \[-i\frac{\zeta}{2}\,\mathrm{tr}\{\Lambda\}F(\omega)\int_{0}^{x} \mathrm{d}x_{1}\,U(x-x_{1})B_{in}(x_{1},\omega)\] (106) \[-\frac{\zeta^{2}}{4}\,\mathrm{tr}\{\Lambda\}^{2}F(\omega)^{2} \int_{0}^{x}\mathrm{d}x_{1}\int_{0}^{x_{1}}\mathrm{d}x_{2}\,U(x-x_{1})U(x_{1} -x_{2})B_{in}(x_{2},\omega)\] (107) \[+\ldots \tag{108}\] where \[B_{in}(x,\omega)=\vec{1}^{\top}\cdot\exp(iQx)\cdot\vec{\beta}(0,\omega) \tag{109}\] is the usual free-field solution in the absence of the resonant nuclei. In this form, the solution's nature as a multiple scattering series becomes transparent; each term is given by the sum of all scattering amplitudes to a given order, with the overall frequency dependence for a given order \(m\) simply given by \(F(\omega)^{m}\). The spatial coefficients can be readily obtained using a recurrence relation and the Laplace transform: writing the series as \[B(x,\omega)=\sum_{n=0}^{\infty}\left(-i\frac{\zeta}{2}\,\mathrm{tr}\{\Lambda \}F(\omega)\right)^{n}t_{n}(x), \tag{110}\] we have the following recurrence relation, \[t_{n}(x)= \int_{0}^{x}\mathrm{d}x^{\prime}\,U(x-x^{\prime})t_{n-1}(x^{ \prime}), \tag{111}\] \[t_{0}(x)= B_{in}(x,\omega). \tag{112}\] Applying a Laplace transform gives us \[\tilde{t}_{n}(s)=\tilde{U}(s)\tilde{t}_{n-1}(s), \tag{113}\] where we have denoted the Laplace transformed variables with a tilde. The solution in Laplace space is therefore simply given by \[\tilde{t}_{n}(s)=\tilde{U}(s)^{n}\tilde{B}_{in}(s,\omega). \tag{114}\] In particular, the exact form of \(\tilde{U}(s),\tilde{B}_{in}(s)\) are readily evaluated from their definition, and given by \[\tilde{U}(s)= \frac{1}{\sum_{i}\xi_{i}}\sum_{i}\frac{\xi_{i}}{s-iq_{i}}, \tag{115}\] \[\tilde{B}_{in}(s,\omega)= \sum_{i}\frac{\beta_{i}(\omega)}{s-iq_{i}}. \tag{116}\] As each Laplace transformed coefficient is a rational function, the inverse transform will be a sum of polynomials multiplied by plane wave envelopes for each mode, with explicit closed-form expressions given in Appendix C. ## IV Spatial patterning So far we have considered only a single uniform resonant layer. 
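As a computational footnote to the uniform-layer solution above, the multiple-scattering recurrence (111)–(112) can also be evaluated directly by discrete convolution; the sketch below builds the first few Dyson-series coefficients \(t_n(x)\) and sums the truncated series (110). All parameter values are placeholders, and more orders may be required at large optical depth.

```python
import numpy as np

# Sketch of the recurrence t_n(x) = int_0^x dx' U(x - x') t_{n-1}(x'), Eq. (111),
# with t_0 = B_in, Eqs. (109), (112), followed by the truncated series of Eq. (110).
# Parameters are illustrative placeholders.

xi = np.array([1.0, 0.9])
q = np.array([2.0e5 + 200j, 1.7e5 + 220j])          # assumed mode wave-numbers [1/m]
beta0 = np.array([0.6, 0.4])                        # assumed mode amplitudes at x = 0

x = np.linspace(0.0, 50e-6, 4000)
dx = x[1] - x[0]

U = (xi[:, None] * np.exp(1j * np.outer(q, x))).sum(0) / xi.sum()     # Eq. (103)
t = [(beta0[:, None] * np.exp(1j * np.outer(q, x))).sum(0)]           # t_0 = B_in

for n in range(1, 6):                               # first five scattering orders
    t.append(np.convolve(U, t[-1])[:len(x)] * dx)   # discrete form of Eq. (111)

zeta, F_omega = 5e4, -1.0j                          # placeholder attenuation and response
prefactor = -0.5j * zeta * xi.sum() * F_omega       # -i*(zeta/2)*tr{Lambda}*F(omega)
B = sum(prefactor**n * tn for n, tn in enumerate(t))  # truncated Eq. (110)
```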
While the wavelength of the resonant transition is very small, on the order of angstroms, this largely contributes to an overall phase factor on the order of \(e^{ik_{0}x}\), which will be uniform across the sample. Any deviation from this overall plane wave phase factor can be expressed as a slowly varying envelope and phase, with length scales on the order of \[\delta x\approx\frac{1}{q_{\lambda}-k_{0}} \tag{117}\] for any mode \(\lambda\). In practice, these can be fairly large, with interference beats on the order of \(\mathrm{\SIUnitSymbolMicro m}\) and attenuation lengths up to \(\mathrm{cm}\) in scale. As such, this is on a scale at which it is practical to use techniques such as photolithography during sample preparation. Therefore, in this section we will consider layers that are spatially structured on the \(\mathrm{\SIUnitSymbolMicro mscale}\), and their interaction with the guided modes. ### Micro-strips The simplest system to consider is dividing the layer along the propagation direction into micrometre sized strips, Figure 2. If the strip is made sufficiently thin, such that the envelope of the scattered field is uniform across the strip dimension, it will scatter super-radiantly. To begin with, let us consider the response of a single strip. In the uniform envelope regime, the density can be taken to be \[\rho(\vec{r})=\rho_{N}L_{x}L_{z}\delta(z-z_{0})\delta(x-x_{0}), \tag{118}\] where \(L_{x}\) is the strip \(x\) extent, \(L_{z}\) its \(z\) extent, and \(x_{0},z_{0}\) the strip coordinates in the \(x,z\) plane. The equation of motion (81) of a single microstrip then becomes \[\vec{\beta}(x,\omega)=\vec{\beta}_{in}(x,\omega)-i\frac{\tau}{2}F(\omega)\exp (iQ(x-x_{0}))\cdot\Lambda\cdot\vec{\beta}(x_{0},\omega), \tag{119}\] where as before we have \[\Lambda= \vec{\xi}\otimes\mathbb{I}^{\top}, \tag{120}\] \[\xi_{\lambda}= k_{0}L_{z}\frac{u_{\lambda}(z_{0})^{2}}{q_{\lambda}}, \tag{121}\] and \(\tau=L_{x}\zeta\) is the bulk optical depth of the micro-strips \(x\) extent. ### Super-radiance of a single micro-strip To solve (119), we must solve for the self-interaction of the field. This is found by evaluating at \(x=x_{0}\), giving \[\vec{\beta}(x_{0},\omega)=\vec{\beta}_{in}(x_{0},\omega)-i\frac{\tau}{2}F( \omega)\Lambda\cdot\vec{\beta}(x_{0},\omega). \tag{122}\] The solution to this equation is given by \[\vec{\beta}(x_{0},\omega)=\left(\mathds{1}+i\frac{\tau}{2}F(\omega)\Lambda \right)^{-1}\cdot\vec{\beta}_{in}(x_{0},\omega). \tag{123}\] This can be further simplified however, by noting that \(\Lambda\) is a rank one matrix, and thus we can apply the Sherman-Morrison formula [62] to the inverse to obtain \[\left(\mathds{1}+i\frac{\tau}{2}F(\omega)\Lambda\right)^{-1}=\mathds{1}-\frac {i\frac{\tau}{2}F(\omega)\Lambda}{1+i\frac{\tau}{2}F(\omega)\operatorname{tr} \{\Lambda\}}, \tag{124}\] where we have noted \[\operatorname{tr}\{\Lambda\}=\mathbb{I}^{\top}\cdot\vec{\xi}=\sum_{\lambda} \xi_{\lambda}. \tag{125}\] Figure 2: Front coupling geometry, with resonant layer split into micro-strips with extent \(L_{x},L_{z}\), and spacing \(\Delta x\). Thus, we have \[\vec{\beta}(x_{0},\omega)=\vec{\beta}_{in}(x_{0},\omega)-\frac{i\frac{\tau}{2}F( \omega)\Lambda}{1+i\frac{\tau}{2}F(\omega)\operatorname{tr}\{\Lambda\}}\cdot \vec{\beta}_{in}(x_{0},\omega). \tag{126}\] Evaluating the total field then gives \[B(x_{0},\omega)=B_{in}(x_{0},\omega)-\frac{i\frac{\tau}{2}F(\omega) \operatorname{tr}\{\Lambda\}}{1+i\frac{\tau}{2}F(\omega)\operatorname{tr}\{ \Lambda\}}B_{in}(x_{0},\omega). 
\tag{127}\] This allows us to directly read off the relative susceptibility of \[\chi(\omega)=-\frac{i\frac{\tau}{2}F(\omega)\operatorname{tr}\{\Lambda\}}{1+i \frac{\tau}{2}F(\omega)\operatorname{tr}\{\Lambda\}}. \tag{128}\] ### Multiple scattering The transfer matrix of an array of \(N\) micro-strips of uniform size, with strip \(i\) placed at position \(x_{i}\) is given by \[W_{tot}(x_{N},x_{1},\omega)=\\ \Big{(}\mathds{1}+\chi(\omega)\tilde{\Lambda}\Big{)}\prod_{i=1}^{ N-1}\bigg{[}S(x_{i+1},x_{i})\left(\mathds{1}+\chi(\omega)\tilde{\Lambda} \right)\bigg{]}, \tag{129}\] where we have defined \[\tilde{\Lambda}=\frac{\Lambda}{\operatorname{tr}\{\Lambda\}}. \tag{130}\] The total field can then be obtained via \[B(x,\omega)=\bar{1}^{\top}\cdot S(x,x_{N})\cdot W_{tot}(x_{N},x_{1})\cdot S(x _{1},0)\cdot\beta_{in}(0,\omega). \tag{131}\] The transmission coefficient is then given by \[T(x,\omega)=\frac{B(x,\omega)}{B_{in}(0,\omega)}. \tag{132}\] We will now proceed to expand the transmission coefficient into a multiple scattering series. At each scattering order, the overall frequency dependence is path independent, given by \[\chi(\omega)^{m} \tag{133}\] for a term corresponding to \(m\) scattering events. The expansion coefficient for this order is given by the sum over the geometric factors involving \(m\) distinct sites, \[V_{m}(x)= \sum_{i_{1}<i_{2}\ldots i_{m-1}<i_{m}}U(x-x_{i_{m}}) \tag{134}\] \[\times\prod_{j=1}^{m-1}\left[U(x_{i_{j+1}}-x_{i_{j}})\right] \frac{B_{in}(x_{i_{1}},\omega)}{B_{in}(0,\omega)}, \tag{135}\] where we note that only sites between \(x,x^{\prime}\) are to be considered. We note at this stage that the transmission is dependent on the spatial profile of the incident field: for each scattering path the resultant amplitude depends on the input field at the first site in the path, which is sensitive to the relative weightings of the two modes in the incident field. The final transmission coefficient is then given by the sum over all scattering orders, \[T(x,\omega)=\sum_{m=0}^{N}\chi(\omega)^{m}V_{m}(x). \tag{136}\] We can see that while a given scattering order always gives rise to the same frequency spectrum independent of the geometry of the micro-strips, the superposition of pathways of different scattering order results in interference that is greatly geometrically dependent. In the following section we will examine specific forms of the transmission coefficient for periodic arrangements coupled to two guided modes. ## V Two mode solution: structured and unstructured layers In this section, we will investigate in detail the case of two resonant modes, for both solid and micro-patterned resonant layers. The relevant parameters for such a system are the relative coupling strengths of each mode, \(\xi_{1},\xi_{2}\), and the complex wave-numbers \(q_{1},q_{2}\). 
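To illustrate how Eqs. (129)–(132) are used in practice for the two-mode case considered here, the following sketch multiplies out the strip transfer matrices numerically. Strip positions, couplings, wave-numbers and the strip optical depth are placeholder values, and the hyperfine-free response (87) is assumed.

```python
import numpy as np

# Sketch of the micro-strip transfer-matrix product, Eqs. (129)-(132), for two modes.
# All numbers are illustrative placeholders; F(omega) follows Eq. (87).

gamma = 1.0                                          # work in units of the linewidth
xi = np.array([1.0, 0.9])
q = np.array([2.0e5 + 200j, 1.7e5 + 220j])           # assumed mode wave-numbers [1/m]
tau = 0.5                                            # assumed bulk optical depth per strip
ones = np.ones(2)
Lam_tilde = np.outer(xi, ones) / xi.sum()            # Eq. (130)

def chi(omega):                                      # single-strip susceptibility, Eq. (128)
    F = (gamma / 2) / (omega + 1j * gamma / 2)
    g = 1j * (tau / 2) * F * xi.sum()
    return -g / (1 + g)

def S(dx):                                           # free propagation, Eq. (94)
    return np.diag(np.exp(1j * q * dx))

def transmission(omega, x_strips, x_out, beta_in):
    W = np.eye(2, dtype=complex) + chi(omega) * Lam_tilde
    for i in range(len(x_strips) - 1):               # build W_tot of Eq. (129)
        W = (np.eye(2, dtype=complex) + chi(omega) * Lam_tilde) @ S(x_strips[i + 1] - x_strips[i]) @ W
    B = ones @ S(x_out - x_strips[-1]) @ W @ S(x_strips[0]) @ beta_in   # Eq. (131)
    return B / (ones @ beta_in)                      # Eq. (132)

x_strips = np.arange(10) * 5e-6                      # ten strips, 5 um apart (assumed)
T0 = transmission(0.0, x_strips, x_out=60e-6, beta_in=np.array([0.6, 0.4], dtype=complex))
```

Scanning `omega` with such a helper yields transmission spectra for arbitrary strip arrangements; the analysis below works directly with the parameters \(\xi_{1},\xi_{2}\) and \(q_{1},q_{2}\).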
To simplify the analysis, we will divide these into mean and difference, and further decompose the resonant lengths into modulus and phase, and the wave-numbers into real and imaginary parts, as follows, \[q_{1} =\bar{q}+\delta q+i\bar{\kappa}+i\delta\kappa, \tag{137}\] \[q_{2} =\bar{q}-\delta q+i\bar{\kappa}-i\delta\kappa,\] (138) \[\bar{q} =\frac{1}{2}\operatorname{Re}(q_{1}+q_{2}),\] (139) \[\bar{\kappa} =\frac{1}{2}\operatorname{Im}(q_{1}+q_{2}),\] (140) \[\delta q =\frac{1}{2}\operatorname{Re}(q_{1}-q_{2}),\] (141) \[\delta\kappa =\frac{1}{2}\operatorname{Im}(q_{1}-q_{2}),\] (142) \[\xi_{1} =|\xi_{1}|e^{i\phi_{1}}=|\xi_{1}|e^{i(\bar{\phi}+\delta\phi)},\] (143) \[\xi_{2} =|\xi_{2}|e^{i\phi_{2}}=|\xi_{2}|e^{i(\bar{\phi}-\delta\phi)},\] (144) \[\bar{\phi} =\frac{1}{2}(\phi_{1}+\phi_{2}),\] (145) \[\delta\phi =\frac{1}{2}(\phi_{1}-\phi_{2}). \tag{146}\] For an ideal lossless waveguide, \(\xi_{1},\xi_{2}\) are purely real, and thus \(\phi_{1},\phi_{2}=0\). However, for a realistic waveguide they are small but non-vanishing. ### Scattered field To begin, we will examine the geometric factor for the scattered field, given by \[U(x)=\frac{1}{\xi_{1}+\xi_{2}}\left(\xi_{1}e^{iq_{1}x}+\xi_{2}e^{iq_{2}x}\right). \tag{147}\] The common phase can be factored out, giving \[U(x)= \frac{1}{|\xi_{1}|e^{i\delta\phi}+|\xi_{2}|e^{-i\delta\phi}}e^{i( \bar{q}+i\bar{\kappa})x+i\bar{\phi}}\left(|\xi_{1}|e^{i(\delta q+i\delta\kappa)x +i\delta\phi}+|\xi_{2}|e^{-i(\delta q+i\delta\kappa)x-i\delta\phi}\right), \tag{148}\] \[U(x)|^{2} =\frac{e^{-2\kappa x}}{|\xi_{1}+\xi_{2}|^{2}}\left(|\xi_{1}|^{2}e^ {2\delta\kappa x}+|\xi_{2}|^{2}e^{-2\delta\kappa x}+2|\xi_{1}||\xi_{2}|\operatorname {Re}\!\left(e^{i([\delta q+i\delta\kappa)x+i\delta\phi]}\right)\right)\] (149) \[=\frac{e^{-2\kappa x}}{|\xi_{1}+\xi_{2}|^{2}}\left(|\xi_{1}|^{2}e ^{2\delta\kappa x}+|\xi_{2}|^{2}e^{-2\delta\kappa x}+2|\xi_{1}||\xi_{2}| \cos(\delta qx+\delta\phi)\cosh(\delta\kappa x)\right).\] In practice, as we shall see in the following section, it is possible to design waveguides such that imaginary parts of \(q_{1}\), \(q_{2}\) are close. We will therefore assume \(\delta\kappa\approx 0\), valid for sufficiently short distances. For distances long enough for the mismatch in attenuation to be an issue, the overall attenuation will be strong regardless, so in practice the effect is negligible. Thus, for negligible attenuation mismatch, (149) reaches an extremum for positions \[\delta qx+\delta\phi=\pi n\quad n\in\mathbb{Z}. \tag{150}\] Consider a strip placed at \(x_{0}=0\). Let the strips ahead of it be placed at locations \[x_{n}=\frac{\pi n-\delta\phi}{\delta q},\quad n>0. \tag{151}\] The field reaches its maximum amplitude of \[|U(x_{n})/U(0)|^{2}=e^{-2\kappa x_{n}}(|\xi_{1}|+|\xi_{2}|)^{2}. \tag{152}\] However, consider now the scattered field from \(x_{1}\). The path difference is then given by \[x_{n}-x_{1}=\frac{\pi(n-1)}{\delta q}, \tag{153}\] which is off target with the anti-nodes of the scattered field from \(x_{1}\) by a distance of \(\delta\phi/\delta q\). Therefore, it is impossible to place all the micro-strips to be completely constructive with each other unless \(\delta\phi=0\). In practice, as we shall see, for realistic waveguides this effect is small, and over the attenuation length of the cavity modes we can consider all micro-strips to be perfectly constructive. Let us turn our attention now to destructive interference. This occurs when the beat term is zero, \[\delta qx+\delta\phi=\pi(n+\frac{1}{2})\quad n\in\mathbb{Z}. 
\tag{154}\] Thus, the scattered field from a strip at \(x=0\) is completely out of phase with strips placed at locations \[x_{n}=\frac{\pi(n+\frac{1}{2})-\delta\phi}{\delta q},n>0. \tag{155}\] At these locations, the unattenuated scattered amplitude reaches its minimum value of \[|U(x_{n})/U(0)|^{2}=e^{-2\kappa x_{n}}(|\xi_{1}|-|\xi_{2}|)^{2}. \tag{156}\] As we shall see, it is possible in practice to achieve \(|\xi_{1}|,|\xi_{2}|\) very close to each other, and thus achieve a high level of destructive interference. However, note it is not possible to get total destructive interference at all positions in a periodic array: consider three micro-strips placed \(\pi/2\delta q\) apart. The second strip is transparent to the first, due to the fact that the scattered field of the first strip is completely destructively interfered. The third strip is transparent to the second. However, the third strip is located \(\pi/\delta q\) from the first, and thus the first strips field is maximal. Nevertheless, this demonstrates an intriguing sub-radiant phenomenon: a period array of micro-strips at \(\pi/2\delta q\) spacing can be divided into two non-interacting ensembles. ### Transmission coefficients for micro-strips We will now examine the transmission coefficients for two cases: placing the strips a whole beat and half beat wavelength, which we will refer to as constructively and destructively interfering ensembles, respectively. To understand the qualitative behaviour, we will consider the idealized case of no attenuation mismatch (\(\delta\kappa=0\)), and equally coupled modes (\(\xi_{1}=\xi_{2}=\xi\)). We first note that the overall envelope of \(e^{i(\bar{q}+i\bar{\kappa})x}\) can be factored out, giving us \[V_{m}(x)= e^{i(\bar{q}+i\bar{\kappa})x}\bar{V}_{m}(x), \tag{157}\] \[\bar{V}_{m}(x)= \sum_{i_{1}<i_{2}...i_{m-1}<i_{m}}\bar{U}(x-x_{i_{m}})\prod_{j=1}^{ m-1}[\bar{U}(x_{i_{j+1}}-x_{ij})]\frac{\bar{B}_{in}(x_{i_{1}},\omega)}{\bar{B}_{in}(0, \omega)},\] (158) \[\bar{U}(x)= \frac{1}{2}(e^{i\delta qx}+e^{-i\delta qx})=\cos(\delta qx),\] (159) \[\bar{B}_{in}(x,\omega)= \beta_{1}e^{i\delta qx}+\beta_{2}e^{-i\delta qx}. \tag{160}\] In the case of placing the strip locations a beat wavelength apart, \(x_{n}=\frac{(n-1)\pi}{\delta q}\), we have \[\bar{U}(x_{i}-x_{j})=\cos((i-j)\pi)=(-1)^{i-j}. \tag{161}\] The geometric factors then evaluate to \[\bar{V}_{m}(x) =\cos(\delta qx)\sum_{i_{1}<i_{2}...i_{m-1}<i_{m}}\] \[=\cos(\delta qx)\begin{pmatrix}N\\ m\end{pmatrix}, \tag{162}\] where we note that all the intermediate phase factors cancel, and the sum simply evaluates to the number of \(m\) combinations of the first \(N\) natural numbers. We then simply have \[T(x,\omega)= e^{i(\bar{q}+i\bar{\kappa})x}\cos(\delta qx)\sum_{m=0}^{N} \begin{pmatrix}N\\ m\end{pmatrix}\chi(\omega)^{m} \tag{163}\] \[= e^{i(\bar{q}+i\bar{\kappa})x}\cos(\delta qx)(1+\chi(\omega))^{N}.\] We note that this is the same as the transmission of \(N\) micro-strips interacting with a single mode, with wave-vector \(\bar{q}+i\bar{\kappa}\). On the other hand, for the destructively interfering strips, we have \(x_{n}=\frac{(n-1)\pi}{2\delta q}\). We have \[\forall m\in\mathbb{Z}:U(x_{i+2m+1}-x_{i})=\cos\!\left(\left(m+\frac{1}{2} \right)\pi\right)=0, \tag{164}\] and therefore any scattering events involving both even and odd positions are vanishing. 
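(As an aside, the constructive-ensemble result (163) can be checked directly against the brute-force path sum (134)–(136); the short sketch below does so for placeholder values of the beat wave-number, susceptibility and strip number, under the same idealizations of equal couplings, no attenuation and a symmetric drive.)

```python
import numpy as np
from itertools import combinations

# Brute-force check of Eq. (163): for strips one beat wavelength apart (equal
# couplings, no attenuation, symmetric drive), the path sum of Eqs. (134)-(136)
# collapses to cos(dq*x) * (1 + chi)^N once the common envelope is factored out.
# dq, chi and N are placeholder values.

dq, chi, N = 2.0e5, -0.03 + 0.01j, 8
x_strips = np.arange(N) * np.pi / dq                 # whole-beat spacing, Eq. (151) with delta_phi = 0

U = lambda x: np.cos(dq * x)                         # idealized geometric factor, Eq. (159)
B_in = lambda x: np.cos(dq * x)                      # symmetric drive, B_in(0) = 1, Eq. (160)

x_out = x_strips[-1]
T = B_in(x_out)                                      # m = 0 term of Eq. (136)
for m in range(1, N + 1):
    for path in combinations(x_strips, m):           # ordered site choices, Eq. (134)
        amp = U(x_out - path[-1]) * B_in(path[0])
        for j in range(m - 1):
            amp *= U(path[j + 1] - path[j])
        T += chi**m * amp

print(np.allclose(T, np.cos(dq * x_out) * (1 + chi)**N))   # expected: True
```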
We can therefore divide the ensemble into even and odd sub-ensembles, with the total transmission given by the independent transmissions of each sub-ensemble, \[T(x,\omega)= e^{i(\bar{q}+i\bar{\kappa})x}\left(T_{odd}(x,\omega)+T_{even}(x, \omega)\frac{i(\beta_{1}-\beta_{2})}{\beta_{1}+\beta_{2}}\right). \tag{165}\] Here, we have used \[\frac{B_{in}(x_{1},\omega)}{B_{in}(0,\omega)}= 1, \tag{166}\] \[\frac{B_{in}(x_{2},\omega)}{B_{in}(0,\omega)}= \frac{i(\beta_{1}-\beta_{2})}{\beta_{1}+\beta_{2}}. \tag{167}\] The even and odd transmission coefficients are themselves sensitive to whether the chain ends on an even or odd strip, with the even transmission given by \[T_{even}(x,\omega)=\left\{\begin{array}{rl}&\sin(\delta qx)(1+\chi(\omega))^ {N/2}\quad N\text{ is even},\\ &\sin(\delta qx)(1+\chi(\omega))^{(N-1)/2},\quad N\text{ is odd}.\end{array}\right. \tag{168}\] In particular, we note that if the symmetric state is driven, \(\beta_{1}=\beta_{2}\), that the even transmission will be completely vanishing, due to the fact that both the incident and scattered field would have their nodes at the even positions. The odd transmission is given by \[T_{odd}(x,\omega)=\left\{\begin{array}{rl}&\cos(\delta qx)(1+\chi(\omega)) ^{N/2},\quad N\text{ is even},\\ &\cos(\delta qx)(1+\chi(\omega))^{(N+1)/2},\quad N\text{ is odd}.\end{array}\right. \tag{169}\] The temporal evolution of these solutions can be obtained analytically, and we give the derivation of the nec essary response function in Appendix D. Specifically, in terms of the delayed response of \(n\) micro-strips, \[R_{n}(\omega)=(1+\chi(\omega))^{n}-1, \tag{170}\] the Fourier inverse of this expression is given by \[R_{n}(t)=i\nu_{0}e^{-\gamma t/2+i\nu_{0}t}L_{n-1}^{(1)}(-i\nu_{0}t), \tag{171}\] where \(L_{n-1}^{(1)}\) is a generalized Laguerre polynomial, and \[\nu_{0}=i\tau\,\mathrm{tr}\{\Lambda\}\gamma/4. \tag{172}\] An intriguing phenomenon is that for the case of even \(N\), both the even and odd transmissions have the same number of strips and therefore frequency dependence, and thus the overall frequency dependence is simply that of a single mode waveguide with \(N/2\) micro-strips. On the other hand, for odd \(N\), the odd sub-ensemble has one more strip than the even sub-ensemble, which will give rise to further interference in the time spectrum due to the superposition of two spectra with different dynamical beats. As such, the resulting temporal spectrum is sensitive not only to the number of strips in the ensemble, but the parity as well. For \(N\) even, both sub-ensembles have the same temporal response, and adjusting the position \(x\) at which the spectrum is evaluated results in only an overall re-scaling of the spectrum, Figure 3. However, for odd \(N\), the odd sub-ensemble has one more strip than the even, and the two spectra have different beat times. Adjusting the position \(x\) interpolates between these two spectra, visible as a shift in the beat, Figure 4. ## VI Numerical example: two mode waveguide As a numerical study, we will consider a waveguide with Molybdenum cladding layers, a \(1\,\mathrm{nm}\) iron layer, and \(15.8\,\mathrm{nm}\) of B\({}_{4}\)C filler on either side of the resonant layer. This wave-guide illustrates all the features developed in our model, and thus we will use it as our illustrative example. The numerically obtained parameters for this waveguide are summarized in Table 1. Figure 4: Odd parity case of Figure 3. 
Due to the odd parity, the different sub-ensembles have different beat times, and therefore shifting the observation point results in a noticeable shift of the time spectra. Figure 3: Example of temporal response of an even numbered interfering micro-strips, in a waveguide with parameters considered in Section VI. The idealized case is considered by neglecting the attenuation mismatch and the mismatch in relative coupling strengths. Three time spectra are compared at different observation points \(\delta x\) relative to the last micro-strip position, measured in terms of the interference beat phase \(\phi=\pi\delta x\delta q\). Due to the even parity, both sub-ensembles have the same time spectrum, and therefore shifting the observation point only scales the spectrum. ### Mode structure First, we illustrate the guided and leaky mode profiles in Figures 5 and 6, as a function of layer depth. This waveguide supports three guided modes, but only the even modes, i.e. those that are symmetric upon reflections about \(z_{0}\), have appreciable magnitude when evaluated at the nuclear layer. The leaky modes have similar magnitudes to the guided modes, however their attenuation is far larger, which can be observed in Figure 7. This Figure illustrates the location of the guided modes, leaky modes and branch cut in the complex \(q\) plane. Due to the larger attenuation of the leaky modes, their corresponding residues are suppressed by a proportional factor. To illustrate this, in Figure 8 we present the Fourier transformed Green's function along the real \(q\) axis. The dominant contribution by far is that of the two even guided modes, \(\lambda=1,3\), and the rest can be treated as a constant background, renormalizing the single particle decay rate. To evaluate the expansion coefficients for the input field of each mode, we assume a broadband, collimated input, with the free space field given by \[B_{free}(x,z,t)=\frac{B_{0}}{\Gamma_{0}}\delta\left(t-\frac{x}{c}\right). \tag{173}\] The Fourier transformed input field at the interface \(x=0\) is then given by \[B_{in}(0,z,\omega)=\frac{B_{0}}{\Gamma_{0}}, \tag{174}\] with the initial conditions for the mode expansions simply given by \[B_{\lambda}(0,\omega)=\frac{B_{0}}{\Gamma_{0}}\int_{-\infty}^{\infty}\mathrm{ d}z\,\frac{1}{\mu(z)}u_{\lambda}(z). \tag{175}\] The resultant input field intensity evaluated at the resonant layer is illustrated in Figure 9. Clearly visible is the beat pattern resulting from the interference of the two modes. The first guided mode has a larger relative amplitude due to the fact that it oscillates less within the waveguide core, and as such has a larger component in the uniform input profile. From Table 1, we can see that the wavelength of the interference beat between the two guided modes is approximately \(20\,\mathrm{\SIUnitSymbolMicro m}\). On the other hand, the attenuation lengths are much smaller, on the Figure 5: Normalized amplitudes of the guided modes of a molybdenum waveguide. Only the first two even modes, \(\lambda=1,3\) couple to the thin nuclear layer, giving us a two mode geometry. The layer widths have been optimized for the two modes to couple almost exactly equally to the resonant layer, giving a strong interference beat in their collective radiation field. Figure 6: Normalized amplitudes of the first few leaky modes of a molybdenum waveguide, which correspond to resonances of the radiative modes. Superimposed, and dashed, is the amplitude of the first guided mode, \(\lambda=1\). 
The exponential divergence of the leaky modes is clearly visible, demonstrating their nature as an asymptotic expansion for the near field. Although the leaky modes have amplitudes of similar magnitude to the guided modes at the resonant layer (red shading), Figures 7 and 8 demonstrate how the overall coupling strength is suppressed by their large attenuation. Figure 7: Relative mode wave-numbers and radiative mode branch cut for molybdenum waveguide. One can clearly see that leaky modes and guided modes are separated by the branch cut. The leaky modes are significantly attenuated compared to the guided modes, and as such are only relevant at very close range, on the order of \(1\,\mathrm{\SIUnitSymbolMicro m}\). millimetre scale. This motivates the definition of two Q factors for the system. The first is the 'beat Q factor', \[Q_{beat}=\frac{\delta q}{\delta\kappa}. \tag{176}\] This is to be qualitatively interpreted as the number of beats that occur before the attenuation mismatch causes visibility to diminish significantly. For this waveguide, it has a value of approximately 81. The second is the 'attenuation Q factor', \[Q_{atten}=\frac{\delta q}{\bar{\kappa}}, \tag{177}\] which measures the number of beats that occur before overall attenuation dissipates the field. For this waveguide, it is lower than \(Q_{beat}\), with a value of approximately 46. We take the overall Q factor for the collective mode to be the geometric mean of these two Q factors, as both the overall attenuation and attenuation mismatch should be minimized to optimize the cavity for long range sustained collective interference. For this waveguide, the geometric mean gives an overall Q factor of approximately 61. The overall effect of attenuation is clearly illustrated in Figure 10, which illustrates how the attenuation mismatch causes the relative strengths of the constituent fields to diverge throughout the waveguide, and thus reduces the visibility of the interference beat. To evaluate the effect of the phase mismatch between the guided modes, which evaluates to approximately \(\delta\phi=0.0252\,\mathrm{rad}\), we consider the difference between perfect constructive interference, and one that is slightly off target by \(\delta\phi\). This gives \[1-\cos(0.0252)\approx 0.03\%. \tag{178}\] As such, this is negligible, especially compared with the effects of attenuation mismatch. ### Bulk layer First, we will examine the scattered response of a bulk layer. Figure 11 gives the intensity of the scattered field as a function of propagation coordinate \(x\), as well as time. For comparison, in Figure 12 we show that overall scattered intensity resembles that of a single mode with wave-number \((q_{1}+q_{2})/2\) and optical depth \((\xi_{1}+\xi_{2})\tau/2\), where Figure 8: Fourier transformed Green’s function of a molybdenum clad waveguide, evaluated at resonant layer position. One can clearly see that only the two guided modes couple with any appreciable amplitude to the nuclei, with the leaky modes heavily suppressed by attenuation. Figure 10: On-resonance scattered field for a single micro-strip, both scaled to remove the overall attenuation (top), and unscaled (bottom). One can observe the interference beat of the two participating modes. As the collective mode is the symmetric superposition of the two participating modes, a mismatch in the attenuation lengths causes the scattered field to gradually drift out of the collective mode, clearly visible as the reduced visibility of the interference beats. 
Figure 9: Amplitudes of input fields, evaluated at the layer depth \(z_{0}\), as they propagate through the waveguide. Note the long attenuation lengths. The interference between the two modes is visible as a beat pattern with a wavelength of approximately \(20\,\mathrm{\SIUnitSymbolMicro m}\). Field is normalized by total field at beginning of resonant layer; as the input fields are initially out of phase, peak values with this choice of normalization are greater than one. \(\tau\) is the bulk material optical depth. The resemblance indicates that the resonant scattering largely occurs in the symmetric mode. The interference of the scattered field is visible in the periodic, approximately horizontal minima, which disrupt the dynamical beat of the symmetric mode. This affects both the temporal and spatial responses in different ways, with Figure 13 demonstrating that the temporal response is affected in the form missing beats. In contrast, Figure 14 demonstrates the scattered intensity as a function of the propagation coordinate, at a fixed time slice. Visible are the interplay of two, almost periodic oscillations, the shorter wavelength corresponding to the interference beats, with the larger wavelength corresponding to the spatial pattern of the dynamical beats. ### Microstrips Let us now compare the constructive and destructive scattering ensembles, for an equivalent total combined strip thickness. To begin with, in Figure 15 we illustrate the on-resonance scattered intensity along the propagation axis, in a realistic, non-ideal waveguide. One can clearly see that the constructive ensemble reaches a larger maximum, while the destructive ensemble has a greatly suppressed interference beat due to the out of phase emission of the two sub-ensembles. However, due to the attenuation mismatch, the effect is not perfect, and the contrast in peak field strength between the two ensembles is not as high as the ideal case. Figure 11: Scattered intensity as a function of both propagation coordinate \(x\) and time \(t\). Clearly visible are the approximately horizontal minimum contours, corresponding to the interference beats of the symmetric superposition of the two guided modes. For longer times, the minima are shifted closer together. White dotted horizontal and vertical lines illustrate the particular spatial and temporal slice considered in Figures 13 and 14 respectively. Figure 14: Scattered intensity as a function of propagation coordinate, for a fixed scattering time (blue, solid). The overall envelope strongly resembles that of the symmetric superposition of the two modes (orange, dashed), however the interference of the two modes is visible in a rapid modulation of the amplitude. Figure 12: Scattered intensity for the mean of the two guided modes wave-numbers and optical depths, as a function of both propagation coordinate \(x\) and time \(t\). This gives the overall envelope of Figure 11, without the modulation of the interference beat. Figure 13: Scattered intensity as a function of time, for a fixed spatial extend of waveguide (blue, solid). The overall envelope somewhat resembles that of the symmetric superposition of the two modes (orange, dashed), however the interference results in a reduced amplitude and shift for the third interference beat. Due to the narrow strip width, and the relatively large wavelength of the interference beat, the field envelope is very uniform over the strip's longitudinal extent. 
For a \(1\,\mathrm{\SIUnitSymbolMicro m}\) strip, the change in amplitude is approximately \[1-\cos(1/20)\approx 0.12\%. \tag{179}\] Thus, we can consider the strip to follow Dicke model dynamics. This can easily be seen by the susceptibility of a single strip, \[\chi(\omega)=-\frac{i\frac{\tau}{2}F(\omega)\operatorname{tr}\{\Lambda\}}{1+i \frac{\tau}{2}F(\omega)\operatorname{tr}\{\Lambda\}}=-\frac{\nu_{0}}{\omega+i \gamma/2+\nu_{0}}, \tag{180}\] where \(\nu_{0}=i\gamma\tau\operatorname{tr}\{\Lambda\}/4\). This is identical in form to the collective response of a grazing incidence Dicke mode [46, 47, 63]. Compared to the response of a single nucleus, this results in an additional overall collective Lamb shift and broadening, however the effect is small, approximately \(0.2\gamma\) for the broadening, and negligible Lamb shift. Qualitatively, the transmission spectra resemble those of nuclear forward scattering for an equivalent optical depth, as illustrated in Figure 16. As we saw in equations (171), (90), the nuclear forward scattering spectrum is reached as the limit of large strip number. This is illustrated in Figure 17, which compares the Laguerre polynomial response of a finite number of strips, to the large \(N\) Bessel function limit. One can see that for larger strip numbers the Bessel function limit and the Laguerre response match for longer times. Due to both the attenuation mismatch and the mismatch of the relative coupling strengths of the two modes, the even-odd interference phenomenon seen in Equations (168) and (169) are somewhat suppressed. This is illustrated in Figures 18 and 19, which show the temporal response of the destructively interfering ensemble for the case of 12 and 13 strips respectively. Compared to the idealized case considered in Figures 3 and 4, the attenuation mismatch causes a small shift in the beat time even for the case of an even number of strips. Figure 16: Comparison of absorption spectra for 30 constructively interfering micro-strips of \(1\,\mathrm{\SIUnitSymbolMicro m}\) width, with a single mode forward scattering spectrum with the same effective optical depth. One can see that both are qualitatively very similar. Figure 17: Comparison of scattered intensity in time domain for the microstrip Laguerre polynomial solution (solid) and solid layer Bessel function limit (dashed), for equivalent total resonant length. The responses match for short times. For larger numbers of strips, the dynamical beat of the Bessel function response matches the Laguerre response qualitatively for longer durations, however the decay of a Bessel response is more rapid. Figure 15: Comparison of on-resonance scattered intensity for super-radiant (blue) and sub-radiant (orange) geometries, with identical combined strip thickness. The super-radiant state reaches a higher peak scattered intensity, but displays the pronounced beat of the collective interference. The sub-radiant geometry displays a suppressed beat, due to the out of phase emission of the two sub-ensembles. Shading displays strip locations for constructive (top) and destructive (bottom) geometries. ## VII Conclusion We have shown that by changing the boundary conditions to forward incidence, thin film nano-structures can act as X-ray waveguides with embedded Mossbauer nuclei. In contrast to the grazing incidence boundary condition, in the forward incidence regime the explicitly broken translational symmetry results in propagation characteristics analogous to forward scattering. 
As a result, dynamical beats are observed, in contrast to the single wave-vector response of grazing incidence. We demonstrated that the interaction of multiple modes with a thin resonant layer results in interference phenomena over a significantly larger length scale than the wavelength of the nuclear transition, opening a new toolbox of geometrical design for hard X-ray quantum optics. As a particular example of the kinds of geometric effects possible, we considered patterned micro-strips, and demonstrated novel phenomena such as a temporal response that is sensitive to the even-odd parity of the ensemble number, with a reduced optical depth compared with the bulk layer. The possible geometric designs are not limited to one dimension however, and we wish to examine two-dimensional patterned ensembles in future works. In particular, ensembles that couple in a direction transverse to the propagation direction of the incident pulse do so via a transverse wave-number that is far smaller than \(k_{0}\). Thus, backscattering in these transverse directions is far more significant, and we hope that this could be used to implement bi-directionally coupled models that were otherwise unfeasible with ordinary forward scattering. While in this work we have considered only slab X-ray waveguides explicitly, our approach applies to any waveguide where the propagation is unidirectional, and the waveguide has negligible dispersion across the resonant bandwidth of the scatterer. In general, in this case the guided modes will propagate with some wave-vector with respect to this coordinate system, and the Green's function will have a similar form to the expression given in (59), with the substitution of the \(z\) coordinate with the appropriate guided mode coordinate. As such, our findings have general applicability, and could also be applied to analogous systems, such as atomic gases in hollow core fibres. The linear nuclear response described via the linear susceptibility Equation (56) is completely justified for experiments at current generation synchrotron sources, where only a few resonant photons per shot are available. However, with XFEL sources the available bandwidths are already orders of magnitudes narrower than synchrotron sources, and with the advent of seeded XFEL sources this is set to improve even further. As such, we can expect that nonlinearity could play a larger role in future experiments. In this regime, the macroscopic Maxwell's equations for the field, Equation (49), will still hold at the operator level, as long as the waveguide is cooled sufficiently such that the electronic scattering remains linear. However, the magnetization field will no longer be described by a linear susceptibility, and the full Maxwell-Bloch equations for the nucleus-field interaction will have to be considered. ###### Acknowledgements. This work was funded by the Deutsches Forschungsgemeinschaft (DFG) through Projects No. 429529648 (TRR 306 QuCoLiMa) ("Quantum Cooperativity of Light and Matter"). A.P. acknowledges support from the Heisenberg Program of the DFG. Figure 19: Odd parity case of Figure 18, equivalent to Figure 4 with the full consideration of attenutation mismatch. Due to the odd parity, the different sub-ensembles have noticeably different beat times, and therefore shifting the observation point results in a larger shift of the time spectra. Figure 18: Example of temporal response of an even numbered interfering micro-strips, for realistic parameters considered in Section VI. 
Three time spectra are compared at different observation points \(\delta x\) relative to the last micro-strip position, measured in terms of the interference beat phase \(\phi=\pi\delta x\delta q\). Because of the attenuation and coupling strength mismatch of the two modes, the sub-ensembles are not completely non-interacting, and a small shift in the beat is observed (compare with Figure 3).
2310.04942
Large Language Models for Spatial Trajectory Patterns Mining
Identifying anomalous human spatial trajectory patterns can indicate dynamic changes in mobility behavior, with applications in domains like infectious disease monitoring and elderly care. Recent advancements in large language models (LLMs) have demonstrated their ability to reason in a manner akin to humans. This presents significant potential for analyzing temporal patterns in human mobility. In this paper, we conduct empirical studies to assess the capabilities of leading LLMs like GPT-4 and Claude-2 in detecting anomalous behaviors from mobility data, by comparing them to specialized methods. Our key findings demonstrate that LLMs can attain reasonable anomaly detection performance even without any specific cues. In addition, providing contextual clues about potential irregularities can further enhance their prediction efficacy. Moreover, LLMs can provide reasonable explanations for their judgments, thereby improving transparency. Our work provides insights into the strengths and limitations of LLMs for human spatial trajectory analysis.
Zheng Zhang, Hossein Amiri, Zhenke Liu, Andreas Züfle, Liang Zhao
2023-10-07T23:21:29Z
http://arxiv.org/abs/2310.04942v1
# Large Language Models for Spatial Trajectory Patterns Mining ###### Abstract Identifying anomalous human spatial trajectory patterns can indicate dynamic changes in mobility behavior, with applications in domains like infectious disease monitoring and elderly care. Recent advancements in large language models (LLMs) have demonstrated their ability to reason in a manner akin to humans. This presents significant potential for analyzing temporal patterns in human mobility. In this paper, we conduct empirical studies to assess the capabilities of leading LLMs like GPT-4 and Claude-2 in detecting anomalous behaviors from mobility data, by comparing them to specialized methods. Our key findings demonstrate that LLMs can attain reasonable anomaly detection performance even without any specific cues. In addition, providing contextual clues about potential irregularities can further enhance their prediction efficacy. Moreover, LLMs can provide reasonable explanations for their judgments, thereby improving transparency. Our work provides insights into the strengths and limitations of LLMs for human spatial trajectory analysis. ## 1 Introduction The widespread adoption of location-enabled mobile devices has led to a massive collection of human mobility data [18], comprising diverse trajectory types from individual app usages to public transportation systems. These mobility traces can be modeled as dynamic graphs, representing sequences of location visits with associated semantics [21]. Analyzing these dynamic graphs enables valuable insights for applications like transportation mode classification and detecting spatiotemporal patterns [14; 33; 5; 38; 26; 35; 1; 10]. A particularly difficult task is identifying anomalous mobility patterns within an individual's semantic trajectories, where the trajectory significantly deviates from their historical patterns. Finding such anomalous patterns of individuals may indicate a change in behavior, which has many important applications, for instance infectious disease monitoring [17; 23; 36] or tracking elderly behaviors [25]. In recent times, there has been a surge in progress with large language models (LLMs) [22; 15] like Transformers [29], BERT [7], GPT [6], among others. These LLMs act as foundational models, which can be easily adapted for various downstream applications with minimal adjustments [6; 12; 15; 37]. Notably, breakthroughs in design and training techniques have enabled emerging abilities in LLMs, distinguishing cutting-edge models like GPT-3.5 [6], GPT-4 [20], Claude-2 [2], BARD [9], LlaMA [27], and LlaMA-2 [28] from earlier versions. For example, features such as in-context training [16] and zero-shot learning [12; 31] allow these models to adapt to tasks they were not explicitly trained for. Despite the remarkable progress LLMs have made in diverse NLP tasks like question answering (QA) and machine translation, their potential in analyzing human mobility patterns remains largely unexplored. Human mobility data, unlike typical language sequences, presents intricate spatial-temporal dynamics and rich topological connections between entities. Detecting anomalous behaviors is especially difficult due to the intrinsically unknown nature of anomalies. Existing methods typically rely on creating hand-crafted features such as the total traveled distance and use heuristic rules to determine outliers, which limits their capability to generalize effectively to detect unseen outlier patterns.
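To make the contrast with such feature-based baselines concrete, the following is a minimal sketch of one: each agent's trajectory is reduced to a single hand-crafted feature (total travelled distance) and scored by its robust deviation from the population. The function and variable names are illustrative and do not correspond to any specific method from the literature.

```python
import numpy as np

# Minimal hand-crafted-feature baseline: score each agent by how strongly their
# total travelled distance deviates from the population (robust z-score).
# `trajectories` maps an agent id to an array of (lat, lon) stay points; the
# planar distance proxy and all names are illustrative only.

def total_distance(points):
    diffs = np.diff(np.asarray(points, dtype=float), axis=0)
    return np.hypot(diffs[:, 0], diffs[:, 1]).sum()

def anomaly_scores(trajectories):
    feats = {agent: total_distance(pts) for agent, pts in trajectories.items()}
    vals = np.array(list(feats.values()))
    med = np.median(vals)
    mad = np.median(np.abs(vals - med)) + 1e-9          # avoid division by zero
    return {agent: abs(v - med) / mad for agent, v in feats.items()}  # larger = more anomalous
```

A rule of this kind can only flag deviations along the features it was given, which is precisely the limitation noted above.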
In contrast, LLMs has natural advantage since they can directly perceive natural language input. As LLMs have already shown powerful reasoning ability and generalization capabilities directly from the input prompt, it becomes intriguing to assess to what extent LLMs can detect diverse anomaly behaviors under the human mobility patterns. To systematically study the capabilities of LLMs on detecting outliers (anomalies) in human mobility trajectories, we conduct a series of empirical experiments with leading LLMs on diverse datasets. By comparing their performance to specialized human mobility anomaly detection methods, we aim to assess the potential strengths and limitations of LLMs in this domain. Critically, by altering the input prompt formats, we aim to evaluate how effectively LLMs can extract and leverage the underlying structural information from the dynamic mobility patterns to enhance their performance in subsequent tasks. Moreover, we delve into both the effectiveness and interpretability of LLMs' predictions. ## 2 Related Works In recent literature, a few preliminary studies [32; 30; 19] have made attempts to uncover the potential of applying LLMs in analyzing human mobility patterns. Xue et al. [32] propose a pipeline called AuxMobLCast that leverages pre-trained language models for human mobility forecasting by transforming numeric time series data into natural language descriptions. Experiments on real-world mobility datasets demonstrate that fine-tuning language models like BERT with mobility prompts can effectively capture sequential patterns and yield good performance for predicting future visitor numbers. Wang et al. [30] propose LLM-Mob that leverages LLMs for human mobility prediction by formatting mobility data into historical and context stays and designing effective prompts. Their experiments the potential of harnessing LLMs for mobility prediction through careful instructional prompting. Mushel et al. [19] envision a BERT-like system for trajectory analysis but note challenges like the high number of distinct GPS points compared to words, noisy trajectory data, and long unrelated trajectories, necessitating customization of BERT for trajectories rather than direct application. Although some initial studies exist, our research is the first to comprehensively examine the capability of LLMs in identifying anomalies in human mobility patterns and compare them with state-of-the-art human mobility anomaly detection algorithms. ## 3 Experiments ### Experimental Settings Datasets.We conducted the experiments on two human mobility benchmark datasets: GeoLife[39] and Patterns-of-Life[41; 11]. Brief descriptions of the datasets are as follows: * GeoLife: This dataset was created using the Microsoft Research Asia's GPS Trajectory dataset [39]. The GeoLife dataset, sourced from the GeoLife project by Microsoft Research Asia, captures the GPS trajectories of 182 users over a span of more than three years, from April 2007 to August 2012. Each trajectory in this dataset consists of time-stamped points detailing the latitude, and longitude. The dataset provides insights into leisure and sports activities, such as shopping, sightseeing, dining, hiking, and cycling, offering a comprehensive view of users' outdoor movements. We first eliminated agents with fewer than 50 records, resulting in a final count of 69 users. 
Due to the lack of ground truth anomalies, we introduced a specific outlier type called the "imposter outlier", by switching the trajectories with another agent after a specific time point. In this paper we choose 80% of the stay points of trajectories as the switching point. * Patterns-of-Life (PoL): A simulated dataset, where agents emulate human activities such as working, socializing, and more, in a real-world-like setting sourced from OpenStreetMap [4]. Throughout their simulated lives, agents navigate to diverse locations, including restaurants, workplaces, residential apartments, and recreational venues. We generate 1,000 users over a span of 4 weeks. We introduced 90 anomaly users with three specific types of abnormal behavior: (1) **Hunger outlier:** An agent under this category becomes hungry more quickly. Such agents have to go to restaurants or their homes much more often. (2) **Social outlier:** This type of agent randomly selects recreational sites to visit when needed, rather than being guided by their attributes and social network. (3) **Work outlier:** Agents in this category abstain from going to work on workdays. Comparison Methods.We compare with several unsupervised trajectory outlier detection methods, including three non-deep learning methods **OMPAD**[3], **MoNav-TT**[34] and **TRAOD**[13], and two state-of-the-art deep learning methods **DSVDD**[24] and **DAE**[40, 8]. A detailed introduction of the comparison methods can be found in Appendix A. ### LLM Detection Results Broadly, this paper focuses on studying the central question of investigating the capabilities of LLMs on identifying anomalous behaviors within human mobility patterns from three perspectives: * **Can LLMs effectively detect anomalous behaviors within human mobility patterns without any indicative information?** It is intriguing to assess whether LLMs can attain substantial predictive performance on anomaly detection tasks, even in the absence of any clue about the anomalies, e.g. such as temporal occurrence or the nature of the anomaly. * **Can providing indicative clues about the anomaly enhance the detection efficacy of LLMs?** Incorporating specific clues or hints about potential anomalies might bolster the LLM's ability to identify irregularities more accurately. By offering contextual information, it could guide the LLM to focus on certain aspects of the data and make more informed predictions. * **Can LLMs provide reasonable explanation to their judgements?** Beyond mere classification, it is imperative to observe whether LLMs can elucidate the fundamental reasoning behind their determinations. Specifically, can these models articulate the underlying rationale when predicting human mobility patterns as anomalous or normal, thereby enhancing the transparency and trustworthiness of their judgments? \begin{table} \begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline Task & Prompt to LLM \\ setting & \\ \hline **Separate** & Task: You are a human mobility trajectory behavior anomaly detector. Given a historical human trajectory information, can you analyse the pattern behind the trajectory and give an anomaly score (from 0 to 1, where larger value indicates more abnormal) of this user’s behavior? \\ & Here is the sequence of trajectory: \(<\)Sequence\(>\). \\ & Give your analysis and present your estimated anomaly score (from 0 to 1, where larger value indicates more abnormal) inside a pair of square brackets. 
\\ \hline **Combine** & Task: You are a human mobility trajectory behavior anomaly detector. Given a set of N users’ historical human trajectories information, can you analyse the pattern behind each user’s trajectory and give an anomaly score (from 0 to 1, where larger value indicates more abnormal) of users’ behavior? \\ & Here is the sequence of user 1: \(<\)Sequence-1\(>\) \\ & Here is the sequence of user N: \(<\)Sequence-N\(>\) \\ & Give your analysis and present your estimated anomaly scores about all users (from 0 to 1, where larger value indicates more abnormal): \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of different prompts used in anomaly detection experiments. We also have “Combine With Hint’ prompt which is the similar way of adding hints in ‘Separate With Hint’ prompt. A detailed prompt example is given in Appendix C due to space limitation. Prompt Settings.To systematically study the research questions above, we design two different dimensions to create input prompts for the LLMs. Specifically, (1) **With/Without Hint**: Given that an anomaly begins at a specific time point in the data, it is crucial to evaluate the performance of the LLMs whether this information is provided or not. Notably, for all comparative methods, this hint is used to divide the data into training and testing sets.; (2) **Separate vs Combine**: It would also be interesting to assess whether there is a significant difference in performance when presenting all the trajectories in a single prompt versus in separate prompts. This is because placing them all in one prompt might allow the model to consider interactions between different trajectories. In Table 1, we present examples to illustrate the details of prompt. Choices of LLMs.We opted to utilize OpenAI's state-of-the-art models, GPT-3.5 and GPT-4, via their API system, and Claude-2 by Anthropic. Specifically, we use gpt-3.5-turbo-16k-0613 and gpt-4-0613 for 'Separate' prompt, and Claude-2 for 'Combine' prompt due to its capability to hold long input prompt up to 100K input context window size. * **LLMs can effectively detect anomaly behaviors without any indicative information.** We observed that the LLM demonstrates commonable detection results on both datasets. For the Geolife dataset, Claude-2 surpasses all non-deep learning methods, achieving performance on par with the deep learning method. Both GPT-3.5 and GPT-4 also produce results that are comparable to those of other methods. This might suggest that presenting all mobility trajectories in a single prompt may lead to better performance than using separate prompts. As for the PoL dataset, the GPT-3.5 model significantly outperforms all the methods it was compared against. * **Providing additional indicative information can further enhance the detection efficacy of LLMs.** We observed that by incorporating a 'hint' into the LLMs, detection performance consistently improved across all models when tested on the Geolife dataset. Notably, Claude-2-with-hint demonstrated a significantly superior detection rate, surpassing all other comparison methods. On the other hand, there was a slight dip in performance on the PoL dataset when adding the hint. This could arguably be attributed to the LLM's ability to manage longer input temporal trajectories, as evidenced by the average length of trajectories being 52.2 for Geolife and 182.0 for PoL. 
* **LLMs is capable to provide reasonable explanation to their judgements.** Examples of generated explanations alongside predictions can be found in Appendix B. Notably, we observed that the LLMs are capable of providing cogent explanations for their prediction results. Such clarity is pivotal for ensuring transparency in anomaly detection methods. ## 4 Conclusion and Future Works In this work, we conduct empirical studies to provide insights on the strengths and limitations of large language models (LLMs) for detecting anomalous behaviors from mobility data, by comparing LLMs to specialized anomaly detection methods. Our key findings show that LLMs can achieve promising anomaly detection performance even without any specific cues about potential anomalies. Furthermore, providing contextual information about possible irregularities can enhance the prediction accuracy of LLMs. In addition, LLMs can provide explanations for their anomaly judgments, thereby improving model transparency. For future work, we plan to study the effectiveness of open \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{Geolife} & \multicolumn{3}{c}{Patterns-of-Life} \\ \cline{2-7} Model & Top-10 Hits & Top-25 Hits\({}^{*}\) & AP score & AUC score & Top-10 Hits & Top-100 Hits & AP score & AUC score \\ \hline OMPAD & 1 & 4 & 0.1665 & 0.1697 & 0 & 0 & 0.0079 & 0.4512 \\ MoNa-YT & 0 & 7 & 0.2849 & 0.3989 & 0 & 0 & 0.0094 & 0.4798 \\ TRADO & 4 & 7 & 0.1060 & 0.5498 & 0 & 1 & 0.0030 & 0.4390 \\ DSVDD & 7 & 15 & 0.6246 & 0.7714 & 1 & 2 & 0.0120 & 0.5398 \\ DAE & 5 & 12 & 0.4627 & 0.6234 & 0 & 1 & 0.0089 & 0.4649 \\ \hline GPT-3.5 & 5 & 8 & 0.4014 & 0.4979 & 0 & 6 & 0.0365 & 0.7572 \\ GPT-3.5-with-hint & 4 & 12 & 0.3741 & 0.5917 & 0 & 2 & 0.0176 & 0.6220 \\ GPT-4 & 3 & 9 & 0.2732 & 0.4417 & - & - & - & - \\ GPT-4-with-hint & 5 & 8 & 0.3181 & 0.4818 & - & - & - & - \\ Claude-2 & 4 & 13 & 0.4756 & 0.7474 & - & - & - & - \\ Claude-2-with-hint & 7 & 16 & 0.6879 & 0.8875 & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 2: Outlier detection performance for all datasets. The best performance for AP and AUC scores is highlighted for each dataset. \({}^{*}\)We report Top-25 Hits instead of Top-100 for Geolife dataset due to their size constraints on datasets. (-) denotes the absence of experiments due to the API cost issue. source LLMs such as Llama-2 models to improve model transparency. We also aim to address the issue that LLMs have difficulty processing long mobility trajectories due to the limited context window size. Moreover, we intend to evaluate our approach on additional mobility datasets. This work represents an initial exploration of applying LLMs for the important and promising task of mobility anomaly detection. We hope it will inspire more research in this direction.
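As a practical reference, a minimal sketch of how the 'Separate' prompt of Table 1 can be assembled and the bracketed anomaly score parsed from a model reply is given below; `call_llm` is a hypothetical placeholder for whichever chat-completion API is used, and the prompt wording follows Table 1.

```python
import re

# Sketch of the 'Separate' prompt (Table 1) and of extracting the bracketed anomaly
# score from a model reply. `call_llm` is a hypothetical stand-in for any
# chat-completion API; it is assumed to take a prompt string and return a reply string.

PROMPT = (
    "Task: You are a human mobility trajectory behavior anomaly detector. "
    "Given a historical human trajectory information, can you analyse the pattern behind "
    "the trajectory and give an anomaly score (from 0 to 1, where larger value indicates "
    "more abnormal) of this user's behavior?\n"
    "Here is the sequence of trajectory: {sequence}.\n"
    "Give your analysis and present your estimated anomaly score (from 0 to 1, where "
    "larger value indicates more abnormal) inside a pair of square brackets."
)

def score_user(sequence_text, call_llm):
    reply = call_llm(PROMPT.format(sequence=sequence_text))
    match = re.search(r"\[\s*([01](?:\.\d+)?)\s*\]", reply)   # e.g. "[0.85]"
    return float(match.group(1)) if match else None
```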
2306.08646
New Active Galactic Nuclei Detected by the ART-XC and eROSITA Telescopes during the First Five SRG All-Sky X-ray Surveys
We present the results of our identification of 14 X-ray sources detected in the eastern Galactic sky ($0<l<180^\circ$) in the 4-12 keV energy band on the combined map of the first five all-sky surveys (from December 2019 to March 2022) with the Mikhail Pavlinsky ART-XC telescope onboard the SRG observatory. All 14 sources are reliably detected by the SRG/eROSITA telescope in the 0.2-8 keV energy band. Six of them have been detected in X-rays for the first time, while the remaining ones have already been known previously as X-ray sources, but their nature has remained unknown. We have taken optical spectra for 12 sources with the 1.6-m AZT-33IK telescope at the Sayan Observatory (the Institute of Solar-Terrestrial Physics, the Siberian Branch of the Russian Academy of Sciences). For two more objects we have analyzed the archival spectra taken during the 6dF survey. All objects have turned out to be Seyfert galaxies (one NLSy1, three Sy1, four Sy1.9, and six Sy2) at redshifts $z=0.015-0.238$. Based on data from the eROSITA and ART-XC telescopes onboard the SRG observatory, we have obtained X-ray spectra for all objects in the energy range 0.2-12 keV. In four of them the intrinsic absorption exceeds $N_{\rm H}>10^{22}$ cm$^{-2}$ at a 90% confidence level, with one of them being probably heavily obscured ($N_{\rm H}>5\times 10^{22}$ cm$^{-2}$ with 90% confidence). This paper continues our series of publications on the identification of hard X-ray sources detected during the all-sky survey with the SRG orbital X-ray observatory.
Grigory Uskov, Sergey Sazonov, Igor Zaznobin, Rodion Burenin, Marat Gilfanov, Pavel Medvedev, Rashid Sunyaev, Roman Krivonos, Ekaterina Filippova, Georgii Khorunzhev, Maksim Eselevich
2023-06-14T17:23:23Z
http://arxiv.org/abs/2306.08646v1
New Active Galactic Nuclei Detected by the ART-XC and eROSITA Telescopes during the First Five SRG All-Sky X-ray Surveys ###### Abstract We present the results of our identification of 14 X-ray sources detected in the eastern Galactic sky (\(0<l<180\)\(\circ\) ) in the 4-12 keV energy band on the combined map of the first five all-sky surveys (from December 2019 to March 2022) with the Mikhail Pavlinsky ART-XC telescope onboard the SRG observatory. All 14 sources are reliably detected by the SRG/eROSITA telescope in the 0.2-8 keV energy band. Six of them have been detected in X-rays for the first time, while the remaining ones have already been known previously as X-ray sources, but their nature has remained unknown. We have taken optical spectra for 12 sources with the 1.6-m AZT-33IK telescope at the Sayan Observatory (the Institute of Solar-Terrestrial Physics, the Siberian Branch of the Russian Academy of Sciences). For two more objects we have analyzed the archival spectra taken during the 6dF survey. All objects have turned out to be Seyfert galaxies (one NLSy1, three Sy1, four Sy1.9, and six Sy2) at redshifts \(z=0.015\)-0.238. Based on data from the eROSITA and ART-XC telescopes onboard the SRG observatory, we have obtained X-ray spectra for all objects in the energy range 0.2-12 keV. In four of them the intrinsic absorption exceeds \(N_{\rm H}>10^{22}\) cm\({}^{-2}\) at a 90% confidence level, with one of them being probably heavily obscured (\(N_{\rm H}>5\times 10^{22}\) cm\({}^{-2}\) with 90% confidence). This paper continues our series of publications on the identification of hard X-ray sources detected during the all-sky survey with the SRG orbital X-ray observatory. active galactic nuclei, sky surveys, optical observations, redshifts, X-ray observations + Footnote †: slugcomment: Astronomy Letters, 2023 Vol. 49, No. 2, pp. 25–48; Pisma v Astronomicheskii zhurnal, 2023 Vol. 49, No. 2, pp. 97–121 ## 1 Introduction The Spectrum-RG (SRG) orbital observatory (Sunyaev et al., 2021) has conducted an all-sky X-ray survey since December 2019. There are two telescopes with grazing-incidence X-ray optics onboard the satellite: eROSITA (Predehl et al., 2021) and Mikhail Pavlinsky ART-XC (Pavlinsky et al., 2021) operating in the 0.2-9 and 4-30 keV energy bands, respectively. A total of eight full sky surveys, each with a duration of six months, are planned to be conducted. The first two surveys were completed in December 2020, and the first catalog of X-ray sources (ARTSS12) detected with the ART-XC telescope in the 4-12 keV energy band was produced from their results (Pavlinsky et al., 2022). Among 867 sources it contains dozens of astrophysical objects (the exact number is unknown, since there are false X-ray sources in the catalog) whose nature was unknown when the catalog was released, with some of them having not been detected previously in X-rays. The SRG/ART-XC all-sky survey allows representative samples of such classes of objects as active galactic nuclei (AGNs) and cataclysmic variables (CVs) to be obtained. Therefore, it is important to identify a maximally large number of new objects detected during the survey. Such a work was begun when producing the ARTSS12 catalog and is continued at present. 
The optical observations being carried out with the 1.6-m AZT-33IK telescope at the Sayan Observatory (the Institute of Solar-Terrestrial Physics, the Siberian Branch of the Russian Academy of Sciences) and the 1.5-m Russian-Turkish telescope (RTT-150) at the TUBITAK National Observatory incorporated into the SRG ground support complex play a major role in this work. The first results of this observational campaign were presented in Zaznobin et al. (2021); Uskov et al. (2022); Zaznobin et al. (2022), where the identification of 25 AGNs (including eight objects from the archival data of the spectroscopic 6dF survey, Jones et al., 2004) and three CVs was reported. In addition, during the SRG/ART-XC sky survey several X-ray binaries with neutron stars and black holes were discovered and then identified (Lutovinov et al., 2022; Mereminskiy et al., 2022). The fourth SRG all-sky survey was completed in December 2021, and approximately a third of the sky had been scanned for the fifth time by March 7, 2022. Then, the all-sky survey was suspended, and the ART-XC telescope began to conduct a deep survey of the sky along the Galactic plane. At present, the work on producing the catalog of sources detected by the ART-XC telescope based on data from the first five surveys (below we will use precisely this wording, although the fifth survey has not been completed), ARTSS1-5, is being completed. Many new objects appeared in the new catalog, and the work on their identification is underway. In this paper we present the results of our identification and classification of 14 AGNs selected among the hard X-ray sources from the ARTSS1-5 catalog in the eastern Galactic half of the sky (\(0<l<180^{\circ}\)). All these sources are reliably detected by the SRG/eROSITA telescope in the 0.2-8 keV energy band. We analyzed the broadband X-ray spectra obtained from the eROSITA and ART-XC data and the optical spectra taken by us with the AZT-33IK telescope and previously during the 6dF survey. To calculate the luminosities of the objects, we use the model of a flat Universe with parameters \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.3\).
## 2 The sample of objects, observations
Objects from the catalog of sources detected in the 4-12 keV energy band on the combined map of the first five SRG/ART-XC sky surveys (from December 12, 2019, to March 7, 2022) (the ARTSS1-5 catalog, being prepared for publication) constitute the sample. We considered only point sources from this catalog detected at a confidence level no less than 4.5 standard deviations in the half of the sky \(0<l<180^{\circ}\) (for which we also have SRG/eROSITA data). The sample includes a total of 14 objects. For all objects, Table 1 gives the coordinates of the X-ray source from the ART-XC and eROSITA data, the radius of the eROSITA position error circle (at 98% confidence), the coordinates of the suspected optical counterpart (see the Section Results), the angular separation between the X-ray source (from the ART-XC and eROSITA data) and the optical counterpart, the flux in the 4-12 keV energy band (from the ART-XC data), and the name of the observatory that detected the source in X-rays for the first time. Six sources were detected for the first time with the ART-XC and eROSITA telescopes onboard the SRG observatory. Figure 1 shows optical images of the objects being studied and the corresponding ART-XC and eROSITA position error circles of the X-ray sources. A specific extended optical object can be unambiguously associated with each X-ray source.
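As a practical aside, the angular separations between the X-ray positions and the optical counterparts listed in Table 1 are easy to verify with astropy. The snippet below is only an illustrative sketch, not part of the survey pipeline, and uses the coordinates from the first row of Table 1.

```python
# Illustrative check of the X-ray-to-optical separation for row 1 of Table 1.
from astropy.coordinates import SkyCoord
import astropy.units as u

xray = SkyCoord(ra=3.66955 * u.deg, dec=18.58246 * u.deg)      # eROSITA position
optical = SkyCoord(ra=3.66712 * u.deg, dec=18.58203 * u.deg)   # optical counterpart

sep_arcsec = xray.separation(optical).arcsec
r98 = 11.0  # 98% eROSITA position error radius from Table 1, in arcsec

print(f"separation = {sep_arcsec:.1f} arcsec (inside r98: {sep_arcsec < r98})")
# Gives a separation of about 8.4 arcsec, matching the r_e column of Table 1.
```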
### X-ray Observations Depending on their positions in the sky, the sample sources were scanned during four or five SRG all-sky surveys. The combined data of these surveys were used to construct the spectra of the sources in the energy range 0.2-12 keV. The ART-XC X-ray spectra were extracted from the all-sky survey data with the standard software that was used to process the survey data (Pavlinsky et al., 2021, 2022). The data from all seven ART-XC modules were combined. We used the data in two broad energy bands, 4-7 and 7-12 keV, that were extracted in a circle of radius \(120^{\prime\prime}\). We calibrated the count rate-to-flux conversion factors using Crab Nebula observations (see Pavlinsky et al.2022) and constructed a diagonal response matrix based on them. The background level was estimated using the data in the hard 30-70 keV energy band and the wavelet decomposition survey images (see Pavlinsky et al.2022). The eROSITA data were processed with the calibration and data processing system created and maintained at the Space Research Institute of the Russian Academy of Sciences, which uses the elements of the eASS (eROSITA Science Analysis Software System) package and the software developed by the science group on the X-ray catalog of the Russian eROSITA consortium. We extracted the source spectra in a circle of radius \(60^{\prime\prime}\) and the background spectra in a ring with an inner radius of \(120^{\prime\prime}\) and an outer radius of \(300^{\prime\prime}\) around the source. If other sources fell into the background region, then the photons in a region of radius \(40^{\prime\prime}\) around them were excluded. The spectra were extracted from the data of all seven ART-XC modules in the energy range 0.2-9.0 keV. When fitting the spectra, the data were binned in such a way that there were at least three counts in each energy channel. ### Optical Observations Our spectroscopy was carried out at the AZT-33IK telescope using the low- and medium-resolution ADAM spectrograph (Afanasiev et al., 2016; Burenin et al., 2016) (see the log of observations in Table 2). We used long slits of width \(1.5^{\prime\prime}\), \(2^{\prime\prime}\), and \(3^{\prime\prime}\)at the ADAM spectrograph. The slit center was brought into coincidence with the central region of the observed galaxy. The observations were performed at a seeing (FWHM) better than \(2.5^{\prime\prime}\). We used volume phase holographic gratings (VPHGs, grisms), 600 lines per millimeter, to take the spectra at the ADAM spectrograph. As a dispersive element we used VPHG600G for the spectral range 3650-7250 A with a resolution of 8.6 A for a \(2^{\prime\prime}\)slit and VPHG600R for the spectral range 6460-10050 A with a resolution of 12.2 A for a \(2^{\prime\prime}\)slit. When using VPHG600R, we set the OS11 filter, which removes the second interference order from the image. A thick e2v CCD30-11 array produced by the deep depletion technology is installed at the spectrograph. This allows the spectroscopic images to be obtained at a wavelength of 10 000 A without interference on the thin CCD substrate. All our observations were performed with zero slit position angle. After each series of spectroscopic images for each object, we obtained the calibration images of a lamp with a continuum spectrum and the line spectrum of a He-Ne-Ar lamp. On each observing night we took the spectra of spectrophotometric standards from the ESO1 list for all of the sets of diffraction gratings and slits being used. 
The spectrophotometric standards were chosen so that they were approximately at the same elevation with the optical source observed by us. The data reduction was performed using the IRAF2 software and our own software. The flux calibration was performed by standard IRAF procedures from the onedepsee package. Footnote 1: [https://www.eso.org/sci/observing/tools/standards](https://www.eso.org/sci/observing/tools/standards) Footnote 2: [http://iraf.noao.edu/](http://iraf.noao.edu/) The spectra of the objects were corrected for interstellar \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{eROSITA source} & \multicolumn{3}{c}{Optical counterpart} & \\ \cline{2-9} N\({}^{\rm u}\) & ART-XC source & \(\alpha\) & \(\delta\) & \(r98\) & \(\alpha\) & \(\delta\) & \(r_{\rm A}\) & \(r_{\rm e}\) & \(F_{\rm A}^{4-12}\) & Discovered \\ \hline 1 & SRGA J001439.6+183503 & 3.66955 & 18.58246 & 11.0\({}^{\prime\prime}\) & 3.66712 & 18.58203 & 10.6\({}^{\prime\prime}\) & 8.4\({}^{\prime\prime}\) & 4.7\({}^{+1.8}_{-1.5}\) & XMM, Swift \\ 2 & SRGA J002240.8+804348 & 5.68178 & 80.72962 & 2.2\({}^{\prime\prime}\) & 5.68204 & 80.72947 & 7.2\({}^{\prime\prime}\) & 0.6\({}^{\prime\prime}\) & 2.8\({}^{+0.9}_{-0.8}\) & ROSAT \\ 3 & SRGA J010742.9+574419 & 16.92936 & 57.73894 & 2.7\({}^{\prime\prime}\) & 16.92964 & 57.73825 & 2.2\({}^{\prime\prime}\) & 2.6\({}^{\prime\prime}\) & 3.0\({}^{+1.2}_{-1.0}\) & SRG \\ 4 & SRGA J021227.3+520953 & 33.11066 & 52.16487 & 2.5\({}^{\prime\prime}\) & 33.11032 & 52.16483 & 7.6\({}^{\prime\prime}\) & 0.8\({}^{\prime\prime}\) & 1.4\({}^{+1.1}_{-0.9}\) & ROSAT \\ 5 & SRGA J025208.4+482955 & 43.04074 & 48.49992 & 2.7\({}^{\prime\prime}\) & 43.04017 & 48.49983 & 13.1\({}^{\prime\prime}\) & 1.4\({}^{\prime\prime}\) & 2.4\({}^{+1.4}_{-1.1}\) & ROSAT \\ 6 & SRGA J045432.1+524003 & 73.63236 & 52.66875 & 2.8\({}^{\prime\prime}\) & 73.63262 & 52.66847 & 4.3\({}^{\prime\prime}\) & 1.2\({}^{\prime\prime}\) & 4.1\({}^{+1.8}_{-1.5}\) & SRG \\ 7 & SRGA J051313.5+662747 & 78.31903 & 66.46429 & 3.2\({}^{\prime\prime}\) & 78.31846 & 66.46398 & 17.9\({}^{\prime\prime}\) & 1.4\({}^{\prime\prime}\) & 3.2\({}^{+1.5}_{-1.3}\) & Swift \\ 8 & SRGA J110945.8+800815 & 167.43408 & 80.13535 & 5.5\({}^{\prime\prime}\) & 167.43237 & 80.13489 & 10.8\({}^{\prime\prime}\) & 2.0\({}^{\prime\prime}\) & 2.2\({}^{+1.3}_{-1.1}\) & SRG \\ 9 & SRGA J161251.4\(-\)052100 & 243.21307 & -5.35506 & 3.0\({}^{\prime\prime}\) & 243.21342 & -5.35485 & 17.7\({}^{\prime\prime}\) & 1.5\({}^{\prime\prime}\) & 2.4\({}^{+1.4}_{-1.2}\) & ROSAT \\ 10 & SRGA J161943.7\(-\)132609 & 244.93418 & -13.43768 & 3.4\({}^{\prime\prime}\) & 244.93354 & -13.43781 & 8.7\({}^{\prime\prime}\) & 2.3\({}^{\prime\prime}\) & 2.9\({}^{+1.5}_{-1.2}\) & SRG \\ 11 & SRGA J182109.8+765819 & 275.29902 & 76.97126 & 5.4\({}^{\prime\prime}\) & 275.29846 & 76.97139 & 6.5\({}^{\prime\prime}\) & 0.7\({}^{\prime\prime}\) & 1.3\({}^{+0.6}_{-0.5}\) & SRG \\ 12 & SRGA J193707.6+660816 & 294.28375 & 66.13904 & 2.1\({}^{\prime\prime}\) & 294.28417 & 66.13925 & 6.4\({}^{\prime\prime}\) & 1.0\({}^{\prime\prime}\) & 0.8\({}^{+0.5}_{-0.5}\) & ROSAT \\ 13 & SRGA J200331.2+701332 & 300.89093 & 70.22678 & 2.2\({}^{\prime\prime}\) & 300.89162 & 70.22692 & 15.0\({}^{\prime\prime}\) & 1.0\({}^{\prime\prime}\) & 1.4\({}^{+0.5}_{-0.5}\) & ROSAT \\ 14 & SRGA J211149.5+722815 & 317.96266 & 72.47104 & 3.0\({}^{\prime\prime}\) & 317.96575 & 72.47122 & 10.4\({}^{\prime\prime}\) & 3.4\({}^{\prime\prime}\) & 0.9\({}^{+0.6}_{-0.6}\) & SRG \\ \hline 
\end{tabular} Column 1: the ordinal source number in the sample being studied. Column 2: the source name from the preliminary ARTSS1-5 catalog (the coordinates of the X-ray sources used in the names are given for epoch J2000.0). Column 3, 4: the source coordinates from the eROSITA data. Column 5: the radius of the 98% eROSITA position error circle. Column 6, 7: the coordinates of the suspected optical counterpart. Column 8: the angular separation between the ART-XC source and the optical counterpart. Column 9: the angular separation between the eROSITA source and the optical counterpart. Column 10: the average 4–12 keV X-ray flux from the sum of five ART-XC sky surveys, in units of 10\({}^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). Column 11: the orbital observatory that detected the X-ray source for the first time. \end{table} Table 1: The sample of objects. \begin{table} \begin{tabular}{c c c c c} \hline \hline ART-XC source & Date & Telescope & Grism & Slit, \({}^{\prime\prime}\) & Exposure time, s \\ \hline SRGA J001439.6+183503 & 2022-10-31 & AZT-33IK & VPHG600G & 3 & \(5\times 300\) \\ SRGA J002240.8+804348 & 2022-10-31 & AZT-33IK & VPHG600G & 2 & \(3\times 600\) \\ & 2022-11-01 & AZT-33IK & VPHG600R & 2 & \(3\times 600\) \\ SRGA J010742.9+574419 & 2022-03-04 & AZT-33IK & VPHG600G & 3 & \(6\times 600\) \\ SRGA J021227.3+520953 & 2022-11-18 & AZT-33IK & VPHG600G & 2 & \(4\times 600\) \\ & 2022-11-21 & AZT-33IK & VPHG600R & 2 & \(4\times 600\) \\ SRGA J025208.4+482955 & 2022-11-01 & AZT-33IK & VPHG600G & 2 & \(3\times 600\) \\ SRGA J045432.1+524003 & 2022-11-01 & AZT-33IK & VPHG600G & 2 & \(3\times 600\) \\ SRGA J051313.5+662747 & 2022-11-01 & AZT-33IK & VPHG600G & 2 & \(5\times 300\) \\ SRGA J110945.8+800815 & 2022-11-03 & AZT-33IK & VPHG600G & 2 & \(5\times 300\) \\ & 2022-11-03 & AZT-33IK & VPHG600G & 2 & \(5\times 300\) \\ & 2022-11-03 & AZT-33IK & VPHG600G & 2 & \(2\times 300\) \\ SRGA J161251.4\(-\)052100 & 2003-05-30 & UKST & VPHG600G & 2 & \(3\times 600\) \\ & 2023-05-30 & UKST & VPHG425R extinction with the deredden procedure from the onedspec IRAF package in a standard way (Cardelli et al., 1989). The color excess \(E(B-V)\) was determined with the help of the GALExtin3 service using the model of Schlegel et al. (1998). We took \(R_{V}=2.742\) from Schlafly & Finkbeiner (2011). Footnote 3: www.galextin.org For two objects from the sample we analyzed the archival spectroscopic data from the 6dF survey (Jones et al., 2009). This survey was conducted at the UKST 1.2-m Schmidt telescope using a multifiber spectrograph with a 5.7\(\circ\) field of view equipped with two low-resolution (\(R\approx 1000\)) gratings with overlapping spectral ranges. The range 4000-7500 A was completely covered. The spectra taken during the survey were not flux-calibrated and are presented in counts, which does not allow the absolute fluxes in emission lines to be measured. However, these data can be used to estimate the line equivalent widths and the ratios of the fluxes in pairs of closely spaced lines, which is quite enough for the classification of AGNs. ## 3 Results ### X-ray Spectra Our spectral analysis was performed jointly using the eROSITA and ART-XC data. The spectra were fitted in the energy range 0.2-12 keV with the XSPEC v12.12.04 software (Arnaud, 1996). The \(W\)-statistic that takes into account the X-ray background was used for our model fitting. 
Footnote 4: [https://heasarc.gsfc.nasa.gov/xanadu/xspec](https://heasarc.gsfc.nasa.gov/xanadu/xspec) To fit the spectra, we used the model of a power-law continuum with a low-energy cutoff due to photoabsorption in the Galaxy and the object itself. In the spectra of several sources with large intrinsic absorption we detected an excess of the observed counts compared to the prediction of the power-law model at energies below \(\sim 1\) keV. This excess can be caused by a slight inaccuracy in the current version of the eROSITA response matrix. On the other hand, in the X-ray spectra of type 2 AGNs at energies below 2 keV additional emission is often observed (see, e.g., Guainazzi et al., 2005) against the background of an absorbed power-law continuum. The nature of this emission can be varied, and it is by no means always possible to establish it. The following mechanisms are discussed in the literature as possibilities (see, e.g., Guainazzi & Bianchi, 2007): (1) the emission from the central source scattered in the rarefied gas outside the dusty torus around the supermassive black hole (SMBH), (2) the emission from the gas in the galactic nucleus photoionized by the emission and/or shocks associated with the SMBH activity, and (3) the emission from the galaxy itself associated with active star formation. Determining the nature of the observed excess emission at low energies requires refining the eROSITA response matrix, which is beyond the scope of this paper. Therefore, when fitting the spectra of the sources with such an excess, we included an additional soft component that was described by the thermal hot optically thin plasma emission spectrum (APEC, Smith et al., 2001) in the spectral model. Thus, we used the following two models in XSPEC: \[TBabs(zTBabs(cflux\ zpowerlaw))\] \[TBabs(zTBabs(cflux\ zpowerlaw)+apec)\] where TBabs is the absorption in the Galaxy from HI4PI data (HI4PI Collaboration et al., 2016), zTBabs is the intrinsic absorption in the AGN frame, and cflux is the absorption-corrected flux of the power-law component in the 2-10 keV energy band. When making a decision about the necessity of adding the soft component to the model, we used a likelihood ratio test: if \(Cstat\) decreased by more than 6 (corresponding to a statistical significance of more than 95% for two degrees of freedom) when adding the soft component, then preference was given to the two-component model. The X-ray spectrum fitting results are presented in Table 3. The 90% confidence intervals of the parameters are given. The spectra themselves are presented in Fig. 4, with the eROSITA spectra having been rebinned for clarity. We will reiterate that when interpreting the spectral parameters given in Table 3, it should be kept in mind that the final conclusion about the nature of the excess at low energies requires a further study and a refinement of the eROSITA response matrix, which is planned to be done in our succeeding papers. It is important to note that the power-law parameters (the slope and the hydrogen column density) do not change greatly when adding the soft component to the model. ### Optical Spectra Standard criteria based on the emission line flux ratios (Osterbrock, 1981; Veron-Cetty et al., 2001) were used to classify the Seyfert galaxies. The spectral continuum was fitted by a polynomial, while the emission lines were fitted by Gaussians. Thus, for each line we determined the central wavelength, the full width at half maximum \(FWHM_{\rm ms}\), the flux, and the equivalent width \(EW\).
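To illustrate the line-measurement step just described (a polynomial continuum plus a Gaussian per emission line, yielding the centre, FWHM, flux, and equivalent width; the instrumental-resolution correction described in the next paragraph is then applied), here is a simplified sketch on a synthetic spectrum. It is only a stand-in for the actual fitting code, and all numbers are made up.

```python
# Simplified sketch: fit one emission line with a Gaussian on a linear continuum
# and derive the measured FWHM, line flux and equivalent width (EW).
import numpy as np
from scipy.optimize import curve_fit

def model(lam, amp, mu, sigma, c0, c1):
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2) + c0 + c1 * lam

# Synthetic spectrum around a redshifted H-alpha line (illustrative numbers only)
lam = np.linspace(6950.0, 7090.0, 300)
flux = model(lam, 8.0, 7020.0, 6.0, 20.0, 0.0) + np.random.normal(0.0, 0.5, lam.size)

popt, _ = curve_fit(model, lam, flux, p0=[5.0, 7020.0, 5.0, 20.0, 0.0])
amp, mu, sigma, c0, c1 = popt

fwhm_ms = 2.3548 * sigma                        # measured FWHM = 2*sqrt(2*ln2)*sigma, in Angstrom
fwhm_kms = fwhm_ms / mu * 299792.458            # the same in km/s, before the instrumental
                                                # width is subtracted in quadrature
line_flux = amp * sigma * np.sqrt(2.0 * np.pi)  # area under the Gaussian
ew = line_flux / (c0 + c1 * mu)                 # equivalent width in Angstrom

print(f"centre={mu:.1f} A  FWHM={fwhm_kms:.0f} km/s  flux={line_flux:.1f}  EW={ew:.1f} A")
```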
The \(FWHM\) of the broad Balmer lines was corrected for the spectral resolution of the instrument: \(FWHM=\sqrt{FWHM_{\rm ms}^{2}-FWHM_{\rm res}^{2}}\), where \(FWHM_{\rm res}\) was determined for each dispersive element and each slit as the \(FWHM\) of the lines in the calibration lamp spectrum. The errors of the emission line parameters are given at 68% confidence. The confidence level of the redshift was determined as the error of the mean narrow-line redshift. The measured \(FWHM\) of the narrow emission lines are consistent with the instrumental broadening and, therefore, the values of \(FWHM\) are not given for them. The confidence intervals for the line equivalent widths (\(EW\)) were obtained by the Monte Carlo method. Assuming that the flux errors obeyed a normal distribution, we selected 1000 spectrum realizations. Then, for each of the realizations we estimated \(EW\). The confidence intervals were estimated from the derived \(EW\) distribution. To obtain an upper limit on the line flux, we fixed the center of the Gaussian and took its width to be equal to the instrumental broadening. The estimated line parameters for each of the sources are given below in Table 6. The redshifts of the objects were determined from the narrow emission lines and are given in the observatory frame. For the sources from the spectroscopic 6dF survey we used the redshifts from the same catalog. The results of the classification of sources and their redshift measurements are presented in Table 4.
Figure 1: Optical images in the r filter from the PanSTARRS PS1 survey (Chambers et al., 2016). The large and small circles indicate the ART-XC (radius 30\({}^{\prime\prime}\)) and eROSITA (see \(r98\) in Table 1) position error circles of the X-ray sources, respectively. The arrow indicates the optical objects whose spectra are analyzed in this paper.
### Results on Individual Objects #### 3.3.1 SRGA J001439.6+183503 This X-ray source is present in the catalog of the XMM-Newton slew survey (XMM-SSC 2018; Saxton et al. 2008) and the catalog of point sources detected by XRT onboard the Swift observatory (Evans et al., 2019): the sources XMMSL2 J001439.6+183450 and 2SXPS J001440.0+183455, respectively. There is the edge-on galaxy NGC52 in the ART-XC and eROSITA position error circles (Fig. 1). It is located at redshift \(z=0.01817\) (according to the SIMBAD database) and has an infrared color \(W1-W2=0.26\). The radio source NVSS J001440+183455 can also be associated with this object.
Weak narrow H\(\alpha\) and [NII]\(\lambda\)6583 emission lines are seen \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{6}{c}{TBabs zTBabs(ZPL), TBabs (zTBabs ZPL + APEC) models} \\ ART-XC source & \(N_{\rm H,MW}\) & \(N_{\rm H}\) & \(\Gamma\) & \(F_{\rm PL}\) & \(kT\) & \(A_{\rm APEC}\) & dof & Cstat \\ \hline SRGA J001439.6+183503 & 0.4 & \(109^{+85}_{-57}\) & \(1.6^{+1.8}_{-1.4}\) & \(5.15^{+4.39}_{-1.81}\) & \(-\) & \(-\) & 19 & 33.6 \\ & & \(115^{+88}_{-57}\) & \(1.7^{+1.8}_{-1.4}\) & \(5.33^{+2.95}_{-1.87}\) & \(0.77^{+0.29}_{-0.22}\) & \(0.5^{+0.4}_{-0.3}\) & \(\times 10^{-5}\) & 17 & 21.8 \\ SRGA J002240.8+804348 & 1.4 & \(0.4^{+0.3}_{-0.3}\) & \(1.90^{+0.16}_{-0.15}\) & \(3.11^{+0.51}_{-0.45}\) & \(-\) & & & 279 & 265.2 \\ SRGA J010742.9+574419 & 3.2 & \(<3.1\) & \(1.9^{+0.4}_{-0.4}\) & \(1.24^{+0.49}_{-0.37}\) & \(-\) & & & 118 & 123.1 \\ SRGA J021227.3+520953 & 1.5 & \(<0.8\) & \(2.04^{+0.37}_{-0.14}\) & \(0.95\pm 0.19\) & \(-\) & & & 155 & 146.5 \\ SRGA J025208.4+482955 & 1.8 & \(3.2^{+1.4}_{-1.2}\) & \(1.7^{+0.4}_{-0.3}\) & \(2.04^{+0.69}_{-0.55}\) & \(-\) & & & 104 & 110.5 \\ SRGA J045432.1+524003 & 3.4 & \(7.7^{+2.9}_{-2.5}\) & \(1.5^{+0.3}_{-0.3}\) & \(6.73^{+1.44}_{-1.24}\) & \(-\) & & & 102 & 102.1 \\ SRGA J051313.5+662747 & 0.9 & \(11^{+5}_{-4}\) & \(1.5^{+0.6}_{-0.5}\) & \(3.32^{+1.05}_{-0.86}\) & \(-\) & & & 60 & 71.0 \\ & & \(15^{+7}_{-5}\) & \(1.9^{+0.7}_{-0.6}\) & \(3.18^{+1.01}_{-0.81}\) & \(0.24^{+0.19}_{-0.19}\) & \(0.19^{+78}_{-0.11}\) & \(\times 10^{-4}\) & 58 & 58.9 \\ SRGA J110945.8+800815 & 0.4 & \(1.8^{+2.5}_{-1.7}\) & \(0.7^{+0.5}_{-0.4}\) & \(1.48^{+0.72}_{-0.55}\) & \(-\) & & & 47 & 50.3 \\ SRGA J161251.4\(-\)052100 & 1.0 & \(12^{+4}_{-3}\) & \(1.9^{+0.5}_{-0.5}\) & \(2.61^{+0.82}_{-0.65}\) & \(-\) & & & 80 & 73.0 \\ SRGA J161943.7\(-\)132609 & 1.5 & \(<3.5\) & \(0.9^{+0.5}_{-0.4}\) & \(2.49^{+0.97}_{-0.79}\) & \(-\) & & & 82 & 87.0 \\ SRGA J182109.8+765819 & 0.5 & \(34^{+16}_{-13}\) & \(1.1^{+0.8}_{-0.6}\) & \(1.55^{+0.46}_{-0.40}\) & \(-\) & & & 58 & 62.8 \\ SRGA J193707.6+660816 & 0.8 & \(0.32^{+0.14}_{-0.13}\) & \(2.33^{+0.10}_{-0.09}\) & \(1.24^{+0.13}_{-0.12}\) & \(-\) & & & 350 & 379.8 \\ SRGA J200331.2+701332 & 1.0 & \(2.2^{+0.4}_{-0.4}\) & \(2.00^{+0.15}_{-0.14}\) & \(1.56^{+0.23}_{-0.20}\) & \(-\) & & & 314 & 352.8 \\ SRGA J211149.5+722815 & 1.5 & \(8^{+5}_{-4}\) & \(1.2^{+0.5}_{-0.4}\) & \(1.34^{+0.40}_{-0.33}\) & \(-\) & & & 119 & 112.1 \\ & & \(14^{+5}_{-4}\) & \(1.6^{+0.5}_{-0.4}\) & \(1.23^{+0.37}_{-0.30}\) & \(0.46^{+0.25}_{-0.17}\) & \(0.7^{+0.6}_{-0.4}\) & \(\times 10^{-5}\) & 117 & 97.6 \\ \hline \end{tabular} \end{table} Table 3: X-ray spectral parameters in the optical spectrum (Fig. 5, Table 6). The Fraunhofer MgI and NaD absorption lines are also seen. The redshift was measured from the emission lines: \(z=0.01800\pm 0.00007\). The narrow-line flux ratio \(\lg([NII]\lambda 6584/H\alpha)\)=\(0.43\pm 0.10\) points to the presence of an active nucleus in the galaxy, according to the BPT diagram (see Fig. 2), while the absence of a broad component in the H\(\alpha\) line suggests that this is a Seyfert 2 (Sy2) galaxy. In principle, the absence of the [OIII]\(\lambda 5007\) and H\(\beta\) emission lines supposes that this is a LINER object, but this is highly unlikely, taking into account the object's high X-ray luminosity (\(\sim 3\times 10^{42}\) erg s\({}^{-1}\) in the 4-12 keV energy band). 
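For context, the BPT-based typing used throughout this section can be summarized by a small sketch that places the measured narrow-line ratios on the [NII]/Hα versus [OIII]/Hβ diagram. The demarcation curves below (Kauffmann et al. 2003 and Kewley et al. 2001) are the commonly used ones and are our assumption for illustration; the paper itself relies on the Osterbrock (1981) and Veron-Cetty et al. (2001) criteria, so this is a sketch rather than the exact classification rule applied above.

```python
# Sketch: classifying narrow-line ratios on the [NII]-BPT diagram (assumed demarcations).
def kauffmann03(log_nii_ha):
    return 0.61 / (log_nii_ha - 0.05) + 1.3   # empirical star-forming boundary

def kewley01(log_nii_ha):
    return 0.61 / (log_nii_ha - 0.47) + 1.19  # theoretical maximum-starburst line

def bpt_class(log_nii_ha, log_oiii_hb):
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann03(log_nii_ha):
        return "star-forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley01(log_nii_ha):
        return "composite"
    return "AGN (Seyfert/LINER)"

# Illustrative values only (not tied to a particular source in Table 6)
print(bpt_class(-0.60, -0.20))  # -> star-forming
print(bpt_class(-0.06, 1.25))   # -> AGN (Seyfert/LINER)
```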
The weakness of the emission lines in the optical spectrum probably stems from the fact that the active nucleus is observed through a thick layer of interstellar matter in the galaxy. Our X-ray spectrum modeling (Fig. 4) shows the presence of substantial absorption in the source, \(N_{\rm H}>5\times 10^{22}\) cm\({}^{-2}\), at 90% confidence (Fig. 3, Table 3). This is consistent with the weakness of the emission lines and may also be related mainly to the thick layer of interstellar matter in the edge-on galaxy and not with the dusty torus around the supermassive black hole. #### 3.3.2 SRGA J002240.8+804348 This X-ray source was discovered during the ROSAT all-sky survey (RASS, Boller et al. (2016)): 2RXS J002247.6+804418. There is the extended optical and infrared object WISEA J002243.69+804346.1 (Fig. 1) with a color \(W1-W2=0.61\) typical for AGNs in the ART-XC and eROSITA position error circles. Balmer emission lines, broad H\(\alpha\) and H\(\beta\), are observed in the galaxy's spectrum (Fig. 5, Table 6). The forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\) lines are also present. The redshift of the source is \(z=0.1147\pm 0.0013\). Against the background of the broad H\(\alpha\) line it is impossible to distinguish its narrow component and the narrow [NII]\(\lambda 6583\) line. In the case of H\(\beta\) we can set only an upper limit on the flux in the narrow component and, accordingly, a 2\(\sigma\)-lower limit on the flux ratio, \(\lg([OIII]\lambda 5007/H\beta)\)\(>0.6\). However, the presence of broad H\(\alpha\) and H\(\beta\) components allows us to say with confidence that this is a Seyfert 1 (Sy1) galaxy. In the X-ray spectrum there is evidence only for slight intrinsic absorption (\(N_{\rm H}\lesssim 10^{21}\) cm\({}^{-2}\)). #### 3.3.3 SRGA J010742.9+574419 This is a new X-ray source discovered in the first year of the SRG/ART-XCsky survey. There is the extended optical and infrared object WISEA J010743.11+574417.7 (Fig. 1) with a color \(W1-W2=0.78\) typical for AGNs in the ART-XC and eROSITA position error circles. A broad H\(\alpha\) line and narrow \(H\beta\), [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), H\(\alpha\), and [NII]\(\lambda 6583\) lines are observed in the optical spectrum (Fig. 5, Table 6). The redshift of the source is \(z=0.06992\pm 0.00030\). The line flux ratios \(\lg([OIII]\lambda 5007/H\beta)=0.50\pm 0.12\) and \(\lg([NII]\lambda 6584/H\alpha)=-0.70\pm 0.08\), according to the BPT diagram (Fig. 2), and the presence of a broad \(H\alpha\) component with \(FWHM>2000\) km s\({}^{-1}\) with the absence of a broad \(H\beta\) component allow the object to be classified as Sy1.9. On the BPT diagram the source falls into the region of galaxies with a composite spectrum most likely because we cannot reliably distinguish the narrow H\(\alpha\) and [NII]\(\lambda 6583\) lines for it. No significant absorption was revealed in the X-ray spectrum. #### 3.3.4 SRGA J021227.3+520953 This X-ray source was discovered in RASS: 2RXS J021225.5+521004. There is the extended optical and infrared object 2MASS J02122646+5209533 (Fig. 1) with a color \(W1-W2=0.89\) typical for AGNs in the ART-XC and eROSITA position error circles. Balmer emission lines with broad H\(\alpha\) and H\(\beta\) components and narrow forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [NII]\(\lambda 6548\), and [NII]\(\lambda 6583\) lines are seen in the optical spectrum (Fig. 5, Table 6). The measured redshift is \(z=0.23810\pm 0.00011\). 
The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(-0.39\pm 0.05\) and \(\lg([OIII]\lambda 5007/H\beta)\)\(=0.87\pm 0.11\) (Fig. 2) and the presence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy1. No significant absorption was revealed in the X-ray spectrum. #### 3.3.5 SRGA J025208.4+482955 This X-ray source was discovered in RASS: 2RXS J025208.8+482956. There is the extended optical and infrared object WISEA J025209.64+482959.4 (Fig. 1) with a color \(W1-W2=0.71\) typical for AGNs in the ART-XC and eROSITA position error circles. Emission lines are seen in the optical spectrum (Fig. 5, Table 6): broad H\(\alpha\), narrow H\(\alpha\) and H\(\beta\), and forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [OI]\(\lambda 6300\), [NII]\(\lambda 6548\), [NII]\(\lambda 6583\), [SII]\(\lambda 6716\), and [SII]\(\lambda 6730\). The measured redshift is \(z=0.03366\pm 0.00008\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(-0.06\pm 0.03\), \(\lg([OIII]\lambda 5007/H\beta)=1.25\pm 0.12\) (Fig. 2) and the presence of a broad H\(\alpha\) component allow the object to be classified as Sy1.9. Moderate absorption (\(N_{\rm H}\approx 3\times 10^{21}\) cm\({}^{-2}\)) was revealed in the X-ray spectrum. #### 3.3.6 SRGA J045432.1+524003 This is a new X-ray source discovered in the SRG/ART-XC survey. In the ART-XC and eROSITA position error circles there is the galaxy LEDA 16297 (Fig. 1) with redshift \(z=0.03123\) (SIMBAD) and color \(W1-W2=0.39\) with which the radio source NVSS J045432+524009 can also be associated. Emission lines are seen in the optical spectrum (Fig. 5, Table 6): broad and narrow H\(\alpha\), narrow H\(\beta\), and forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [OI]\(\lambda 6300\), [NII]\(\lambda 6548\), [NII]\(\lambda 6583\), [SII]\(\lambda 6716\) and [SII]\(\lambda 6730\). The measured redshift is \(0.03117\pm 0.00012\). Based on the narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(0.255\pm 0.020\), \(\lg([OIII]\lambda 5007/H\beta)=1.19\pm 0.13\) (Fig. 2) and the presence of a broad H\(\alpha\) component, the object can be classified as Sy1.9. Significant absorption (\(N_{\rm H}\sim 10^{22}\) cm\({}^{-2}\)) was revealed in the X-ray spectrum. #### 3.3.7 SRGA J051313.5+662747 This X-ray source is present in the 2SXPS catalog: 2SXPS J051316.0+ 662750. In the ART-XC and eROSITA position error circles there is the galaxy 2MASX J05131637+6627498 (Fig. 1) at redshift \(z\) = 0.01491 (SIBMAD) with a color \(W1-W2=0.50\) with which the radio source NVSS J051316+662801 can also be associated. Narrow Balmer H\(\beta\) and H\(\alpha\) emission lines and narrow forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [OI]\(\lambda 6300\), [NII]\(\lambda 6548\), [NII]\(\lambda 6583\), [SII]\(\lambda 6716\), and [SII]\(\lambda 6730\) lines are seen in the optical spectrum (Fig. 5, Table 6). The measured redshift is \(z\) = \(0.01479\pm 0.00008\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(-0.053\pm 0.011\), \(\lg([OIII]\lambda 5007/H\beta)\) = \(0.90\pm 0.04\) (Fig. 2) and the absence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy2. This is consistent with significant absorption (\(N_{\rm H}\sim 10^{22}\) cm\({}^{-2}\)) in the X-ray spectrum. #### 3.3.8 SRGA J110945.8+800815 This is a new X-ray source discovered in the SRG/ART-XC sky survey. There is the infrared and radio source WISEA J110943.77+800805.6 = NVSS J110944+800807 (Fig. 
1) with a color \(W1-W2=0.76\) typical for AGNs in the ART-XC and eROSITA position error circles. It should be noted that there is a star \(\sim\) 15 mag only 6\({}^{\prime\prime}\) away from this object (located at a distance \(\sim\) 1.5 kpc from the Sun, Gaia DR3, Gaia Collaboration et al., 2022). It lies at the boundary of the 98% eROSITA position error circle, and the possibility that it makes some contribution to the X-ray flux measured by ART-XC and eROSITA if it has an active corona or, for example, is a cataclysmic variable must not be ruled out. Narrow H\(\beta\) and H\(\alpha\) emission lines and narrow forbidden [OIII]\(\lambda 3727\), [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [OI]\(\lambda 6300\), [NII]\(\lambda 6548\), [NII]\(\lambda 6583\), [SII]\(\lambda 6716\), and [SII]\(\lambda 6730\) lines are seen in the optical spectrum (Fig. 5, Table 6). The measured redshift is \(z\) = \(0.18879\pm 0.00031\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(0.00\pm 0.05\), \(\lg([OIII]\lambda 5007/H\beta)\) = \(1.02\pm 0.14\) (Fig. 2) and the absence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy2. In spite of this, no statistically significant intrinsic absorption is revealed in the X-ray spectrum, while the upper limit on the absorption column density is \(N_{\rm H}<4\times 10^{21}\) cm\({}^{-2}\) at 90% confidence. At the same time, the power-law continuum is unusually hard for AGNs, with a slope \(\Gamma=0.7^{+0.5}_{-0.4}\). This may suggest that the spectrum of this source actually has a more complex shape, which is impossible to ascertain due to the insufficient number of photons in the spectrum being analyzed. #### 3.3.9 SRGA J161251.4\(-\)052100 This X-ray source was discovered in RASS-2RXS J161250.6\(-\)052118. In the ART-XC and eROSITA position error circles there is the galaxy LEDA 3097794 (Fig. 1) with a redshift \(z\) = 0.03054 (SIMBAD, based on the 6dF survey) and an infrared color \(W1-W2=0.78\) typical for AGNs. Narrow Balmer H\(\alpha\) and H\(\beta\) emission lines and narrow forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [NII]\(\lambda 6548\), and [NII]\(\lambda 6583\) lines are seen in the optical spectrum (Fig. 5, Table 6). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(0.26\pm 0.14\), \(\lg([OIII]\lambda 5007/H\beta)>0.8\) and the absence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy2. Significant X-ray absorption (\(N_{\rm H}\approx 3\times 10^{22}\) cm\({}^{-2}\)) is observed. #### 3.3.10 SRGA J161943.7\(-\)132609 This is a new X-ray source discovered in the SRG/ART-XC survey. In the ART-XC and eROSITA position error circles there is the galaxy 2MASX J16194407\(-\)1326166 (Fig. 1) with a redshift \(z\) = 0.07891 (SIMBAD, based on the 6dF survey) and an infrared color \(W1-W2=0.81\) typical for AGNs. A broad H\(\alpha\) emission line and narrow forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [NII]\(\lambda 6548\), and [NII]\(\lambda 6583\) lines are seen in the optical spectrum (Fig. 5, Table 6). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)\(>-0.7\), \(\lg([OIII]\lambda 5007/H\beta)\)\(>0.7\) (Fig. 2) and the presence of a broad H\(\alpha\) component allow the object to be classified as Sy1.9. No significant absorption was revealed in the X-ray spectrum. #### 3.3.11 SRGA J182109.8+765819 This is a new X-ray source discovered in the SRG/ART-XCsky survey. 
In the ART-XC and eROSITA position error circles there is the galaxy LEDA 2772547 (Fig. 1) with a color \(W1-W2=0.91\) typical for AGNs. The radio source VLASS1QLCIR J182111.52+765816. can also be associated with it. A narrow H\(\alpha\) emission line and narrow forbidden [OIII]\(\lambda 4959\), [OIII]\(\lambda 5007\), [OI]\(\lambda 6300\), [NII]\(\lambda 6548\), [NII]\(\lambda 6583\), [SII]\(\lambda 6716\), and [SII]\(\lambda 6730\) lines are seen in the optical spectrum (Fig. 5, Table 6). The Fraunhofer MgI and NaD, F absorption lines are also seen. The redshift was measured from the emission lines, \(z\) = \(0.0631\pm 0.0004\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)=\(0.14\pm 0.04\), \(\lg([OIII]\lambda 5007/H\beta)\)\(>0.9\) (Fig. 2) and the absence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy2. Significant X-ray absorption (\(N_{\rm H}\approx 3\times 10^{22}\) cm\({}^{-2}\)) is observed. #### 3.3.12 SRGA J193707.6+660816 This X-ray source was discovered in RASS (2RXS J193708.1+660821). In the ART-XC and eROSITA position error circles there is the optical and radio source 2MASS J19370820+6608213 = NVSS J193710+660830 (Fig. 1) with an infrared color \(W1-W2=0.64\) typical for AGNs. Balmer emission lines are seen in the optical spectrum (Fig. 5, Table 6): broadH\(\delta\), broad H\({}_{\gamma}\), broad and narrow H\(\beta\), broad and narrow H\(\alpha\). Narrow forbidden [OIII]\(\lambda\)4959, [OIII]\(\lambda\)5007, [NII]\(\lambda\)6548, [NII]\(\lambda\)6583, [SII]\(\lambda\)6716, and [SII]\(\lambda\)6730 lines are also observed. The measured redshift is \(z=0.07136\pm 0.00012\). The narrow-line flux ratios \(\lg([NII]\lambda\)6584/\(H\alpha)\)\(=\)\(-0.48\pm 0.04\), \(\lg([OIII]\lambda 5007/H\beta)=0.10\pm 0.09\) (Fig. 2) and the presence of broad H\(\alpha\), H\(\beta\), H\({}_{\gamma}\), and H\(\delta\) components with a typical line FWHM\(\approx\) 2000 km s\({}^{-1}\) allow the object to be classified as a narrow-line Seyfert 1 (NLSy1) galaxy. There may be slight absorption (\(N_{\rm H}\approx 3\times 10^{20}\) cm\({}^{-2}\)) in the X-ray spectrum. #### 3.3.13 SRGA J200331.2+701332 This X-ray source was discovered in RASS: 2RXS J200332.1+701331. It is also known as a hard X-ray source, SWIFT J2003.4+7023 (Oh et al. 2018). In ART-XC and eROSITA position error circles there is the optical object 2MASS J20033397+7013369 (Fig. 1) with a color \(W1-W2=0.89\) typical for AGNs. Broad H\(\beta\) and H\(\alpha\) emission lines and narrow forbidden [OIII]\(\lambda\)4959, [OIII]\(\lambda\)5007, [NII]\(\lambda\)6548 and [NII]\(\lambda\)6583 lines are observed in the optical spectrum (Fig. 5, Table 6). The measured redshift is \(z=0.09759\pm 0.00002\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)\(>0.23\), \(\lg([OIII]\lambda 5007/H\beta)\)\(>0.8\) (Fig. 2) and the presence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy1. Slight absorption (\(N_{\rm H}\approx 2\times 10^{21}\) cm\({}^{-2}\)) was revealed in the X-ray spectrum. #### 3.3.14 SRGA J211149.5+722815 This is a new X-ray source discovered in the SRG/ART-XC sky survey. In the ART-XC and eROSITA position error circles there is the optical and radio source WISEA J211151.78+722816.4 = NVSS J211152+722819 (Fig. 1) with an infrared color \(W1-W2=1.08\) typical for AGNs. 
A narrow H\(\alpha\) emission line and narrow forbidden [OIII]\(\lambda\)4959, [OIII]\(\lambda\)5007, [NII]\(\lambda\)6548, and [NII]\(\lambda\)6583 lines are observed in the optical spectrum (Fig. 5, Table 6). The measured redshift is \(0.10611\pm 0.00011\). The narrow-line flux ratios \(\lg([NII]\lambda 6584/H\alpha)\)\(=\)\(0.20\pm 0.07\), \(\lg([OIII]\lambda 5007/H\beta)>0.9\) (Fig. 2) and the absence of broad H\(\alpha\) and H\(\beta\) components allow the object to be classified as Sy2. Significant absorption (\(N_{\rm H}\sim 10^{22}\) cm\({}^{-2}\)) is present in the X-ray spectrum. ## 4 Properties of the AGN sample Table 4 presents basic characteristics of the identified AGNs: the optical type, the redshift, and the X-ray luminosity \(L_{\rm X}\). The latter was calculated using the single-component X-ray spectrum model from Table 3 in the 2-10 keV energy band5 (in the ob- served frame) and was corrected for Galactic and intrinsic absorption. Footnote 5: Taking into account the low redshifts of the objects, we do not make the \(k\)-correction. The X-ray luminosities of the objects vary in the range from \(\sim 10^{42}\) to \(\sim 10^{44}\) erg s\({}^{-1}\), typical for AGNs at the present epoch. According to the narrow-line flux ratios, \(\lg([NII]\lambda 6584/H\alpha)\) and \(\lg([OIII]\lambda 5007/H\beta)\), all sources fall into the region of Seyfert galaxies on the BPT diagram (Fig. 2), except for SRGA J001439.6+183503, SRGA J002240.8+804348, and SRGA J010742.9+574419. However, the high X-ray luminosity, the presence of broad hydrogen line components in SRGA J002240.8+804348 and SRGA J010742.9+574419, and the ratio \(\lg([NII]\lambda 6584/H\alpha)\)\(\approx 0.4\) in SRGA J001439.6+183503 point to the presence of active nuclei in these galaxies. In Fig. 3 the slope of the power-law continuum \(\Gamma\) is plotted against the intrinsic absorption column density \(N_{\rm H}\) for the objects being studied. Almost all of the slopes are close, within the error limits, to the canonical slope for AGNs, \(\Gamma\approx 1.8\). The slope is considerably larger for only one narrow-line Seyfert 1 galaxy in the sample, SRGA J193707.6+660816: \(\Gamma=2.33\pm 0.10\), typical for AGNs of this type (see, e.g., Brandt et al. 1997; Leighly 1999). Significant intrinsic absorption was revealed only in Seyfert 2 galaxies (Sy2 and Sy1.9). For four Seyfert 1 galaxies, including NLSy1 SRGA J193707.6+660816, we can estimate the masses of the central black holes from the luminosity and the width of the broad H\(\alpha\) emission line based on the well-known empirical relation (see Eq. (6) in Greene & Ho 2005) using the flux and the width of this line from Table 66. In addition, we can estimate the bolometric luminosities of these objects. For this purpose, we took the bolometric correction for the 2-10 keV energy band \(L_{\rm bol}/L_{\rm X}=11\) from Sazonov et al. (2012), which was obtained for a representative sample of Seyfert galaxies in the nearby Universe. It should be kept in mind that this correction has an uncertainty \(\sim 2\) that we ignore. Footnote 6: We do not make such estimates for Sy1.9 objects, since the H\(\alpha\) emission for them can be subject to significant intrinsic absorption. Table 5 gives the derived black hole masses, bolometric luminosities, and bolometric-to-Eddington luminosity ratios (\(\lambda_{\rm Edd}\)). The latter quantity characterizes the accretion regime. 
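To make the estimates behind Table 5 easier to follow, the sketch below evaluates the Hα-based black-hole mass and the Eddington ratio. The numerical form of Eq. (6) of Greene & Ho (2005) is quoted as it is commonly cited and should be treated as our assumption, as should the example input values, which are purely illustrative.

```python
# Sketch of the Table 5 estimates (assumed form of the relations; illustrative inputs).
def mbh_greene_ho(l_halpha_erg_s, fwhm_kms):
    # Greene & Ho (2005), Eq. (6), as commonly quoted:
    # M_BH = 2.0e6 (L_Halpha / 1e42 erg/s)^0.55 (FWHM_Halpha / 1e3 km/s)^2.06 M_sun
    return 2.0e6 * (l_halpha_erg_s / 1e42) ** 0.55 * (fwhm_kms / 1e3) ** 2.06

def eddington_ratio(l_x_2_10_erg_s, mbh_msun, bol_corr=11.0):
    l_bol = bol_corr * l_x_2_10_erg_s   # L_bol = 11 L_X (Sazonov et al. 2012)
    l_edd = 1.26e38 * mbh_msun          # Eddington luminosity, erg/s
    return l_bol / l_edd

# Illustrative input values (not taken from the paper's tables)
mbh = mbh_greene_ho(l_halpha_erg_s=3e42, fwhm_kms=4000.0)
lam_edd = eddington_ratio(l_x_2_10_erg_s=3e43, mbh_msun=mbh)
print(f"M_BH ~ {mbh:.2e} M_sun, lambda_Edd ~ {lam_edd:.3f}")
```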
The derived \(\lambda_{\rm Edd}\) vary from \(\sim\)1 to \(\sim\)10%, which, on the whole, is typical for Seyfert galaxies (see, e.g., Khorunzhev et al. 2012). \(L_{\rm bol}\) is the bolometric luminosity derived for a fixed bolometric correction \(L_{\rm bol}/L_{\rm X}=11\); \(\lambda_{\rm Edd}\) is the bolometric-to-Eddington luminosity ratio. The errors correspond to the 68% confidence interval. ## 5 Conclusions Using the observations carried out at the AZT-33IK telescope and the archival spectroscopic data from the 6dF survey, we managed to identify 14 new AGNs among the X-ray sources detected in the 4-12 keV energy band during the first five SRG/ART-XC all-sky surveys. All sources are also detected with confidence by eROSITA in the 0.2-8.0 keV energy band. All objects turned out to be nearby (\(z=0.015-0.238\)) Seyfert galaxies (one NLSy1, three Sy1, four Sy1.9, and six Sy2). For all objects we constructed broadband (0.2-12 keV) X-ray spectra based on data from the ART-XC and eROSITA telescopes onboard the SRG observatory. In four objects the intrinsic absorption exceeds \(N_{\rm H}>10^{22}\) cm\({}^{-2}\) at 90% confidence, and one of them (SRGA J001439.6+183503) is probably heavily obscured (\(N_{\rm H}>5\times 10^{22}\) cm\({}^{-2}\) with 90% confidence). Interestingly, in the latter case the absorption can be mainly associated not with the dusty torus around the central supermassive black hole, but with the great thickness of the interstellar medium of an edge-on galaxy. This paper continues our series of publications on the \begin{table} \begin{tabular}{c c c c c} \hline \hline \(N\) & Object & Optical type & \(z^{1}\) & \(\log L_{\rm X}^{2}\) \\ \hline 1 & SRGA J001439.6+183503 & Sy2 & \(0.01800\pm 0.00007\) & \(42.58^{+0.27}_{-0.19}\) \\ 2 & SRGA J002240.8+804348 & Sy1 & \(0.11470\pm 0.00130\) & \(44.03^{+0.07}_{-0.07}\) \\ 3 & SRGA J010742.9+574419 & Sy1.9 & \(0.06992\pm 0.00030\) & \(43.17^{+0.14}_{-0.16}\) \\ 4 & SRGA J021227.3+520953 & Sy1 & \(0.23810\pm 0.00011\) & \(44.21^{+0.08}_{-0.09}\) \\ 5 & SRGA J025208.4+482955 & Sy1.9 & \(0.03366\pm 0.00008\) & \(42.73^{+0.13}_{-0.14}\) \\ 6 & SRGA J045432.1+524003 & Sy1.9 & \(0.03117\pm 0.00012\) & \(43.18^{+0.08}_{-0.09}\) \\ 7 & SRGA J051313.5+662747 & Sy2 & \(0.01479\pm 0.00008\) & \(42.21^{+0.12}_{-0.13}\) \\ 8 & SRGA J110945.8+800815 & Sy2 & \(0.18879\pm 0.00031\) & \(44.18^{+0.17}_{-0.20}\) \\ 9 & SRGA J161251.4\(-\)052100\({}^{+}\) & Sy2 & \(0.03055\) & \(42.75^{+0.12}_{-0.13}\) \\ 10 & SRGA J161943.7\(-\)132609\({}^{+}\) & Sy1.9 & \(0.07891\) & \(43.58^{+0.14}_{-0.17}\) \\ 11 & SRGA J182109.8+765819 & Sy2 & \(0.06310\pm 0.00040\) & \(43.17^{+0.11}_{-0.13}\) \\ 12 & SRGA J193707.6+660816 & NLSy1 & \(0.07136\pm 0.00012\) & \(43.19^{+0.04}_{-0.05}\) \\ 13 & SRGA J200331.2+701332 & Sy1 & \(0.09759\pm 0.00002\) & \(43.58^{+0.06}_{-0.06}\) \\ 14 & SRGA J211149.5+722815 & Sy2 & \(0.10611\pm 0.00011\) & \(43.59^{+0.11}_{-0.12}\) \\ \hline \end{tabular} \({}^{1}\) The redshifts were measured from emission lines. \({}^{2}\) The absorption-corrected luminosity in the 2–10 keV energy band in erg s\({}^{-1}\). \({}^{+}\) The redshifts were taken from the 6dFcatalog. The error corresponds to the 68% confidence interval for the redshift and 90% for the luminosity without including the error in \(z\). 
\end{table} Table 4: Properties of the AGNs whose spectra were obtained as a result of the AZT-33IK observations and from the archival 6dF spectra \begin{table} \begin{tabular}{l c c c} \hline \hline Object & BH mass, \(10^{8}M_{\odot}\) & \(L_{\rm bol}\), \(10^{44}\) erg s\({}^{-1}\) & \(\lambda_{\rm Edd}\) \\ \hline SRGAJ002240.8+804348 & \(2.6\pm 0.6\) & \(12\pm 2\) & \(0.034\pm 0.009\) \\ SRGAJ021227.3+520953 & \(1.4\pm 0.3\) & \(18\pm 4\) & \(0.10\pm 0.03\) \\ SRGAJ193707.6+660816 & \(0.12\pm 0.02\) & \(1.7\pm 0.2\) & \(0.11\pm 0.02\) \\ SRGAJ200331.2+701332 & \(2.2\pm 0.5\) & \(4.1\pm 0.6\) & \(0.014\pm 0.004\) \\ \hline \end{tabular} \(L_{\rm bol}\) is the bolometric luminosity derived for a fixed bolometric correction \(L_{\rm bol}/L_{\rm X}=11\); \(\lambda_{\rm Edd}\) is the bolometric-to-Eddington luminosity ratio. The errors correspond to the 68% confidence interval. \end{table} Table 5: The masses, bolometric luminosities, and Eddington ratios for the central black holes in Sy1 and NLSy1 galaxies optical identification of X-ray sources detected during the SRG/ART-XC all sky survey. The result obtained will help to obtain a large (\(\sim\)2000 objects), statistically complete sample of AGNs selected by their emission in the hard 4-12 keV X-ray energy band on completion of the planned eight sky surveys. ## Acknowledgements This work was supported by RSF grant no. 19-12-00396. The measurements with the AZT-33IK telescope were supported by the Ministry of Education and Science of Russia and were obtained using the equipment of the Angara sharing center7. In this study we used observational data from the ART-XC and eROSITA telescopes onboard the SRG observatory. The SRG observatory was built by Roskosmos in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI) within the framework of the Russian Federal Space Program, with the participation of the Deutsches Zentrum fur Luft- und Raumfahrt (DLR). The SRG spacecraft was designed, built, launched, and is operated by the Lavochkin Association and its subcontractors. The science data are downlinked via the Deep Space Network Antennae in Bear Lakes, Ussurivsky, and Baykonur, funded by Roskos- mos. The eROSITA X-ray telescope was built by a consortium of German Institutes led by MPE, and supported by DLR. The eROSITA data used in this work were processed using the eSASS software developed by the German eROSITA consortium and the proprietary data reduction and analysis software developed by the Russian eROSITA Consortium. Footnote 7: [http://ckp-rf.ru/ckp/3056/](http://ckp-rf.ru/ckp/3056/)
2306.13500
Cascade Subspace Clustering for Outlier Detection
Many methods based on sparse and low-rank representation have been developed along with guarantees of correct outlier detection. Self-representation states that a point in a subspace can always be expressed as a linear combination of other points in the subspace. A suitable Markov chain can be defined on the self-representation, and it allows us to recognize the difference between inliers and outliers. However, the reconstruction error of the self-representation, which is still informative for outlier detection, is neglected. Inspired by gradient boosting, in this paper we propose a new outlier detection framework that combines a series of weak "outlier detectors" into a single strong one in an iterative fashion by constructing a multi-pass self-representation. At each stage, we construct a self-representation based on the elastic net and define a suitable Markov chain on it to detect outliers. The residual of the self-representation is used at the next stage to learn the next weak outlier detector. This stage is repeated many times, and the final decision on outliers is generated from the results of all previous stages. Experimental results on image and speaker datasets demonstrate its superiority with respect to state-of-the-art sparse and low-rank outlier detection methods.
Qi Yang, Hao Zhu
2023-06-23T13:48:08Z
http://arxiv.org/abs/2306.13500v1
# Cascaded Self-Representation for Outlier Detection
###### Abstract
Many methods based on sparse and low-rank representation have been developed along with guarantees of correct outlier detection. Self-representation states that a point in a subspace can always be expressed as a linear combination of other points in the subspace. A suitable Markov chain can be defined on the self-representation, and it allows us to recognize the difference between inliers and outliers. However, the reconstruction error of the self-representation, which is still informative for outlier detection, is neglected. Inspired by gradient boosting, in this paper we propose a new outlier detection framework that combines a series of weak "outlier detectors" into a single strong one in an iterative fashion by constructing a multi-pass self-representation. At each stage, we construct a self-representation based on the elastic net and define a suitable Markov chain on it to detect outliers. The residual of the self-representation is used at the next stage to learn the next weak outlier detector. This stage is repeated many times, and the final decision on outliers is generated from the results of all previous stages. Experimental results on image and speaker datasets demonstrate its superiority with respect to state-of-the-art sparse and low-rank outlier detection methods.
Qi Yang, Hao Zhu, Vector Lab, JD Finance, Beijing
Outlier Detection, Self-Representation, Sparse Coding
## 1 Introduction
Outlier detection, also called anomaly detection, identifies unusual data patterns that differ from the majority of the data. These unexpected patterns are commonly called anomalies or outliers; the ability to detect them is of significant practical value, since anomalies often provide useful information in various application domains, such as intrusion detection, fraud detection, fault detection, suspicious transaction detection, and abnormal moving activity detection. Anomaly detection methods can be roughly divided into three categories: statistical, algebraic, and self-representation based. RANdom SAmple Consensus (RANSAC) [1], as a statistical method for outlier detection, iteratively employs a sampling strategy to find a subspace that fits as many data points as possible. In this process, data points are removed from the dataset to find a new subspace until a given threshold on the fraction of inliers is reached. Although it is theoretically able to find the correct subspace in noisy data even in the presence of outliers, the computational cost is still a challenge. Its variants, such as [2] and [3], face a similar problem. Algebraic methods have been proposed to robustly learn the subspaces by penalizing the sum of unsquared distances of points to the closest subspace [4, 5], in contrast to Principal Component Analysis (PCA), which minimizes the sum of squared errors. They are thus robust to outliers because they reduce the contributions of the large residuals arising from outliers. However, the optimization problem is usually nonconvex, and a good initialization is extremely important for finding the optimal solution. Recently, the PCA problem has been solved with robustness to corrupted entries [6], which has led to many recent methods for PCA with robustness to outliers. A prominent advantage of convex optimization techniques is that they are guaranteed to correctly identify outliers under certain conditions.
Nonetheless, these methods typically model a unique inlier subspace, e.g., by a low-rank constraint in Outlier Pursuit [7], and therefore cannot deal with multiple inlier subspaces, since the union of multiple subspaces could be high-dimensional. An alternative to algebraic approaches is the family of methods that exploit the self-expressiveness of subspaces. In most of these approaches, the goal becomes finding a clustering based on a suitably defined affinity matrix, typically obtained by solving a sparsification [8] or rank minimization problem [9]. These methods work well in the presence of noise and offer theoretical recovery guarantees. On the other hand, handling outliers requires augmenting the objective function with additional regularization terms, whose parameters usually must be hand-tuned to obtain good performance, and the recovery guarantees are typically lost. Based on self-representation, [10] designs a random walk process and identifies outliers as those points whose probabilities tend to zero, which is parameter-free. However, the residual of the self-representation, which is neglected there, is still informative and contains clues that can make outlier detection better. In this paper, we introduce a novel cascade architecture that extends random walk based outlier detection to hierarchical self-representations in a principled way. To exploit the information contained in the reconstruction error of the linear combination, we present a multi-stage cascade framework for self-representation based on sparse coding with the Elastic Net [11]. The residuals at a layer are computed as the difference between the original input and the aggregated reconstructions of the previous layers, and the residuals at a layer can then be expressed with a self-representation just like the original inputs. The multiple self-representations can be used to detect outliers with a random walk based method, and the final decision on outliers is fused by a linear combination of the results from the different layers. ## 2 Related Work Motivated by the observation that outliers do not have sparse representations, [12] declares a point an outlier if the \(\ell 1\) norm of its representation is above a threshold. However, this \(\ell 1\)-thresholding strategy is not robust enough to discriminate outliers, since their representation vectors may still have small \(\ell 1\)-norms. Low-Rank Representation (LRR) [9] employs a sparsity assumption (i.e., the \(\ell_{2,1}\)-norm) on the reconstruction error of the low-rank self-representation to learn a robust model, and the column-sparse matrix rather than the low-rank matrix can be used to indicate the outliers. In [13], a graph-based outlier detection framework is proposed. The main idea is to represent the underlying dataset as a weighted undirected graph and then use a random walk on the graph to find outliers. The random walk model is designed to find nodes that are most "central" to the graph, so the outliers, which are far from the "center", can be efficiently detected. [10] extends graph construction from kernel based similarity to sparse self-representation, in which inliers express themselves using only other inliers when they lie in a union of low dimensional subspaces. To avoid non-convergence caused by the directed graph, the authors choose to calculate the average of the first \(T\) t-step probability distributions. ## 3 Method In this section, we present the cascaded self-representation based outlier detection method shown in Fig. 1.
We first describe the data self-representation and a random walk algorithm on the graph induced by the representation, which identifies the sets of inliers and outliers. Then, we introduce a cascaded architecture that calculates a self-representation from the resulting residuals of the previous layers. Finally, we ensemble all the results into one set of scores to identify the inliers and outliers. ### Elastic Net based Self-Representation Given unlabeled data points \(\{X\}_{i=1,...,N}\) drawn from multiple linear subspaces \(\{S\}_{i=1,...,K}\), one can express a point in a subspace as a linear combination of other points in the same subspace; the data contains both inliers and outliers. We obtain the self-representation coefficient matrix from the optimization problem: \[\min_{C}\frac{\gamma}{2}\|X-XC\|_{2}^{2}+\lambda\|C\|_{1}+\frac{1-\lambda}{2}\|C\|_{2}^{2}\quad s.t.\;C_{ii}=0 \tag{1}\] where \(C\) is the self-representation coefficient matrix, \(\gamma>0\) and \(\lambda\in[0,1)\). The diagonal constraint on \(C\) prevents trivial solutions for sparsity-inducing norms such as the \(\ell 1\) norm. Compared with the \(\ell 1\) norm constraint alone, a mixture of \(\ell 1\) and \(\ell 2\) norms is able to balance the subspace-preserving and connectedness properties. However, the residual term \(\|X-XC\|_{2}^{2}\), also called the reconstruction error, is ignored by the subsequent random walk based outlier detection. ### Graph-based Outlier Detection With a self-representation \(C\), we build the directed weighted graph \(G\): the vertices of \(G\) correspond to the data points \(X\), and the edges are given by the (weighted) adjacency matrix \(A=|C|^{T}\). In this graph \(G\), inliers only have connections with other inliers, while outliers have edges to both inliers and outliers, so we can use a random walk to detect the outliers. The transition probability from \(x_{i}\) to \(x_{j}\) at the next step is given by \(p_{ij}=a_{ij}/d_{i}\) with \(d_{i}=\sum_{j}a_{ij}\), where \(p_{ij}\in P\) and \(a_{ij}\in A\). By this definition, if the starting point of a random walk is an inlier, then it will never escape the set of inliers, as there is no edge going from any inlier to any outlier. In contrast, a random walk starting from an outlier will likely end up in an inlier state, since once it enters any inlier it will never return to an outlier state. Thus, by using different data points to initialize random walks, outliers can be identified by observing the final probability distribution of the state of the random walks. A uniform distribution \(\pi^{(0)}\) = [\(\frac{1}{N}\),...,\(\frac{1}{N}\)] is defined as the initial probability distribution, and we define the formula: \[\hat{\pi}^{(T)}=\frac{1}{T}\sum_{t=1}^{T}\pi^{(0)}P^{t} \tag{2}\] where averaging over the t-step distributions ensures convergence. It is expected that eventually the inliers have high-probability states and the outliers have low probabilities, so we can use the threshold value \(\epsilon\) to identify the outliers as the points with \(\hat{\pi}^{(T)}\leq\epsilon\). Figure 1: The Framework of Cascaded Self-Representation for Outlier Detection ### Outlier Detection by Cascaded Self-Representation As mentioned above, self-representation based outlier detection methods often formulate the problem using a linear model regularized by \(\ell 1\) and \(\ell 2\)-norms, but the reconstruction error is ignored, which hinders exploiting the self-representation to its full potential.
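Taken together, Eqs. (1)-(2) suggest a compact implementation. The sketch below is our own illustration, not the authors' Matlab code: it solves the elastic-net program column by column with scikit-learn's `ElasticNet` (the mapping between \((\gamma,\lambda)\) and `alpha`/`l1_ratio` is our reading of the two objectives) and then scores every point by the averaged random walk of Eq. (2). The reconstruction error \(X-XC\) that this step leaves behind is exactly what the cascade described next will reuse.

```python
# Minimal sketch (ours, not the authors' Matlab code) of the two building blocks:
# the elastic-net self-representation of Eq. (1) and the averaged random walk of Eq. (2).
import numpy as np
from sklearn.linear_model import ElasticNet

def self_representation(X, gamma=50.0, lam=0.9):
    """X: (d, N) matrix whose columns are data points; returns C with zero diagonal.
    Column j of C solves Eq. (1) restricted to that column; with scikit-learn's
    objective, alpha = 1/(gamma*d) and l1_ratio = lam reproduce Eq. (1) up to a
    global scale (this mapping is our reading of the two objectives)."""
    d, N = X.shape
    C = np.zeros((N, N))
    idx = np.arange(N)
    for j in range(N):
        keep = idx != j                     # enforce the constraint C_jj = 0
        model = ElasticNet(alpha=1.0 / (gamma * d), l1_ratio=lam,
                           fit_intercept=False, max_iter=5000)
        model.fit(X[:, keep], X[:, j])
        C[keep, j] = model.coef_
    return C

def random_walk_scores(C, T=1000, pi0=None):
    """Averaged T-step state distribution of Eq. (2); low values indicate outliers.
    pi0 defaults to the uniform distribution; the cascade later reuses this argument."""
    A = np.abs(C).T                         # adjacency matrix A = |C|^T
    d = A.sum(axis=1, keepdims=True)
    d[d == 0] = 1.0                         # guard against isolated nodes
    P = A / d                               # row-stochastic transition matrix
    N = P.shape[0]
    pi = np.full(N, 1.0 / N) if pi0 is None else np.asarray(pi0, dtype=float)
    pi = pi / pi.sum()
    avg, cur = np.zeros(N), pi.copy()
    for _ in range(T):
        cur = cur @ P
        avg += cur
    return avg / T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inliers = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 80))   # points near a 3-d subspace
    outliers = rng.normal(size=(50, 10))                            # isotropic outliers
    X = np.hstack([inliers, outliers])
    X /= np.linalg.norm(X, axis=0, keepdims=True)
    scores = random_walk_scores(self_representation(X), T=200)
    print(scores[:80].mean(), scores[80:].mean())                   # inliers should score higher
```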
In comparison, our approach exploits a recursive scheme in which the self-representation at each stage encodes the residuals left by the previous stages. In a single stage, we represent the residual as a linear combination of other points, exactly as in the single-stage case. Let \(\hat{X}^{n}\) denote the estimate produced at the \(n\)-th stage and \(X\) denote the unlabeled data points; then the overall process can be described as: \[X=\hat{X}^{0}+\hat{X}^{1}+...+\hat{X}^{n}\quad s.t.\ \hat{X}^{i}=(X-\sum_{j=0}^{i-1}\hat{X}^{j})C^{i},\ C^{i}_{jj}=0 \tag{3}\] where \(\hat{X}^{0}\) is a matrix with zero entries, introduced so that Eq. 3 can be written compactly. The flow diagram of our cascade framework is shown in Fig. 1 and the algorithm is given in Alg. 1. Given the data \(X\), we first obtain the self-representation \(C^{1}\) and the transition probabilities, then run Eq. 2 on them to get the outlier score \(S^{1}\); the estimate \(\hat{X}^{1}\) is given by \(\hat{X}^{1}=XC^{1}\). We then use the residual \(X-\hat{X}^{1}\) to perform the self-representation and random walk again and obtain the outlier score \(S^{2}\). Note that the outlier score \(S^{1}\) is obtained from a random walk whose initial probability distribution is \(\pi^{(0)}\), whereas the outlier score \(S^{2}\) is obtained from the same process with initial probability distribution \(S^{1}\). The rest is done in the same manner to obtain \(C^{2}\), \(C^{3}\),...,\(C^{n}\), the residuals \(X-\sum_{j=0}^{i-1}\hat{X}^{j}\), and the scores \(S^{3}\), \(S^{4}\),...,\(S^{n}\). After all stages are finished, the scores \(S^{1}\),...,\(S^{n}\) are fused into the final outlier score \(S\), and the outliers are identified by \(S\leq\epsilon\). ``` Input: Unlabeled data points \(X\), the threshold value \(\epsilon\) Output: Outlier score \(S\) of each point and Outliers by \(\epsilon\) 1: Initial probability distribution \(\pi^{(0)}=[\frac{1}{N},...,\frac{1}{N}]\) 2:for\(i=1\) to \(n\)do 3:if\(i=1\)then 4:\(C^{1}\leftarrow\) solved by Eq. 1 5: Outlier Score \(S^{1}\leftarrow\) solved by Eq. 2 with \(\pi^{(0)}\) 6:\(\hat{X}^{1}\leftarrow\) solved by Eq. 3 7:endif 8: Update probability distribution = Outlier Score \(S^{(i-1)}\) 9:\(C^{i}\leftarrow\) solved by Eq. 1 with residuals \(X-\sum_{j=0}^{i-1}\hat{X}^{j}\) 10: Outlier Score \(S^{i}\leftarrow\) solved by Eq. 2 with \(S^{(i-1)}\) 11:\(\hat{X}^{i}\leftarrow\) solved by Eq. 3 12:endfor 13: Score fusion result \(S\leftarrow\) from \(S^{1},S^{2},..,S^{n}\) 14: Identify the Outliers if \(S\leq\epsilon\) 15:return Outlier score \(S\) and Outliers ``` **Algorithm 1**Outlier Detection by Cascaded Self-Representation ## 4 Experiments In this section, we use three different image datasets and one speaker dataset to evaluate the proposed method for outlier detection. In particular, we analyze the performance in detail and compare with state-of-the-art techniques. ### Experimental setup We implement our method in Matlab and evaluate it on four publicly available databases, i.e., the Extended Yale B [14], the Caltech-256 [15], the Coil-100 object image dataset [16] and the TIMIT Small dataset [17]. For our method we set the number of iterations \(T\) to 1000, and the number of stages to 3, since more stages do not contribute significantly better performance. We compare with 3 other representative methods that are designed for detecting outliers in one or multiple subspaces: LRR [9], \(\ell 1\)-thresholding [12] and R-graph [10].
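Before turning to the comparisons, we note that the stage-wise loop of Algorithm 1 can be written compactly on top of the two routines sketched earlier. This is again our own illustration rather than the authors' Matlab implementation: it assumes `self_representation` and `random_walk_scores` from the previous sketch are in scope, and the plain average used for score fusion is our assumption, since the fusion rule is not spelled out above.

```python
# Sketch of Algorithm 1 (ours); assumes self_representation() and random_walk_scores()
# from the previous sketch are importable or defined in the same module.
import numpy as np

def cascaded_outlier_scores(X, n_stages=3, T=1000, gamma=50.0, lam=0.9):
    """Cascaded self-representation; returns the fused score S (low = outlier)."""
    N = X.shape[1]
    residual = X.copy()                   # X - sum_j X_hat^j, with X_hat^0 = 0
    prev_scores = np.full(N, 1.0 / N)     # pi^(0), the uniform start of stage 1
    stage_scores = []
    for _ in range(n_stages):
        C = self_representation(residual, gamma=gamma, lam=lam)    # Eq. (1) on the residual
        S_i = random_walk_scores(C, T=T, pi0=prev_scores)          # Eq. (2) seeded with S^(i-1)
        stage_scores.append(S_i)
        residual = residual - residual @ C                         # Eq. (3): update the residual
        prev_scores = S_i
    S = np.mean(stage_scores, axis=0)     # score fusion (simple average, assumed here)
    return S, stage_scores

def detect_outliers(S, eps):
    """Flag the points whose fused score is at or below the threshold epsilon."""
    return np.where(S <= eps)[0]
```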
All other methods are implemented according to the descriptions in their respective papers. We call our method Outlier Detection by Cascaded Self-Representation (ODCSR) in this section. For each outlier detection method, we use two metrics to evaluate its performance. One is the area under the curve (AUC) of the ROC, the other is the F1-score, which is the harmonic mean of precision and recall. Notice that a numerical value for each data point that indicates its "outlierness" and a threshold value for determining inliers and outliers are required. ### Extended Yale B Dataset The Extended Yale B dataset is a popular benchmark for subspace clustering. It contains frontal face images of 38 individuals, each under 64 different illumination conditions. Following the experimental setup of [8], we down-sampled the original face images from \(192\times 168\) to \(42\times 42\) pixels, which makes the experiments computationally feasible for the baselines. We designed a series of experiments to test the ability of the methods to deal with multiple inlier groups. For example, we randomly choose 1 or 3 individuals from the 38 subjects and use all 64 images of these subjects as the inliers. The images from the remaining 37 or 35 subjects are then used as outliers, with at most one image from each subject, so that the overall dataset has 35% or 15% outliers. The results of this experiment are reported in Table 1. The last column contains the results of our method; note that in each row the best result is typeset in bold. We can see that our cascade subspace clustering method performs best. ### Caltech-256 Dataset Caltech-256 contains 256 object categories with a total of 30,607 images, and each category has at least 80 images. In our experiments, images in Caltech-256 are represented by a 4,096-dimensional feature vector extracted from the last fully connected layer of the 16-layer VGG network [18]. As above, the first 150 images in each of \(n\in\{1,3,5\}\) randomly chosen categories are used as inliers, and we randomly pick a certain number of images from the "clutter" category as outliers such that there are 50% outliers in each experiment. The results of this experiment are reported in Table 2. The last column contains the results of our method; note that in each row the best result is typeset in bold, and our method performs best. ### Coil-100 Dataset The Coil-100 dataset contains 7,200 images of 100 different objects. Each object has 72 images taken at pose intervals of 5 degrees, with the images being of size \(32\times 32\). For Coil-100, we randomly pick \(n\in\{1,4,7\}\) categories as inliers and pick at most one image from each of the remaining categories as outliers. The results of this experiment are reported in Table 3, where we again have the best performance. ### TIMIT Small Dataset The TIMIT dataset is composed of 6,300 phrases (10 phrases per speaker), spoken by 438 males (70%) and 192 females (30%). In our experiment, we used the same 40-speaker subset as reported in earlier work (here called TIMIT Small). We randomly pick \(n\in\{3,5,7\}\) speakers as inliers and pick at most one phrase from each of the remaining speakers as outliers. The results of this experiment are reported in Table 4, and we can see that our method is still the best. ## 5 Conclusion In this paper, we have proposed a general framework for outlier detection in the manner of cascaded subspace clustering.
Our architecture consists of multiple stages, each of which performs random walk outlier detection based on a self-representation of the residual of the previous stages. Eventually, the results of the different stages are fused to make the final decision on the outliers. Our experiments have demonstrated that our cascade method provides significant improvement over state-of-the-art outlier detection solutions in terms of AUC and F1 on several datasets. \begin{table} \begin{tabular}{c c c c c} \hline \hline & LRR & \(\ell 1\)-thresholding & R-graph & **ODCSR(ours)** \\ \hline \multicolumn{5}{c}{_Inliers:from **one** subject, Outliers:35\% from other subjects_} \\ \hline AUC & 0.857 & 0.844 & 0.986 & **0.990** \\ F1 & 0.797 & 0.763 & 0.951 & **0.956** \\ \hline \multicolumn{5}{c}{_Inliers:from **three** subjects, Outliers:15\% from other subjects_} \\ \hline AUC & 0.807 & 0.848 & 0.985 & **0.986** \\ F1 & 0.509 & 0.545 & 0.878 & **0.886** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Extended Yale B dataset \begin{table} \begin{tabular}{c c c c c} \hline \hline & LRR & \(\ell 1\)-thresholding & R-graph & **ODCSR(ours)** \\ \hline \multicolumn{5}{c}{_Inliers:from **one** category, Outliers:50\% from 257-clutter_} \\ \hline AUC & 0.907 & 0.772 & 0.948 & **0.983** \\ F1 & 0.893 & 0.772 & 0.914 & **0.946** \\ \hline \multicolumn{5}{c}{_Inliers:from **three** categories, Outliers:50\% from 257-clutter_} \\ \hline AUC & 0.479 & 0.810 & 0.929 & **0.984** \\ F1 & 0.671 & 0.782 & 0.880 & **0.947** \\ \hline \multicolumn{5}{c}{_Inliers:from **five** categories, Outliers:50\% from 257-clutter_} \\ \hline AUC & 0.337 & 0.774 & 0.913 & **0.984** \\ F1 & 0.667 & 0.762 & 0.858 & **0.952** \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the Caltech-256 dataset \begin{table} \begin{tabular}{c c c c c} \hline \hline & LRR & \(\ell 1\)-thresholding & R-graph & **ODCSR(ours)** \\ \hline \multicolumn{5}{c}{_Inliers:from **one** subject, Outliers:35\% from other subjects_} \\ \hline AUC & 0.847 & 0.991 & 0.997 & **0.999** \\ F1 & 0.872 & 0.978 & 0.900 & **0.995** \\ \hline \multicolumn{5}{c}{_Inliers:from **four** subjects, Outliers:25\% from other subjects_} \\ \hline AUC & 0.687 & 0.992 & 0.996 & **0.998** \\ F1 & 0.541 & 0.941 & 0.970 & **0.981** \\ \hline \multicolumn{5}{c}{_Inliers:from **seven** subjects, Outliers:15\% from other subjects_} \\ \hline AUC & 0.628 & 0.991 & 0.996 & **0.997** \\ F1 & 0.366 & 0.897 & 0.955 & **0.963** \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the Coil-100 dataset
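As a pointer to how the AUC and F1 numbers in Tables 1-3 can be obtained from the fused scores, the small snippet below is a sketch of ours (assuming scikit-learn is available; it is not part of the authors' Matlab evaluation code). Outliers are treated as the positive class, so the negated score serves as the ranking statistic for the ROC, while the threshold \(\epsilon\) yields the binary predictions for the F1-score.

```python
# Evaluation sketch (ours, assuming scikit-learn): outliers are the positive class
# and a LOW fused score S means "more outlier-like".
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(S, is_outlier, eps):
    """S: fused scores, is_outlier: boolean ground truth, eps: decision threshold."""
    auc = roc_auc_score(is_outlier, -np.asarray(S))   # rank by -S so outliers come first
    pred = (np.asarray(S) <= eps).astype(int)         # threshold rule of Algorithm 1
    return auc, f1_score(np.asarray(is_outlier).astype(int), pred)
```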
2310.10937
Tracking quantum clouds expansion in tunneling ionization
We study formation and evolution of the electron wave-packets in the process of strong field ionization of various atomic targets. Our study is based on reformulating the problem in terms of conditional amplitudes, i.e., the amplitudes describing outcomes of measurements of different observables provided that the electron is found in the ionized state after the end of the pulse. By choosing the electron coordinate as such an observable, we were able to define unambiguously the notion of the ionized wave-packets and to study their formation and spread. We show that the evolution of the ionized wave packets obtained in this way follows closely the classical trajectories at the initial stages of evolution providing an {\it ab initio} quantum-mechanical confirmation of the basic premises of the Classical Monte Carlo Calculations approach. At the later stages of evolution the picture becomes more complicated due to the wave packets' spread and due to interference of wave packets originating from different field maxima. Our approach also allowed us to obtain information about the coordinate and velocity electron distributions at the tunnel exit.
I. A. Ivanov, A. S. Kheifets, Kyung Taec Kim
2023-10-17T02:19:10Z
http://arxiv.org/abs/2310.10937v1
# Tracking quantum clouds expansion in tunneling ionization ###### Abstract We study formation and evolution of the electron wave-packets in the process of strong field ionization of various atomic targets. Our study is based on reformulating the problem in terms of conditional amplitudes, i.e., the amplitudes describing outcomes of measurements of different observables provided that the electron is found in the ionized state after the end of the pulse. By choosing the electron coordinate as such an observable, we were able to define unambiguously the notion of the ionized wave-packets and to study their formation and spread. We show that the evolution of the ionized wave packets obtained in this way follows closely the classical trajectories at the initial stages of evolution providing an _ab initio_ quantum-mechanical confirmation of the basic premises of the Classical Monte Carlo Calculations approach. At the later stages of evolution the picture becomes more complicated due to the wave packets' spread and due to interference of wave packets originating from different field maxima. Our approach also allowed us to obtain information about the coordinate and velocity electron distributions at the tunnel exit. ## Introduction The notion of an electron trajectory proved itself extremely useful for the qualitative and, in many cases, quantitative description of various ionization phenomena. Even the simplest picture of classical electron motion in the field of an electromagnetic wave, the well-known simple man model (SMM) and its predecessors [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12], provides a basis for understanding many ionization phenomena, such as above-threshold ionization (ATI) and high harmonic generation (HHG). The spectacular predictive power of the SMM gave rise to a variety of techniques in which the ionization is described quantum-mechanically and the subsequent electron motion is treated classically or semi-classically (classical trajectory Monte Carlo or CTMC method) [11; 12; 13; 14; 15; 16; 17; 18]. In these approaches the role of quantum-mechanics (QM) consists in setting up initial conditions for the subsequent classical or semi-classical electron motion, and to assigning statistical weights to the trajectories originating at different times by employing the notion of the instantaneous ionization rate (IIR). Various analytical expressions for the IIR obtained in the framework of the quantum-mechanical strong field approximation (SFA) approach and its subsequent developments [19; 20; 21; 22; 23; 24; 25] can be used for this purpose, such as the Ammosov-Delone-Krainov (ADK) [24; 26], or the Yudin-Ivanov [27] formulas. The initial values of the coordinates for the classical trajectories are determined by the tunnel exit position, which can be found using the field direction model (FDM) [17] or a more refined approach based on the use of the parabolic coordinate system [28] for the atomic systems governed purely by the Coulomb interaction. The initial velocities in the directions perpendicular to the orientation of the electric field vector at the time of ionization are typically assumed to be distributed according to the well-known SFA formula for the transverse velocity distribution [23]. The initial velocity in the direction parallel to the electric field vector at the moment of ionization is typically assumed to be zero [18]. 
The success and utility of this semi-classical picture of atomic and molecular ionization is ultimately due to the essentially semiclassical nature of many aspects of the ionization phenomena which can be explained quite satisfactorily using semiclassical trajectory simulations. The so-called low-energy structures in strong field ionization spectra [29; 30], Coulomb focusing effect [31], nonadiabatic effects in strong field ionization [32], frustrated tunneling ionization [33], have been studied using the trajectory-based methods. The semi-classical approaches based on the classical trajectories can include the interference effects as well, which can be done by using the so-called Quantum Trajectory Monte Carlo (QTMC) approach[34; 35], or the semiclassical two-step model for strong-field ionization [36], which allows to obtain angular photo-electron distributions in good agreement with fully quantum calculations based on the solution of the time-dependent Schrodinger equation (TDSE). Such semiclassical simulations usually require much less computational effort than the fully quantum calculations and consequently, for complex targets, when numerical solution of the TDSE becomes unfeasible, use of such methods may provide the only possibility to obtain quantitative description of the ionization process. If we are interested in a purely quantum mechanical description of the motion of ionized electrons and still want to be able to use some classical notions, one can apply the saddle-point method (SPM) to evaluate the SFA [37; 38; 39; 40; 41; 42; 7] or the Feynman's path-integral expressions for the ionization amplitude [43]. One obtains in this way a description of the ionization phenomena in terms of the so-called quantum trajectories (QT). QT are determined by the SPM equations and make the action in the integrals determining ionization amplitudes stationary. QT is a generally complex electron trajectory originating at the complex saddle point \(t_{s}\) and propagating till the final moment of time \(t_{f}\), when the electron arrives at the detector. This approach gives a very transparent and appealing view of the ionization process. It is, moreover, very flexible and allows to design a number of different generalizations and developments [37; 44]. One may, for instance, start with the SFA ionization amplitude, ignoring effects of the ionic potential in the continuum [37; 45; 46], evaluate it applying the SPM [37; 43; 46; 47] and obtain a description of the ionization process in terms of the complex QT which are solutions to Newton's equations for a classical electron in the presence of the laser field. Such a description provides a link with the SMM. To include the effects of the ionic potential one can consider perturbatively the Coulomb effects on the QT used in the SFA (the so-called Coulomb corrected SFA or CCSFA method [48]). Alternatively, one can consider the Coulomb and laser field effects on the trajectories on equal footing and find the QT as solutions to the Newton's equations of motion in presence of the Coulomb and laser fields, still using the SFA equation defining the saddle-point (the Trajectory-based Coulomb SFA or (TCSFA) method [49]). Yet more generally, one may apply the SPM to evaluate the Feynman's path-integral representation of the ionization amplitude [43], obtaining QT which are solutions to the Newton's equations of motion in presence of the Coulomb and laser fields with a more complicated condition defining the starting time \(t_{s}\) of the trajectory. 
The path connecting \(t_{s}\) and \(t_{f}\) in the complex time-plane is often chosen to consist of two straight line segments: \((t_{s},Re\ t_{s})\) and \((Re\ t_{s},t_{f})\), with \(Re\ t_{s}\) interpreted as the tunnel exit point. One should bear in mind, however, that QT are all but a convenient (albeit very useful and powerful) mathematical construct, arising as a result of the application of the SPM for evaluation of ionization amplitudes. In particular, the path connecting \(t_{s}\) and \(t_{f}\) described above is not unique and can be deformed to cross the real time-axis almost at any given point [37]. This remark applies equally, of course, to the notion of the tunnel exit used in the CTMC method [37; 44]. This path-dependence of the time and location of the tunnel exit does not affect the quantum amplitude since the integrals defining the amplitudes depend only on the end points \(t_{s}\) and \(t_{f}\) of the path (as long as deforming the path we do not cross singular points of the integrand in the complex time-plane). As it was mentioned in the review work [37], no physical experiment can favor particular values for time or location of the tunnel exit event, which does not prevent these notions to be extremely useful for practical purposes. In practice, the path connecting \(t_{s}\) and \(t_{f}\), consisting of the two straight line segments: \((t_{s},Re\ t_{s})\) and \((Re\ t_{s},t_{f})\) is the most convenient choice. The exit time \(Re\ t_{s}\) and the tunnel exit point are related to the corresponding sub-barrier part of the QT as the real part of the expression \(\mathbf{r}=\int\limits_{t_{s}}^{Re\ t_{s}}\mathbf{v}(t)\ dt\), where \(\mathbf{v}(t)\) is the complex-valued sub-barrier electron velocity. The sub-barrier part of the QT thus defined can be used to provide the necessary prerequisites for the CTMC simulations, such as position of the tunnel exit, the transverse and longitudinal momentum distributions at the tunnel exit and the instantaneous ionization rate [34; 50]. Taking the real part of the QT at \(t=Re\ t_{s}\) as the tunnel exit position has the advantage that the imaginary part of the action which determines the ionization probability is accumulated in the sub-barrier region, while subsequent propagation of the QT in the real time only produces phase-shift due to the change of the real part of the action. This choice also allows to avoid complicated issues of branching points and branch cuts in the complex time-plane [44]. Of course, all the information we can hope to obtain about a physical system is encoded in its wave function. Any question about the motion of the ionized wave-packets for times within the laser pulse, should therefore be resolved from the solution of the TDSE, provided that this question has a physical meaning at all. The approach based solely on the information obtained by solving the TDSE, however, encounters the problem of identifying the contributions from different channels (ionization, excitation, and so on) for times within the laser pulse, when the wave-packets corresponding to these channels are not spatially sepa rated yet. The splitting of the total wave-function of the system into the bound and ionized components seems to have been achieved in the SFA and the Perelomov-Popov-Terent'ev (PPT) approaches [19; 20; 21; 22; 23; 26]. Such a splitting is not unique, however, and it is different in the two theories. Moreover, it is not gauge-invariant in the SFA or PPT approaches [37]. 
Similarly, the procedure based on projecting out contributions of the bound states of the field-free atomic Hamiltonian from the TDSE wave-function, which is sometimes used to define wave-packet describing ionized electron, is not gauge-invariant when applied for the times within the laser pulse duration. In [51] a method, allowing to identify the part of the wave-function describing ionized electron and relating the TDSE-based approach with the insight offered by the trajectory based approaches, has been proposed. In the framework of this method the photoionized part of the wave function is singled out by means of applying the time dependent surface flux (tSURFF) method [52], which relies on the knowledge of the wave-function in the asymptotic region, when the photo-ionized part of the wave function is localized in space. By applying a short-time filter to the ionization amplitude, calculated using the tSURFF method, authors were able to identify the dominant pathways which form the photoelectron spectra. Another group of methods allowing to connect the TDSE and the notion of trajectory is based on the Bohmian interpretation of the QM [53]. Bohmian view of the QM introduces a well-defined notion of the electron trajectory, exactly reproducing at the same time all the predictions of the conventional QM [54]. This possibility of reintroducing trajectories in the QM framework has been exploited to describe ionization of atoms [55; 56] and molecules [57] driven by strong laser fields, and for the description of the HHG process [58; 59; 60]. An approach to the problem of the tunneling time, based on the Bohmian QM, has been described in [61]. In [62] the notion of the coordinate distributions describing ionized electrons has been defined using the Bohmian trajectories, which allowed to look at the tunnel exit problem from the perspective offered by the Bohmian QM. In our earlier works [63; 64; 65] we described an alternative method that allowed us to extract from the TDSE information about the time-development of the ionization process for times within the laser pulse duration. The method is based on the analysis of two-time correlation functions, computed using the time-dependent wave-function describing evolution of the system, which was obtained by solving the TDSE numerically. In essence, this procedure allows us to use the notion of the conditional probability, where the condition is imposed at an instant when the laser pulse is gone. In other words, we formulate the questions about different observables characterizing the electron motion in the following way: What would be the probability of observing a given value of a certain observable during the laser pulse, provided that the electron is found in a given state at the end of the laser pulse? We applied this technique to study the evolution of the electron velocity distribution in strong field ionization [64] and to study trajectories of the electron wave-packets for the process of the frustrated tunneling ionization (FTI) [65]. Here we report a study of evolution of the ionized wave-packets for the process of strong field ionization, based on the analysis of the information obtained from the numerical solution of the TDSE. Atomic units with \(\hbar=1\), \(e=1\), \(m=1\) where \(e\) and \(m\) being the charge and the mass of the electron are used throughout the paper. 
### Theory We consider an atom interacting with a linearly polarized laser pulse which we define in terms of its vector potential: \(\mathbf{E}(t)=-\dfrac{\partial\mathbf{A}(t)}{\partial t}\), where: \[\mathbf{A}(t)=-\hat{\mathbf{e}}_{z}\dfrac{E_{0}}{\omega}\sin^{2}\left(\dfrac{\pi t}{ T_{1}}\right)\sin\left(\omega t+\phi\right)\,. \tag{1}\] Here \(T_{1}=N_{c}T\) is the total pulse duration and \(T=2\pi/\omega\) is the optical cycle (o.c.) corresponding to the central frequency \(\omega=0.057\) a.u. (the wavelength of 800 nm). In the calculations below we use pulses with \(N_{c}=4\). The target system is described using the single-active electron (SAE) approximation and a spherical potential \(V(r)\). As targets, we will consider the hydrogen atom with \(V(r)=-1/r\), a model atom with a short range (SR) Yukawa-type potential \(V(r)=-1.903e^{-r}/r\) and the Ar atom described by means of an effective potential [66]. The target atom is initially in the ground state \(|\phi_{0}\rangle\), which is an \(s\) state for the hydrogen and Yukawa atoms (both with the ionization potential of 0.5 a.u.) and a \(p\) state with the energy -0.59 a.u. in the case of the Ar atom. We have shown in earlier works [63; 64; 65] that tunneling ionization dynamics can be studied in detail by analyzing suitably chosen two-time correlation functions: \[C(A(t_{1})B(t_{2}))=\langle\phi_{0}|\hat{A}^{H}(t_{1})\hat{B}^{H}(t_{2})|\phi_ {0}\rangle\,\,, \tag{2}\] where the operators \(\hat{A}^{H}(t)\) and \(\hat{B}^{H}(t)\) in (2) are taken in the Heisenberg representation, and \(|\phi_{0}\rangle\) is the initial state of the system. The particular choice of the operators \(\hat{A}\) and \(\hat{B}\) in Eq. (2) is dictated by the nature of the problem under consideration. We have shown in [64; 65] that by choosing for \(\hat{B}^{H}(t)\) the Heisenberg form \(\hat{Q}^{H}(T_{1})\) of a suitable Schrodinger projection operator \(\hat{Q}\) one can study the dynamical development of various ionization processes. The reason why we may expect the correlation function (2) to provide a useful dynamical information with such a choice of \(\hat{B}\) can be easily understood if in Eq. (2) we transform the operators to the more familiar Schrodinger picture: \[\hat{A}^{H}(t) = \hat{U}(0,t)\hat{A}\hat{U}(t,0)\] \[\hat{B}^{H}(t) = \hat{U}(0,t)\hat{B}\hat{U}(t,0)\, \tag{3}\] where \(\hat{U}(t,0)\) is the operator describing quantum evolution of the system, so that the wave function of the system at time \(t\) is \(\Psi(t)=\hat{U}(t,0)\phi_{0}\). Applying the transformation (3) we rewrite Eq. (2) as: \[C(A(t_{1})B(t_{2}))=\langle\hat{A}\Psi(t_{1})|\hat{U}(t_{1},t_{2})\hat{B}\Psi( t_{2})\rangle. \tag{4}\] Let us assume, for instance, that \(\hat{B}=\hat{P}\), where \(\hat{P}\) is the projection operator on the continuous spectrum of the field-free atomic Hamiltonian, and \(t_{2}=T_{1}\), where \(T_{1}\) is the moment of time when the laser pulse is gone. Then, according to the well-known projection postulate of QM [67], the ket-vector \(\hat{P}\Psi(t_{2})\rangle\) represents, apart from an unimportant normalization factor, the wave-function of the system immediately after the measurement that has found the electron in an ionized state. Eq. (4) can therefore be interpreted as a quantum-mechanical amplitude of finding an electron in the state \(|\hat{A}\Psi(t_{1})\rangle\) at the moment \(t=t_{1}\) provided that the electron has been found in an ionized state after the end of the pulse. 
With a suitable choice of the operator \(\hat{A}\) (we will discuss this choice in more detail below) we can now have a glimpse of the dynamical characteristics of the ionized electrons for the moments of time \(t<T_{1}\) within the laser pulse duration. Similarly, if we use \(\hat{B}=\hat{I}-\hat{P}\) in Eq. (4) and again choose \(t_{2}=T_{1}\), the expression for the correlation function can be interpreted as an amplitude of finding an electron in the state \(|\hat{A}\Psi(t_{1})\rangle\) at the moment \(t=t_{1}\) provided that the electron remains bound after the end of the pulse, which allows us to study dynamics of the FTI process for \(t<T_{1}\). We can concentrate on various aspects of the electron dynamics by choosing the operator \(\hat{A}\) in Eq. (4) appropriately. We can choose, for instance, \(\hat{A}\) to be a projection operator in momentum space. This choice together with \(\hat{B}=\hat{P}\) allows us to study the development of the ionized electron velocity distribution [64]. The choice of \(\hat{B}=\hat{I}-\hat{P}\) and a coordinate space projection operator for \(\hat{A}\) allowed us to study evolution of the FTI electrons in coordinate space. We exploit below yet another possibility, using the following Schrodinger operators in the definition of the correlation function (4): \[\hat{B} = \hat{P}\] \[\hat{A}_{z_{0}} = |\phi_{z_{0}}\rangle\langle\phi_{z_{0}}|\, \tag{5}\] where the components of the ket vector \(|\phi_{z_{0}}\rangle\) in the position representation are: \[\phi_{z_{0}}(\mathbf{r})=\langle\mathbf{r}|\phi_{z_{0}}\rangle=Ne^{-a(\mathbf{r}-\mathbf{e}_{z }z_{0})^{2}}. \tag{6}\] In Eq. (6) \(N\) is the normalization factor. The ket vector \(|\phi_{z_{0}}\rangle\) and its components in the coordinate basis given by (6) depend on the parameters \(z_{0}\) and \(a\), defining a point in space with the coordinates \((0,0,z_{0})\) and the resolution with which we look at the neighborhood of this point. In the calculations below we used \(a=4\ln 2\) which gives us the spatial resolution of approximately one atomic unit. We use the position representation in all the calculations below. In this representation action of the projection operator \(\hat{A}_{z_{0}}\) on a state vector \(|\Phi\rangle\) with the components \(\Phi(\mathbf{r})=\langle\mathbf{r}|\Phi\rangle\) can be found as: \[\langle\mathbf{r}|\hat{A}_{z_{0}}|\Phi\rangle=\phi_{z_{0}}(\mathbf{r})\int\phi_{z_{0} }^{*}(\mathbf{r})\Phi(\mathbf{r})\ d\mathbf{r}. \tag{7}\] We choose \(t_{2}=T_{1}\) in Eq. (4), where \(T_{1}\) is duration of the laser pulse and we will be looking at various moments of time \(t\leq T_{1}\). We will be thus studying the correlation function: \[C(z_{0},t)=\langle\hat{A}_{z_{0}}\Psi(t)|\hat{U}(t,T_{1})\hat{P}\Psi(T_{1})\rangle \tag{8}\] for \(t\leq T_{1}\), with the operators \(\hat{A}\) and \(\hat{P}\) specified in Eq. (5). It is clear from the above discussion that with such a choice, Eq. (8) can be interpreted as giving us (apart from an unimportant normalization factor) a quantum-mechanical amplitude of finding the electron near the point with the coordinates \((0,0,z_{0})\) at the time \(t\) provided that the electron will be found in an ionized state after the end of the pulse. In other words, this expression provides a means of studying trajectories of the ionized electrons during the laser pulse. To calculate the correlation function (8) we use a procedure similar to the one we used previously in [63; 64; 65], and we will only briefly describe the technical details. 
The calculation can be reduced to multiple solutions of the 3D time-dependent Schrodinger equation (TDSE): \[i\frac{\partial\Psi(\mathbf{r},t)}{\partial t}=\left(\hat{H}_{\rm atom}+\hat{H}_{ \rm int}(t)\right)\Psi(\mathbf{r},t)\, \tag{9}\] where \(H_{\rm atom}=\frac{\hat{\mathbf{p}}^{2}}{2}+V(r)\) is the field free atomic Hamiltonian and \(\hat{H}_{\rm int}(t)\) is the atom-field interaction Hamiltonian for which we use the length form: \[\hat{H}_{\rm int}(\mathbf{r},t)=\mathbf{r}\cdot\mathbf{E}(t). \tag{10}\] We first propagate the TDSE forward in time on the interval \((0,T_{1})\), using ground atomic state as the initial state, obtaining position representation of the state vector \(|\Psi(T_{1})\rangle\). Acting on \(|\Psi(T_{1})\rangle\) with the projection operator \(\hat{P}\) we obtain the wave-function \(\Phi(\mathbf{r})\) corresponding to the vector \(|\Phi\rangle=\hat{P}|\Psi(T_{1})\rangle\). To find the vector \(\hat{U}(t,T_{1})\hat{P}\Psi(T_{1})\rangle\), that we need to compute the matrix element in Eq. (8), we propagate the TDSE (9) backward in time using \(\Phi(\mathbf{r})\) as an initial (or rather final) wave-function, obtaining the vector \(|\Phi(t)\rangle\) with the components \(\Phi(\mathbf{r},t)\) for the times \(t\) within the laser pulse. Simultaneously, we propagate backward in time the vector \(|\Psi(T_{1})\rangle\), obtaining the vector \(|\Psi(t)\rangle\) and the wave-function \(\Psi(\mathbf{r},t)\)- solution to the TDSE for the times \(t\) within the laser pulse. Of course, \(\Psi(\mathbf{r},t)\) had already been computed during the first, forward run of the TDSE, but we cannot store it in memory for all the times \(t\) we need as it would require too much memory space. We recompute it again, therefore, in the process of the back-propagation of the TDSE. Calculating overlaps of \(|\Phi(t)\rangle\) and \(|\hat{A}_{z0}\Psi(t)\rangle\) for a given \(z_{0}\) and for a given set of times \(t\) (we use the grid of \(t\) with twenty points for every optical cycle), we obtain the correlation function \(C(z_{0},t)\) defined in Eq. (8). A single calculation using the backward propagation that we described above, gives us \(C(z_{0},t)\) for the whole grid of \(t\) and a single \(z_{0}\). The procedure is repeated for different values of \(z_{0}\). More specifically, we used a grid of a hundred \(z_{0}\)-values equally spaced on the interval (\(-60\) a.u., \(60\) a.u.). The TDSE was solved numerically using the procedure tested and described in detail earlier [68; 69; 70]. The procedure relies on representing the coordinate wave-function as a series in spherical harmonics with the quantization axis along the laser polarization direction. Spherical harmonics with orders up to \(L_{\rm max}=50\) were used. The radial variable was treated by discretizing the TDSE on a grid with a step-size \(\delta r=0.05\) a.u. in a box of size \(R_{\rm max}=400\) a.u. The initial ground state of the system was obtained by using a variational calculation employing the Slater basis set [71] with subsequent propagation in imaginary time [72] on the spatial grid we described above. The necessary convergence checks were performed. As we discussed above, to calculate the correlation function (4) we have to propagate the TDSE both forward and backward in time. That was achieved by using the matrix iteration method [73]. ## Results ### Correlation function analysis. Short range Yukawa potential. 
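As a guide to how the maps discussed below are produced, the following reduced one-dimensional sketch (our illustration, not the authors' 3D spherical-harmonics code) implements the pipeline of the previous section: forward propagation of the TDSE, projection onto the continuum, backward propagation of the projected state, and overlaps with the Gaussian of Eq. (6). A soft-core potential, a split-operator propagator, a sin\({}^{2}\)-envelope field written directly for \(E(t)\), a field amplitude corresponding to roughly \(10^{14}\) W/cm\({}^{2}\), and a projector \(\hat{P}\) that removes only the ground-state component are simplifying assumptions of the sketch; the 1D version can also afford to store the snapshots of \(\Psi(t)\) instead of recomputing them by back-propagation.

```python
# 1D sketch (ours, atomic units) of the correlation-function pipeline of Eq. (8):
# forward TDSE run, projection onto the "continuum", backward run, Gaussian overlaps.
import numpy as np

# grid and soft-core potential (Ip ~ 0.5 a.u.; both are assumptions of the sketch)
Ngrid, L = 8192, 800.0
z = (np.arange(Ngrid) - Ngrid // 2) * (L / Ngrid)
dz = z[1] - z[0]
k = 2 * np.pi * np.fft.fftfreq(Ngrid, d=dz)
V = -1.0 / np.sqrt(z**2 + 2.0)

E0, omega, Nc = 0.0534, 0.057, 4                       # ~1e14 W/cm^2 (assumed), 800 nm, 4 cycles
T1 = Nc * 2 * np.pi / omega
E = lambda t: E0 * np.sin(np.pi * t / T1) ** 2 * np.sin(omega * t)

def step(psi, t, dt):
    """One split-operator step for i dpsi/dt = (k^2/2 + V + z*E(t)) psi (length gauge)."""
    half = np.exp(-0.5j * dt * (V + z * E(t + 0.5 * dt)))
    psi = half * psi
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    return half * psi

# ground state by imaginary-time propagation
tau = 0.05
psi0 = np.exp(-z**2)
for _ in range(4000):
    psi0 = np.exp(-0.5 * tau * V) * psi0
    psi0 = np.fft.ifft(np.exp(-0.5 * tau * k**2) * np.fft.fft(psi0)).real
    psi0 = np.exp(-0.5 * tau * V) * psi0
    psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dz)
psi0 = psi0.astype(complex)

nsteps = int(T1 / 0.05)
dt = T1 / nsteps
sample_every = nsteps // (20 * Nc)                     # roughly 20 samples per optical cycle

# forward run: keep snapshots of Psi(t) (affordable in 1D)
psi, snapshots, times = psi0.copy(), [], []
for n in range(nsteps):
    if n % sample_every == 0:
        snapshots.append(psi.copy()); times.append(n * dt)
    psi = step(psi, n * dt, dt)

# crude projector P: remove only the ground-state component from Psi(T1)
phi = psi - (np.sum(np.conj(psi0) * psi) * dz) * psi0

# backward run of the projected state; overlaps with the Gaussian projector of Eq. (6)
a = 4 * np.log(2)
z0_grid = np.linspace(-60, 60, 61)
corr = np.zeros((len(z0_grid), len(snapshots)), dtype=complex)
for n in range(nsteps, 0, -1):
    phi = step(phi, n * dt, -dt)                       # backward step: dt -> -dt
    if (n - 1) % sample_every == 0:
        m = (n - 1) // sample_every
        for iz, z0 in enumerate(z0_grid):
            g = (2 * a / np.pi) ** 0.25 * np.exp(-a * (z - z0) ** 2)   # 1D analogue of Eq. (6)
            corr[iz, m] = (np.sum(np.conj(snapshots[m]) * g) * dz) * (np.sum(g * phi) * dz)
# plotting |corr| ** (1/9) over (z0, t) gives a map of the kind shown in the figures
```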
In this section, we present results of the correlation function analysis based on the formal theory of the previous section. These results are displayed in Figures 1-4 below. Brighter colors in the figures correspond to greater values of \(|C(z,t)|\). The ionized wave-packets spread fast (this spread is discussed in more detail below), so that \(|C(z,t)|\) decreases very fast in magnitude as we move away from the instant of ionization. To see the evolution of the wave-packets in greater detail and to be able to discern in the figures structures with very different magnitudes, we show the exponentiated values \(|C(z,t)|^{1/9}\). Fig. 1 visualizes the birth and propagation of a photo-electron in a model Yukawa atom with the SR potential for different field strengths corresponding to a range of Keldysh parameters \(\gamma=\sqrt{2I_{p}}/A_{0}\) evaluated from the vector potential peak strength \(A_{0}\) and the ionization potential \(I_{p}\). Our modeling covers both the tunneling (\(\gamma\lesssim 1\)) and the multiphoton (\(\gamma>1\)) regimes. The lines in Fig. 1 and the other figures display the classical trajectories originating at the three main local maxima of the laser pulse we use. Pulse shapes for different CEPs are shown in Fig. 2. Taking into account that the trajectories originating at the field maxima with zero velocities receive higher weights in the CTMC method, we may expect those trajectories to be related to the quantum picture we are analyzing. Following the prescriptions of the CTMC method, these classical trajectories have been computed assuming zero initial velocities and the initial \(z\)-coordinate defined by the FDM. In this model, the electron coordinate at the tunnel exit is determined as the outer point where the electron kinetic energy becomes positive. This energy is obtained from energy conservation taking into account the combined potential of the ionic core and the external electric field, which is assumed to be static. For brevity, we will call this construction the CTMC trajectories. For the Yukawa potential, the CTMC trajectories launched at the local field maxima practically coincide with the trajectories we would have obtained had we completely neglected any ionic potential in the classical trajectory calculations. We will call such trajectories the Coulomb-free trajectories below. We do not show the Coulomb-free trajectories in Fig. 1, as they would be practically indistinguishable from the CTMC trajectories shown in the figure. As one can observe, except for the case shown in Fig. 1a, the ionized electron wave-packets initially propagate along the classical CTMC trajectories launched at the field maxima. The case of the field intensity of \(2\times 10^{13}\) W/cm\({}^{2}\) shown in Fig. 1a stands apart. It belongs to the multiphoton regime with the Keldysh parameter \(\gamma=2.39\). Motion of the ionized wave-packets, as rendered by the quantum calculation, deviates considerably from the classical CTMC trajectories launched at the field maxima. Figure 1: (Color online) Visualization of the correlation function (8) for the Yukawa atom at different field intensities \(I\) and the CEP \(\phi=0\). The correlation function is exponentiated (\(|C(z,t)|^{1/9}\) is shown) for improving the visibility of the patterns. The lines in the figure display the classical trajectories originating at the main (dots) and two auxiliary (dot-dash and dash) maxima of the driving laser pulse.
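For orientation, the reference trajectories just described can be generated with a few lines. The sketch below (ours, atomic units, not the authors' code) locates the local maxima of \(|E(t)|\) for the pulse (1), uses the short-range approximation \(z_{0}=-I_{p}/E(t_{0})\) for the tunnel-exit position (the FDM used in the text additionally accounts for the core potential), and integrates the Coulomb-free Newton equation with zero initial velocity, which, as noted above, practically coincides with the CTMC trajectories for the Yukawa atom. The field amplitude corresponding to \(10^{14}\) W/cm\({}^{2}\) is our own conversion.

```python
# Sketch (ours, atomic units): Coulomb-free reference trajectories launched at the
# local maxima of |E(t)| with zero velocity and a short-range FDM-style exit point.
import numpy as np

Ip, omega, Nc, E0, cep = 0.5, 0.057, 4, 0.0534, 0.0    # E0 for ~1e14 W/cm^2 is our assumed value
T1 = Nc * 2 * np.pi / omega

def A(t):                                   # vector potential of Eq. (1)
    return -(E0 / omega) * np.sin(np.pi * t / T1) ** 2 * np.sin(omega * t + cep)

def E(t, h=1e-4):                           # E = -dA/dt, by central differences
    return -(A(t + h) - A(t - h)) / (2 * h)

t = np.linspace(0, T1, 8000)
Et = E(t)
peak_idx = [i for i in range(1, len(t) - 1)
            if abs(Et[i]) >= abs(Et[i - 1]) and abs(Et[i]) > abs(Et[i + 1])
            and abs(Et[i]) > 0.3 * np.max(np.abs(Et))]

def coulomb_free_trajectory(t0):
    """z'' = -E(t), with z(t0) = -Ip/E(t0) (short-range exit) and v(t0) = 0."""
    dt = t[1] - t[0]
    z, v, tt = -Ip / E(t0), 0.0, t0
    zs = [z]
    while tt < T1:
        v += -E(tt) * dt                    # simple Euler step; adequate for an illustration
        z += v * dt
        tt += dt
        zs.append(z)
    return np.array(zs)

for i in peak_idx:
    zs = coulomb_free_trajectory(t[i])
    print(f"birth at t = {t[i] / (2 * np.pi / omega):.2f} o.c.: "
          f"exit z = {zs[0]:+.1f} a.u., z(T1) = {zs[-1]:+.1f} a.u.")
```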
As one can see from the figure, this deviation is, in part, due to the incorrect initial value of the coordinate, for which the FDM gives too large a value. Also, for such a value of the parameter \(\gamma\) we cannot expect the description based on the notion of an effective potential to remain accurate, and we cannot expect that in this ionization regime the use of only the classical trajectories launched at the field maxima might provide a good approximation to the quantum picture. To obtain such an approximation one should, as is done in the CTMC calculations, include the totality of the trajectories originating at various times within the laser pulse. To relate the birth place of the photo-electron to the local maxima of the electric field, we varied the carrier-envelope phase (CEP) of the driving laser pulse. Results of these simulations for the Yukawa atom are presented in Fig. 2. We see again that initially the ionized wave-packets closely follow the CTMC trajectories, which, in turn, are practically identical to the Coulomb-free trajectories. These results are in accordance with the SFA, in which the birth and motion of the ionized wave packets are solely due to the electron-laser interaction. Such an approach is fully justified for the SR Yukawa atom. Our approach allows us to visualize how this picture actually emerges from an _ab initio_ TDSE calculation. As one can see from Fig. 1 and Fig. 2, at the later stages of evolution the ionized wave-packets broaden and their paths may deviate considerably from the CTMC trajectories launched at the field maxima. This, we believe, is a consequence of the wave-packet spread and of the interference of the wave-packets emitted at different times. To describe these effects qualitatively we can use a model based on the SFA which we describe in the Appendix. ### Correlation function analysis. H and Ar atoms. Fig. 3 and Fig. 4 are analogous to Fig. 1 and show the photo-electron trajectories for the hydrogen and Ar atoms, respectively. As in the case of the SR potential, for some time after their birth the ionized wave-packets follow the classical trajectories relatively closely, progressively widening. This widening and the interference of the wave-packets born at different local maxima of the field alter this motion at the later stages of the evolution. Unlike the SR Yukawa case, the long range Coulomb force introduces a considerable change into the wave-packet dynamics. This point is illustrated in Fig. 3, where the solid line shows the Coulomb-free trajectory originating at the main field maximum, i.e., the trajectory obtained if the ionic potential (the pure Coulomb potential in the hydrogen case) is neglected. One can see that the Coulomb-free trajectory deviates quite considerably both from the CTMC trajectory and from the TDSE correlation pattern. In Fig. 4 and Fig. 5 we present results for the Ar atom with the initial-state \(3p\) orbital oriented differently with respect to the laser polarization vector. Fig. 4 shows results for the initial \(3p_{z}\) state, oriented in the \(z\)-direction along the laser field, while Fig. 5 shows results for the initial \(3p_{x}\) state, oriented in the \(x\)-direction perpendicular to the laser field. Unlike the two previous cases of the Yukawa and hydrogen atoms, Fig. 4 shows horizontal bands, present at the initial stage of the evolution before the first maximum of the laser pulse.
These bands reflect the distribution of the electron density along the polarization direction due to the nodal structure of the initial \(3p_{z}\) state. For the cases of the Yukawa and H atoms with the initial \(s-\) state shown in Fig. 1, Fig. 2 and Fig. 3 we have, of course, only one band concentrated near the origin where the coordinate density of the unperturbed initial state is maximal. For the moments of time within the laser pulse, preceding the first local maximum of the pulse, the presence of such a band (or bands in the case of the \(3p_{z}\) state of Ar) in the plots showing the correlation function is easy to explain. It is just a consequence of the simple fact that all the ionized electrons resided initially in the ground atomic state. According to this logic the bands due to the correlations between the ionized electrons and the electrons in the initial atomic state, should disappear or diminish in brightness for the moments of time exceeding position of the major maximum of the field strength, when relatively few electrons can be ionized. In other words, if \(t_{m}\) is the position of the major maximum of the pulse field strength, then we might expect these bands to start vanishing or diminishing in brightness for \(t\gtrsim t_{m}\). We see that this is indeed the case for the correlation pattern for the ionization from the \(3p_{z}\) state of argon atom shown in Fig. 4, where the bands describing correlations between the ionized and bound electrons vanish for \(t>t_{m}\). This is also the case of the correlation pattern for the hydrogen atom shown in Fig. 3, where the band around the line \(z=0\) diminishes in brightness for \(t>t_{m}\). The picture is apparently different for the correlation patterns for the Yukawa atom (Fig. 1 and Fig. 2) and ionization from the \(3p_{x}\) state of the Ar atom (Fig. 5). With the exceptions of Fig. 1c and Fig. 1d showing results for the Yukawa atom for higher field strengths, these figures do not show any appreciable change in the degree of correlation between ionized and bound electrons for \(t>t_{m}\). We believe that this apparently counter-intuitive behavior is an artefact which is due to a problem which is very hard to avoid in a numerical calculation. If we inspect the definition (8) of the correlation function, we see that the first step of the calculation consists in projecting out the contributions of the bound states from the state vector \(|\Psi(T_{1})\rangle\) describing the system at the end of the pulse. In practical calculations we perform this projection operation as follows: \[\hat{P}|\Psi(T_{1})\rangle=|\Psi(T_{1})\rangle-\sum_{k}\langle\phi_{k}|\Psi(T_ {1})\rangle|\phi_{k}\rangle\, \tag{11}\] where \(|\phi_{k}\rangle\) describe bound states of the system. It is unavoidable in numerical calculations that \(|\phi_{k}\rangle\) differ slightly from the state vectors describing the true bound atomic states. This means that after performing the projection operation, the resulting vector in Eq. (11) is only approximately orthogonal to all the atomic bound states vectors. Most important of course, is the possible non-orthogonality to the initial atomic state, which which would manifest itself as presence of correlations between the bound and ionized electrons even for the times when ionization process effectively ceases. The extent to which this possible non-orthogonality issue may alter the correlation pattern depends, of course, on the magnitude of the vector \(\hat{P}|\Psi(T_{1})\rangle\). 
Clearly, this numerical problem plays more significant role when this magnitude \(||\hat{P}\Psi(T_{1})||^{2}=\langle\Psi(T_{1})|\hat{P}|\Psi(T_{1})\rangle\) is small, or, in other words, when the ionization probability is small. We can expect, therefore, this numerical problem to be less important for the systems with higher ionization probabilities. This conclusion is confirmed by our data. Let us consider the particular case of the field intensity of \(10^{14}\) W/cm\({}^{2}\) and zero CEP. For these field parameters we obtain the following values for the total ionization probabilities \(P_{\rm ion}\) for the targets we consider: \(P_{\rm ion}=2.49\times 10^{-5}\) for the Yukawa atom, \(P_{\rm ion}=6.61\times 10^{-3}\) for the hydrogen atom, \(P_{\rm ion}=7.65\times 10^{-2}\) for the Ar atom (\(3p_{z}\) initial state), and \(P_{\rm ion}=3.42\times 10^{-3}\) for the Ar atom (\(3p_{x}\) initial state). One can see that, indeed, our data show the expected behavior of the correlation pattern, with the bands describing correlations between the ionized and bound electrons vanishing or diminishing in magnitude considerably for \(t>t_{m}\), in the cases of higher total ionization probabilities. viz., in the cases of the hydrogen and the Ar atom prepared initially in the \(3p_{z}\) state. We also see this expected behavior of the correlation patterns for the Yukawa atom in the cases of the higher field intensities shown in Fig. 1c and Fig. 1d. ### Coordinate and velocity distributions at the moment of ionization. By taking the slices of the correlation patterns at \(t=t_{0}\) along the lines of the constant \(t\) one may try to obtain some information about the distribution of electron coordinates at the tunnel exit. We will be interested in a normalized quantity: \[d(z_{0})=\frac{|C(z,t_{0})|^{2}}{|\langle\Psi(t_{0})|\hat{A}_{z}|\Psi(t_{0}) \rangle|^{2}}\;. \tag{12}\] Here \(C(z,t_{0})\) is the correlation function (8), \(\hat{A}_{z}\) is the coordinate projection operator defined in Eq. (5) and \(\Psi(t_{0})\) is the solution of the TDSE describing the evolution of the system. The normalization used in Eq. (12) removes a trivial \(z-\) dependence of the correlation function. In Fig. 6 we show results for \(d(z)\) by taking the slices at \(t_{0}=2\) o.c. for the Yukawa, hydrogen and Ar atoms at the pulse intensity of \(10^{14}\) W/cm\({}^{2}\) and zero CEP. We are thus looking at the electrons born at the main maximum of the laser pulse. We should bear in mind that the correlation function is not a probability distribution, and strictly speaking, it does not give us the coordinate probability distribution directly. We may expect, nevertheless, that the spatial profile of the distribution defined in Eq. (12) may inherit the main features of the probability distribution, in particular, the position of the maximum and the width of the coordinate probability distribution at the moment of electron ionization. These expectations are based on the following observation. By the projection postulate of QM [67], the ket-vector \(A_{z}|\Psi(t_{0})\rangle/\langle\Psi(t_{0})|\hat{A}_{z}|\Psi(t_{0})\rangle\) represents the state of the system immediately after the measurement that detects the electron in the neighborhood of the point \((0,0,z)\) at time \(t_{0}\). From the Eq. (12) and the definition Eq. 
(8), we see then that expression (12) can be interpreted as the probability to detect the electron in the ionized state \(\hat{P}\Psi(T_{1})\) at the end of the laser pulse, provided that it was found near the point \((0,0,z)\) at time \(t_{0}\). One can see that the maxima of the distributions given by Eq. (12) are indeed quite close to the FDM predictions for the three targets we have considered. Using the plots in Fig. 6 we can find the full widths at half maximum (FWHM) of the coordinate distributions. For the Yukawa and hydrogen atoms these estimates are given in Table 1. One can use a simple check to verify whether these estimates are reasonable. Let us assume that the coordinate distribution is a Gaussian with the FWHM \(\Delta_{z}\). Then, by performing a Fourier transform we obtain the velocity distribution, which will again be a Gaussian, with the FWHM \(\Delta_{v}\) related to the coordinate FWHM as \(\Delta_{v}\Delta_{z}=8\ln 2\). We obtain in this way the estimates for the FWHMs of the velocity distributions shown in Table 1. We can compare these estimates to the FWHM following from the well-known SFA relation [23] for the longitudinal electron velocity distribution: \[W(v_{z})=\text{const}\times\exp\left\{-2Kv_{z}^{2}(\text{arcsinh}\,\gamma-\gamma(1+\gamma^{2})^{-1/2})\right\}\,. \tag{13}\] Here \(K=I_{p}/\omega\), \(I_{p}\) is the target ionization potential and \(\gamma\) is the Keldysh parameter. This expression gives the velocity distribution at the detector. The FWHM of the distribution (13) is also shown in Table 1. In general, the longitudinal velocity distribution at the ionization instant does not need to coincide with the distribution (13), since this distribution may be affected by the ionic core potential during the post-ionization propagation. We can, however, expect this propagation effect to play a small role for the short range Yukawa potential. Indeed, the FWHM of the longitudinal velocity distribution we obtain from Eq. (13) agrees very well with the estimate of the velocity FWHM we obtained above from the TDSE calculation. The case of the Coulomb potential is different. The coordinate distribution for the hydrogen atom in Fig. 6 is considerably wider than the distribution for the SR Yukawa potential, resulting in a smaller value of around 0.5 a.u. for the velocity FWHM in Table 1. Different estimates for the initial longitudinal velocity spread for Coulomb systems can be found in the literature, ranging from an FWHM of around 0.1 a.u. [74] to 0.4 a.u. [18]. Our FWHM estimate given in Table 1 seems to agree with the latter value. ## Conclusion In summary, we devised a procedure based on the correlation function analysis and employed it to study the ionization dynamics of three atomic targets: the SR Yukawa, hydrogen and Ar atoms. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Model & Coordinate FWHM (a.u.) & Velocity FWHM (a.u.), TDSE & Velocity FWHM (a.u.), SFA \\ \hline Yukawa & 5.9 & 0.91 & 0.89 \\ Hydrogen & 11 & 0.50 & \\ \hline \end{tabular} \end{table} Table 1: Estimates for the FWHMs of the coordinate and velocity distributions. The starting point of our analysis is the time-dependent wave function returned by a numerical solution of the TDSE. Our approach allows us to look closely at the early stages of the photo-electron evolution and to separate various components of the wave function which, at later stages, will contribute to distinct outcomes of the laser-atom interaction.
We achieve this result by reformulating the problem in terms of the conditional amplitudes, i.e., the amplitudes describing outcomes of measurements of different observables provided that electron is found in the ionized state after the end of the pulse. By choosing electron coordinate as such an observable we were able to track the motion of the ionized electron wave-packets basing on an _ab initio_ TDSE calculation. Our study demonstrates the somewhat limited character of the notion of a photo-electron trajectory for the description of the ionization process. The true photo-electron dynamics is more complex and resemble more a "quantum cloud" expansion. We demonstrate that the photo-electron wave-packets obtained in this way follow closely the CTMC trajectories at the initial stages of the evolution both for the SR and Coulomb systems. However, at the later stages of the evolution, the picture becomes more complicated due to the spread and interference of the wave-packets originated at different field maxima. In the present work we considered these effects using a quantum mechanical approach based on the correlation function analysis. Alternative description might be based on incorporating these effects into the trajectory based methods. Interference effects can be included in the consideration following the prescriptions of the QTMC method [34; 35] or the semi-classical two-step model for strong-field ionization [36], which supply each trajectory with a phase accumulated along the trajectory and allow thus to describe the interference effects. The dispersion effect could be described analogously to the description we obtained in the SFA-based model we presented above, where this effect manifests itself as a spread of the wave-packet moving along the classical trajectory. In the simplest case given by the Eq. (24) the spread does not depend on the core potential and is described by a simple analytical formula. Such a trajectory based description of the interference and dispersion effects possesses the advantage of the trajectory based methods, since it can be applied for more complex targets such as molecules, for which use of the TDSE based technique becomes prohibitively computationally demanding. Our approach also allowed us to obtain information about coordinate and velocity electron distributions at the tunnel exit. ###### Acknowledgements. This work was supported by the Institute for Basic Science grant (IBS-R012-D1) and the National Research Foundation of Korea (NRF), grant funded by the Korea government (MIST) (No. 2022R1A2C3006025). Computational works for this research were performed on the IBS Supercomputer Aleph in the IBS Research Solution Center. IAI wishes to thank the Australian National University for hospitality. ## Appendix: Dispersion of wave packets We use the well-known expression for the SFA ionization amplitude [23]: \[a_{\mathbf{p}}(t)=-i\int\limits_{0}^{t}\exp\left\{-i\int\limits_{\tau}^{t}\frac{( \mathbf{p}+\mathbf{A}(u))^{2}}{2}du+I_{p}\tau\right\}\langle\mathbf{p}|\hat{H}_{\rm int}( \tau)\phi_{0}\rangle\ d\tau\, \tag{14}\] where \(\mathbf{A}(t)\) is the vector potential (1) of the pulse. The physical meaning of the amplitude (16) is that it gives us the momentum space wave-function of the ionized wave-packet. Fourier transform of \(a_{\mathbf{p}}(t)\) will give us then coordinate wave-function of the ionized wave-packet: \[\Psi_{ion}(\mathbf{r},t)=\int e^{i\mathbf{p}\cdot\mathbf{r}}a_{\mathbf{p}}(t)\ d\mathbf{p}. 
\tag{15}\] Below, we will be interested in the absolute value \(|\Psi_{ion}(\mathbf{r},t)|\) of the coordinate wave-function (15). Choice of the length or velocity gauges to describe the atom-field interaction is, therefore, immaterial for our purposes and we use the velocity gauge in Eq. (14) which makes the formulas somewhat simpler. We consider expression (14) for the ionization amplitude for times \(t\) within the interval \((0,T_{1})\) of the laser pulse duration. To evaluate expression (14) we employ the SPM, supplemented with the rule used in [27] that for \(t<T_{1}\) we need to consider only the saddle points \(t_{s}\) with \(Re(t_{s})<t\) for the evaluation of the integral in Eq. (14). Following the standard prescriptions of the SPM we obtain: \[a_{\mathbf{p}}(t)=\sum_{Re(t_{s})<t}(-i)e^{-\frac{i\pi}{4}}\sqrt{\frac{2\pi}{S^{ \prime\prime}(t_{s},t,\mathbf{p})}}e^{-iS(t_{s},t,\mathbf{p})}\langle\mathbf{p}|\hat{H}_{ \rm int}(t_{s})\phi_{0}\rangle\, \tag{16}\] where \(t_{s}\) are saddle-points of the integrand in Eq. (14), satisfying the SPM equation \((\mathbf{p}+\mathbf{A}(t_{s}))^{2}+2I_{p}=0\) and: \[S(t_{s},t,\mathbf{p})=\int\limits_{t_{s}}^{t}\left(\frac{(\mathbf{p}+\mathbf{A}(u))^{2}}{2}+I_ {p}\right)\ du \tag{17}\] We can compute the Fourier transform defining the coordinate wave-function (15) using the SPM again. One can see that it is the region of small momenta \(\mathbf{p}\) that dominate the integral in Eq. (15). It is sufficient, therefore, to expand the action in Eq. (17) in powers of \(\mathbf{p}\) keeping only the constant, linear and quadratic terms: \[S(t_{s},t,\mathbf{p})=\alpha^{s}(t)p^{2}+\mathbf{\beta}^{s}(t)\cdot\mathbf{p}+\gamma^{s}(t) \tag{18}\] A simple integration than will give us for the coordinate wave-function of the ionized wave-packet: \[\Psi_{ion}(\mathbf{r},t)=\sum_{Re(t_{s})<t}\frac{C_{s}}{\alpha^{s}(t)^{\frac{3}{2}} }\exp\left\{\left(i\frac{(\mathbf{\beta}^{s}(t)-\mathbf{r})^{2}}{4\alpha^{s}(t)} \right)-i\gamma^{s}(t)\right\}\,, \tag{19}\] where we have absorbed all constant factors into the factors \(C_{s}\). To see the physical meaning of Eq. (19) we have to take a closer look at the coefficients of the expansion (18). The integration path in Eq. (17) can be chosen to consist of two segments: a vertical line \((t_{s},Re(t_{s}))\) descending on the real time axis and a horizontal segment \((Re(t_{s}),t)\). We can then represent the action in Eq. (17) as a sum \(S(t_{s},t,\mathbf{p})=S(t_{s},Re(t_{s}),\mathbf{p})+S(Re(t_{s}),t,\mathbf{p})\), where \[S(t_{s},Re(t_{s}),\mathbf{p})=\tilde{\alpha}_{1}^{s}(\mathbf{p})\mathbf{p}^{2}+\tilde{\bm {\beta}}_{1}^{s}(\mathbf{p})\cdot\mathbf{p}+\tilde{\gamma}_{1}^{s}(\mathbf{p})\, \tag{20}\] with \[\tilde{\alpha}_{1}^{s} = -i\frac{Im(t_{s})}{2}\] \[\tilde{\mathbf{\beta}}_{1}^{s} = \int\limits_{t_{s}}^{Re(t_{s})}\mathbf{A}(u)\ du\] \[\tilde{\gamma}_{1}^{s} = -iIm(t_{s})I_{p}\, \tag{21}\] and: \[S(Re(t_{s},t),\mathbf{p})=\tilde{\alpha}_{2}^{s}(\mathbf{p})\mathbf{p}^{2}+\tilde{\mathbf{ \beta}}_{2}^{s}(\mathbf{p})\cdot\mathbf{p}+\tilde{\gamma}_{2}^{s}(\mathbf{p})\, \tag{22}\] with \[\tilde{\alpha}_{2}^{s} = \frac{t-Re(t_{s})}{2}\] \[\tilde{\mathbf{\beta}}_{2}^{s} = \int\limits_{Re(t_{s})}^{t}\mathbf{A}(u)\ du\] \[\tilde{\gamma}_{2}^{s} = (t-Re(t_{s}))I_{p}. \tag{23}\] It is customary to refer to the action (20) as describing under-the-barrier, and (22) as describing post-ionization motion. 
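The saddle points entering Eq. (16) are straightforward to obtain numerically. A minimal sketch follows; it assumes, purely for illustration, a monochromatic vector potential \(A(t)=A_0\cos(\omega t)\) along \(z\) in place of the actual pulse (1), placeholder values of \(A_0\), \(\omega\) and \(I_p\), and a plain Newton iteration in the complex time plane started near a field maximum.

```python
import numpy as np

# Assumed model field and placeholder parameters (atomic units, illustration only).
A0, omega, Ip = 1.0, 0.057, 0.5

def A(t):                      # valid for complex t as well
    return A0 * np.cos(omega * t)

def dA(t):
    return -A0 * omega * np.sin(omega * t)

def saddle_equation(ts, pz, pperp2=0.0):
    """f(ts) = (p + A(ts))^2 + 2 Ip; its complex zeros are the SPM saddle points."""
    return (pz + A(ts))**2 + pperp2 + 2.0 * Ip

def find_saddle(pz, ts0, pperp2=0.0, tol=1e-12, maxit=100):
    """Newton iteration in the complex time plane, starting from the guess ts0."""
    ts = complex(ts0)
    for _ in range(maxit):
        f = saddle_equation(ts, pz, pperp2)
        df = 2.0 * (pz + A(ts)) * dA(ts)
        step = f / df
        ts -= step
        if abs(step) < tol:
            break
    return ts

# Keldysh parameter of this model field: gamma = omega*sqrt(2*Ip)/E0 = sqrt(2*Ip)/A0.
gamma = np.sqrt(2.0 * Ip) / A0
# Initial guess near a field maximum, with Im(omega*ts) of order arcsinh(gamma).
ts = find_saddle(pz=0.1, ts0=(np.pi / 2 + 1j * np.arcsinh(gamma)) / omega)
print("t_s =", ts, "  residual:", abs(saddle_equation(ts, 0.1)))
```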
Of course, as we mentioned in the Introduction, this division is arbitrary to a degree, since it corresponds to a particular choice of the integration path in Eq. (17) which is not unique. The tilted coefficients in Eq. (21) and Eq. (23) are themselves functions of \(\mathbf{p}\). This dependence is due to the dependence of the saddle point position \(t_{s}\) on the momentum. To obtain the untilted quantities in Eq. (18) we must renormalize these coefficients, by expanding \(t_{s}\) in powers of momentum components and keeping only constant, linear and quadratic terms in the resulting expressions. Corresponding formulas become rather bulky and add little to understanding the physical picture, we will not present them here. In the actual calculation reported below we performed all the necessary re-expansions numerically. Before presenting results of this calculation we will first illustrate the physical meaning of Eq. (19) by making a few simplifying assumptions leading to more transparent formulas. Let us suppose first that we can drop under-the-barrier part of the action and substitute the expressions for the tilted coefficients from Eq. (23) in Eq. (19). We obtain then: \[\Psi_{ion}(\mathbf{r},t)=\sum_{Re(t_{s})<t}\frac{2^{\frac{3}{2}}C_{s}}{(t-Re(t_{s} )))^{\frac{3}{2}}}\exp\left\{\left(i\frac{(\mathbf{\beta}^{s}(t)-\mathbf{r})^{2}}{2(t- Re(t_{s}))}\right)-i\gamma^{s}(t)\right\}\,, \tag{24}\] with \(\mathbf{\beta}_{s}(t)\) given by the second of equations (23). It is not difficult to see that for each \(t_{s}\) exponential factor in Eq. (24) describes evolution of an electron prepared at the moment \(t=Re(t_{s})\) in the state described by a delta function \(\delta(\mathbf{r})\), which evolves subsequently under the action of the laser field only. For each \(t_{s}\) the corresponding evolving wave-packet is weighted by a factor \(e^{-i\gamma_{s}}\). Since \(\gamma_{s}\) is complex, the exponential function \(e^{-i\gamma_{s}}\) is sharply peaked around the field maximum nearest to \(t_{s}\) (it is this factor, in fact, that gives the characteristic exponential dependence of the ionization probability on the field strength in the ADK and similar formulas). We obtain thus a simple picture of very narrow electron wave-packets created at times near the local field maxima and propagating subsequently under the action of the laser field. Including the under-the-barrier part of the action makes this picture more realistic. Assuming that \(\alpha_{s}(t)=\alpha_{1}^{s}(t)+\alpha_{2}^{s}(t)\), where \(\alpha_{1}^{s}(t)\) and \(\alpha_{2}^{s}(t)\) are given by Eq. (21) and Eq. (23), we obtain from Eq. (19): \[\Psi_{ion}(\mathbf{r},t)=\sum_{Re(t_{s})<t}\frac{2^{\frac{3}{2}}C_{s}}{\left(t-Re(t _{s})-iIm(t_{s}))\right)^{\frac{3}{2}}}\exp\left\{\left(i\frac{(\mathbf{\beta}^{s}( t)-\mathbf{r})^{2}}{2(t-Re(t_{s})-iIm(t_{s}^{0}))}\right)-i\gamma^{s}(t)\right\}\,, \tag{25}\] where \(t_{s}^{0}\) is the zero order term in the expansion of \(t_{s}\) in powers of momentum. The role that this correction plays in Eq. (25) is quite clear. It is easy to see that exponential factor in Eq. (25) describes now the spread and motion of the wave-packet prepared initially in a state describe by a Gaussian \(e^{-cr^{2}}\) with \(c=\frac{1}{2Im(t_{s})}\), and evolving subsequently under the action of the laser field only. Role of the under-the-barrier part of the coefficient \(\alpha\) consists, therefore, in giving a non-zero initial spread to the ionized wave-packet. 
Similarly, one can see that the under-the-barrier part of the coefficient \(\beta\) leads to a non-zero initial value of the coordinate. Results for the coordinate wave-function of the ionized electron provided by Eq. (19), with the terms of the expansion (18) obtained systematically from Eq. (21) and Eq. (23), are shown in Fig. 7; the re-expansions needed to compute these coefficients were done numerically. The figure shows the absolute value \(|\Psi_{ion}(\mathbf{r},t)|\) along the line \(\mathbf{r}=(0,0,z)\), computed for the field parameters used in Fig. 2a. We note a slight offset of the \(z\)-coordinate of the maxima of the correlation function at the starting points of the classical trajectories. This offset is due to the fact that the classical trajectory calculations use the FDM initial coordinate values. The initial coordinates in the SFA calculation, on the other hand, are essentially determined by the second of equations (21), which does not include the ionic potential, and therefore differ from the FDM values. One can see, nevertheless, that the plot shown in Fig. 7 reproduces qualitatively the main features of the TDSE correlation pattern. The wave-packets initially follow the classical trajectories and broaden subsequently. We also see structures appearing at the latest stages of the evolution, for times near the end of the pulse, which are reminiscent of the structures seen in the TDSE correlation patterns. These structures disappear if we retain in Eq. (19) only the contribution of the main field maximum; they are, therefore, a manifestation of the interference of the wave-packets born at different field maxima.
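To make the structure of Eqs. (24)-(25) explicit, the fragment below evaluates such a sum of spreading Gaussian wave-packets in one dimension. The ionization times, the field parameters and the simplified phase \(\gamma^{s}(t)\) (taken here directly from Eqs. (21) and (23), without the re-expansions) are illustrative placeholders; this is a sketch of the mechanism behind Fig. 7, not the calculation that produced it. With two ionization times included, the interference referred to above appears directly.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder field and target parameters (atomic units, illustration only).
A0, omega, Ip = 1.0, 0.057, 0.5
T = 2.0 * np.pi / omega                      # one optical cycle

def A(t):
    return A0 * np.cos(omega * t)

def beta(t, t_re):
    """Post-ionization drift beta(t) = int_{Re ts}^{t} A(u) du for this model A(t)."""
    return (A0 / omega) * (np.sin(omega * t) - np.sin(omega * t_re))

# Hand-picked "saddle points": real parts near two consecutive field maxima,
# imaginary parts of the order of arcsinh(gamma)/omega (placeholder values).
saddles = [complex(0.25 * T, 15.0), complex(0.75 * T, 15.0)]

def psi_ion(z, t):
    """Eq. (25)-type sum of spreading Gaussian wave-packets along z."""
    total = np.zeros_like(z, dtype=complex)
    for ts in saddles:
        if ts.real >= t:                     # only saddles with Re(ts) < t contribute
            continue
        tau = (t - ts.real) - 1j * ts.imag   # complex "elapsed time" of Eq. (25)
        gamma_s = (t - ts.real) * Ip - 1j * ts.imag * Ip  # simplified phase, Eqs. (21)+(23)
        total += tau**-1.5 * np.exp(1j * (beta(t, ts.real) - z)**2 / (2.0 * tau)
                                    - 1j * gamma_s)
    return total

z = np.linspace(-150.0, 150.0, 2000)
for t in (0.5 * T, 0.9 * T):
    plt.plot(z, np.abs(psi_ion(z, t)), label=f"t = {t / T:.1f} T")
plt.xlabel("z (a.u.)")
plt.ylabel(r"$|\Psi_{ion}(z,t)|$")
plt.legend()
plt.show()
```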
2307.13629
White dwarf spectral type-temperature distribution from Gaia-DR3 and the Virtual Observatory
The characterization of white dwarf atmospheres is crucial for accurately deriving stellar parameters such as effective temperature, mass, and age. We aim to classify the population of white dwarfs up to 500 pc into hydrogen-rich or hydrogen-deficient atmospheres based on Gaia spectra and to derive an accurate spectral type-temperature distribution of white dwarfs as a function of the effective temperature for the largest observed unbiased sample of these objects. We took advantage of the recent Gaia low-resolution spectra available for 76,657 white dwarfs up to 500 pc. We calculated synthetic J-PAS narrow-band photometry and fitted the spectral energy distribution of each object with up-to-date models for hydrogen-rich and helium-rich white dwarf atmospheres. We estimated the probability for a white dwarf to have a hydrogen-rich atmosphere and validated the results using the Montreal White Dwarf Database. Finally, precise effective temperature values were derived for each object using La Plata evolutionary models. We have successfully classified a total of 65,310 white dwarfs into DAs and non-DAs with an accuracy of 94%. An unbiased subsample of nearly 34,000 objects was built, from which we computed a precise spectral distribution spanning an effective temperature range from 5,500 to 40,000 K, while accounting for potential selection effects. Some characteristic features of the spectral evolution, such as the deficit of helium-rich stars at T_eff $\approx$ 35,000-40,000 K and in the range 22,000 < T_eff < 25,000 K, as well as a gradual increase from 18,000 K to T_eff $\approx$ 7,000 K, where the non-DA stars percentage reaches its maximum of 41%, followed by a decrease for cooler temperatures, are statistically significant. These findings will provide precise constraints for the proposed models of spectral evolution.
S. Torres, P. Cruz, R. Murillo-Ojeda, F. M. Jiménez-Esteban, A. Rebassa-Mansergas, E. Solano, M. E. Camisassa, R. Raddi, J. Doliguez Le Lourec
2023-07-25T16:30:18Z
http://arxiv.org/abs/2307.13629v1
# White dwarf spectral type-temperature distribution from _Gaia_-Dr3 and the Virtual Observatory ###### Abstract Context:The characterization of white dwarf atmospheres is crucial for accurately deriving stellar parameters such as effective temperature, mass, and age. However, the inclusion of physical processes like convective mixing and convective dilution in current white dwarf atmospheric models predicts a spectral evolution of these objects. To constrain these models, accurate observational data and analysis are necessary. Aims:To classify the population of white dwarfs up to 500 pc into hydrogen-rich or hydrogen-deficient atmospheres based on _Gaia_ spectra and to derive an accurate spectral type-temperature distribution, i.e., the ratio between the number of non-DAs to the total number of white dwarfs as a function of the effective temperature for the largest observed unbiased sample of these objects. Methods:We took advantage of the recent _Gaia_ low-resolution spectra available for 76 657 white dwarfs up to 500 pc. We calculated synthetic J-PAS narrow-band photometry and fitted the spectral energy distribution of each object with up-to-date models for hydrogen-rich and helium-rich white dwarf atmospheres. We estimated the probability for a white dwarf to have a hydrogen-rich atmosphere and validated the results using the Montreal White Dwarf Database. Finally, precise effective temperature values were derived for each object using La Plata evolutionary models. Results:We have successfully classified a total of 65 310 white dwarfs (57 155 newly classified objects) into DAs and non-DAs with an accuracy of 94%. An unbiased subsample of nearly 34 000 objects was built, from which we computed a precise spectral distribution spanning an effective temperature range from 5 500 to 4 0000 K, while accounting for potential selection effects. Conclusions:Some characteristic features of the spectral evolution, such as the deficit of helium-rich stars at \(T_{\rm eff}\approx 35\,000-40\,000\) K and in the range \(22\,000\lesssim T_{\rm eff}\leq 25\,000\) K, as well as a gradual increase from 18 000 K to \(T_{\rm eff}\approx 7\,000\) K, where the non-DA stars percentage reaches its maximum of 41%, followed by a decrease for cooler temperatures, are statistically significant. These findings will provide precise constraints for the proposed models of spectral evolution. Conclusions: ## 1 Introduction The _Gaia_ mission has revealed important features of the white dwarf population with unprecedented precision. That is the case, for instance, of the existence of two well-defined branches in the color-magnitude diagram (roughly between \(0.0\lesssim G_{\rm BP}-G_{\rm RP}\lesssim 0.5\)) referred to as the A and B branches, which correspond to the majority of white dwarfs with hydrogen-rich and helium-rich atmospheres (see Gaia Collaboration et al. 2018). The most plausible explanation to reproduce the B branch invokes the presence of small amount of hydrogen or carbon into helium-dominated atmospheres (Bergeron et al. 2019; Camisassa et al. 2023; Blouin et al. 2023). These models are based on well-studied physical processes that can alter the composition of the outer layers, such as convective mixing and convective dilution, among others (e.g. Rolland et al. 2018, and references therein). 
The specific characteristics of each model, such as the hydrogen content, the depth of the convective zone as a function of the effective temperature, or even the possibility of accreting material from surrounding asteroids, give rise to different channels of formation and evolution of white dwarf spectral types1(e.g. Rolland et al. 2018; Ourique et al. 2018; Cunningham et al. 2019; Cunningham et al. 2020; Bedard et al. 2020). Therefore, accurate observational data is required to constrain these models. Footnote 1: White dwarfs are classified as DAs or non-DAs based on the presence or absence of hydrogen lines in their spectra, respectively. This last group is formed by those who exhibits helium lines (DB), metal lines (DZ), carbon lines (DQ), or no lines at all (DC), among others (Sion et al. 1983). The proper explanation for the formation of the _Gaia_ A and B branches extends beyond the effective temperature range of these branches, encompassing a broader issue. A crucial observational factor in analyzing the spectral evolution of white dwarfs is the ratio of non-DA to DA stars as a function of effective temperature2. Extensive efforts have been made since the pioneering work of Sion (1984) to obtain statistically significant spectral distributions (e.g. Tremblay and Bergeron, 2008). However, the advent of large spectroscopic and photometric surveys such as the Sloan Digital Sky Survey (SDSS; York et al., 2000), _Galaxy Evolution Explorer_(_GALEX_; Morrissey et al., 2007), and _Gaia_(Gaia Collaboration et al., 2016) has significantly increased both the quantity and quality of available data (e.g. Ourique et al., 2018; Genest-Beaulieu and Bergeron, 2019; Blouin et al., 2019; Cunningham et al., 2020; McCleery et al., 2020; Lopez-Sanjuan et al., 2022). Even though, complete spectroscopic samples have been limited to a distance of up to 40 pc, or in other cases, magnitude-selection effects introduce significant biases in the final distribution. Footnote 1: [http://www.cfa.harvard.edu/](http://www.cfa.harvard.edu/) Nevertheless, we can leverage the exceptional quality of astrometric and photometric data provided by the _Gaia_ mission. The third data release (DR3) of _Gaia_ includes low-resolution spectra for nearly 100 000 white dwarfs (Gaia Collaboration et al., 2022), making it the largest sample of white dwarfs available for analysis. In our recent study (Jimenez-Esteban et al., 2023), we classified 8 150 white dwarfs within a nearly volume-complete 100 pc sample into DA or non-DA categories. The achieved accuracy of 90% was remarkable and allowed us to derive a detailed spectral distribution within the range of temperatures from 5 500 K up to 23 000 K. In this paper, we extend our previous analysis to a distance of 500 pc, significantly increasing the expected number of white dwarfs, particularly for hotter effective temperatures. Our goal is to derive for the first time the spectral distribution in the entire range of temperatures between 5 500 K up to 40 000 K, where spectral evolution is significant. The paper is structured as follows: in Section 2, we describe the selection procedure used to obtain our white dwarf sample from _Gaia_ data. We provide a summary of the main steps of our classification methodology in Section 3. Once our sample is classified and validated, Section 4 focuses on analyzing selection effects and addressing the completeness correction of the sample. In Section 5, we present our spectral distribution, discuss our findings, and compare them to previous works. 
We summarize our key results and draw our main conclusions in Section 6. ## 2 The _Gaia_-DR3 white dwarf sample We selected our objects from _Gaia_-DR3 catalogue3 following the criteria used in Jimenez-Esteban et al. (2023) but extended up to 500 pc: Footnote 3: [http://gea.esac.esa.int/archive/](http://gea.esac.esa.int/archive/) * \(\omega-3\sigma_{\omega}\geq 2\) mas and \(\omega/\sigma_{\omega}\geq 10\) * \(F_{\rm BP}/\sigma_{\rm F_{\rm BP}}\geq 10\) and \(F_{\rm BP}/\sigma_{\rm F_{\rm BP}}\geq 10\) * RUVE-1.4; where RUVE stands for Renormalised Unit Weight Error preventing against poor astrometric solutions (Lindegren et al., 2021). * \(|C^{*}|<3\sigma_{C^{*}}\); where \(|C^{*}|\) is an estimate of the BP and RP flux excess factor and \(\sigma_{C^{*}}\) its scatter following the prescription by Riello et al. (2021). Selected objects were corrected from extinction following the 3D interstellar Galactic extinction maps from Lallement et al. (2022)4. In principle, we selected only those objects falling below the 0.45 M\({}_{\odot}\) cooling track on the Hertzsprung-Russell (HR) diagram. Additionally, as the atmospheric models we used (see Section 3) provide a reliable estimate of the effective temperature in the range from 5 500 K up to 40 000 K, we chose those objects with unreddened color between \(-0.5<\rm BP-RP<0.86\). A total number of 100 173 objects were selected, from which 76 657 have _Gaia_ low-resolution spectra available. Footnote 4: [https://stillism.obspm.fr/](https://stillism.obspm.fr/) Figure 1 displays the distance distribution (left panel) and cumulative distribution of apparent magnitude \(G\) (right panel) for our entire sample (red histogram) and white dwarfs with _Gaia_ spectra (blue histogram). The use of inverse parallax as a distance estimator, combined with a parallax error threshold of less than 10%, introduces negligible discrepancies (less than \(\approx 4\%\)) compared to other distance estimators (Bailer-Jones et al., 2021). Most of our selected white dwarfs (73%) are within 250 pc, with a long tail extending up to 500 pc. The cumulative \(G\) magnitude distribution reveals a deficit of objects starting at \(G\sim 20\) mag for the entire sample and around \(G\sim 19.5\) mag for white dwarfs with _Gaia_ spectra. Although the nominal limiting _Gaia_ magnitude is \(\sim 21.0\), we adopted a conservative value of \(G_{\rm lim}=19.5\) mag for our completeness analysis (see Section 4). ## 3 White dwarf spectral classification For those sources of our selected sample with available _Gaia_ spectra, we followed the same procedure as described in Figure 1: Distance distribution (left panel) and apparent \(G\) magnitude cumulative distribution (right panel) for the entire sample of white dwarfs that fulfil our selection criteria (red histogram) and for the subsample of objects that have _Gaia_ spectra (blue histogram). A constant cumulative slope is shown (black line) as indicative of the completeness of the sample. Jimenez-Esteban et al. (2023). A brief description of the methodology for classifying white dwarfs into DA and non-DA types used in that work is provided as follows. First, for each white dwarf of our sample with available _Gaia_ spectrum we determined, by means of the Python package _GaiaXPy5_ and taking into account all the coefficients of the _Gaia_ spectrum, the synthetic Javalambre-Physics of the Accelerating Universe Astrophysical Survey (J-PAS; Benierz et al. 2014) filter system (Marin-Franch et al. 2012) photometry. 
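For reference, the quality cuts listed at the beginning of this section translate directly into a table filter. A schematic version is given below; it assumes a locally stored table of Gaia-DR3 sources with the standard archive column names, and that the corrected BP/RP flux excess factor \(C^{*}\) and its scatter have been precomputed following Riello et al. (2021) (the column names 'cstar' and 'sigma_cstar' are our own placeholders). The extinction correction and the colour/absolute-magnitude selections are not included in this sketch.

```python
import numpy as np
import pandas as pd

def select_sample(gaia: pd.DataFrame) -> pd.DataFrame:
    """Quality cuts of Sect. 2 applied to a table of Gaia-DR3 sources.
    Standard archive column names are assumed; 'cstar' and 'sigma_cstar' are
    placeholder columns holding the corrected BP/RP flux excess factor C* and
    its scatter, precomputed following Riello et al. (2021)."""
    plx, e_plx = gaia["parallax"], gaia["parallax_error"]
    cuts = (
        (plx - 3.0 * e_plx >= 2.0)                            # within ~500 pc
        & (plx / e_plx >= 10.0)                               # parallax S/N
        & (gaia["phot_bp_mean_flux_over_error"] >= 10.0)      # BP flux S/N
        & (gaia["phot_rp_mean_flux_over_error"] >= 10.0)      # RP flux S/N
        & (gaia["ruwe"] < 1.4)                                # good astrometric solution
        & (np.abs(gaia["cstar"]) < 3.0 * gaia["sigma_cstar"]) # flux excess cut
    )
    sample = gaia[cuts].copy()
    sample["dist_pc"] = 1000.0 / sample["parallax"]           # inverse-parallax distance
    return sample
```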
We focused on those filters covering the range from 4000 to 9590.54 A, and discarding those filters with effective wavelength shorter than 4000 A. Second, for each object we built a spectral energy distribution (SED) using the derived J-PAS photometry. Although most of the SEDs have 56 photometric points, in some noisy spectra the number of points is lower, due to our threshold in the photometric error of 10% to each individual photometric measurement obtained with _GaiaXPy_. Footnote 5: [https://www.cosmos.esa.int/web/gaia/gaiaxpy](https://www.cosmos.esa.int/web/gaia/gaiaxpy) We analyzed 67 340 new white dwarfs spectra, not previously studied in our 100 pc sample (Jimenez-Esteban et al. 2023). For those objects with more than 4 photometric points (57 155), their SEDs were fitted using either pure hydrogen white dwarf atmospheric models (DA) or models with helium and a small trace of hydrogen (non-DA, log N(H)/N(He) = -6). Both DA and non-DA models covered the temperature range of interest for this study (5 500 to 40 000 K) and surface gravities from 7 to 9 dex (see Section 3.1 in Jimenez-Esteban et al. 2023 for detailed model information). The fitting process was performed using the Virtual Observatory Spectral energy distribution Analyzer6(VOSA; Bayo et al. 2008), a powerful tool developed by the Spanish Virtual Observatory. Among the new 57 155 analyzed objects, only 3 had Vgf\({}_{b}\)7 greater than 15 and were disregarded. For each object, two reduced chi-squared values (\(\chi^{2}_{\rm DA}\) and \(\chi^{2}_{\rm non-DA}\)) were obtained. Finally, the estimator of the probability of being a DA white dwarf (\(P_{\rm DA}\)) was defined as Footnote 6: [http://svo2.cab.inta-csic.es/theory/vosa](http://svo2.cab.inta-csic.es/theory/vosa) Footnote 7: Vgf\({}_{b}\): Modified reduced \(\chi^{2}\) calculated by forcing \(\sigma(F_{\rm obs})\) to be larger than \(0.1\times F_{\rm obs}\), where \(\sigma(F_{\rm obs})\) is the error in the observed flux (\(F_{\rm obs}\)). Vgf\({}_{b}\) smaller than 10–15 is often perceived as a good fit. \[P_{\rm DA}=\frac{1}{2}\left(\frac{\chi^{2}_{\rm non-DA}-\chi^{2}_{\rm DA}}{\chi^ {2}_{\rm non-DA}+\chi^{2}_{\rm DA}}+1\right), \tag{1}\] where we classified an object as DA if \(P_{\rm DA}\geq 0.5\), otherwise as a non-DA. We validated our classification procedure by means of the spectroscopically labelled white dwarf sample of the Montreal White Dwarf Database8(MWDD; Dufour et al. 2017). A total of 8 482 objects from the MWDD were used in our validating test (including those at distances closer than 100 pc). We adopted as DA class all MWDD objects whose primary spectral type is DA regardless of secondary types. The rest of the objects were considered as non-DA. In the left panel of figure 2 we show the distribution of the probability of being a DA, \(P_{\rm DA}\), for white dwarfs labelled as DA (blue histogram) or non-DA (red histogram) in the MWDD. The distribution confirms that the adopted threshold at \(P_{\rm DA}=0.5\) effectively separates both populations and that the percentage of misclassified object is reasonable small, \(\lesssim 7\)%. In the right panel of Fig. 2 we show the confusion matrix, where rows represent the number of already labelled spectral objects, while columns are the prediction of our classification (in parentheses the percentages with respect to the total population). 
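The classification step itself reduces to Eq. (1). A minimal sketch, assuming the two reduced chi-squared values returned by the DA and non-DA SED fits are already available for each source:

```python
import numpy as np

def p_da(chi2_da, chi2_nonda):
    """Probability estimator of Eq. (1), built from the reduced chi-squared of
    the pure-H (DA) and He-dominated (non-DA) SED fits; returns values in [0, 1]."""
    chi2_da = np.asarray(chi2_da, dtype=float)
    chi2_nonda = np.asarray(chi2_nonda, dtype=float)
    return 0.5 * ((chi2_nonda - chi2_da) / (chi2_nonda + chi2_da) + 1.0)

def classify(chi2_da, chi2_nonda, threshold=0.5):
    """Label a source 'DA' if P_DA >= threshold, otherwise 'non-DA'."""
    return np.where(p_da(chi2_da, chi2_nonda) >= threshold, "DA", "non-DA")

# A source fitted much better by the pure-H grid: P_DA ~ 0.83 -> DA.
print(p_da(1.2, 6.0))
print(classify([1.2, 8.0], [6.0, 1.5]))     # ['DA' 'non-DA']
```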
The results of the confusion matrix reveal that the performance of our spectral type estimator is excellent, as it is corroborated by the derived metrics9: accuracy of 0.94, F1-score of 0.96, recall of 0.94 and precision of 0.98. Footnote 8: [https://www.montrealwhitedwarfdatabase.org/](https://www.montrealwhitedwarfdatabase.org/) Footnote 9: For a definition of these parameters, please refer for instance to Appendix A from Echeverry et al. (2022) A last check was performed before applying our classification method to the observed _Gaia_ white dwarf sample. In Figure 3 we depicted the distribution of the \(G\) apparent magnitude for the entire sample of white dwarfs with _Gaia_ spectra (blue histogram) and those with spectral classification in MWDD (gray histogram). We verified that the MWDD sample covers the full range of magnitudes and closely resemble the observed _Gaia_ sample distribution. These facts guarantee the proper use of the MWDD sample for testing our classification method. Moreover, in Fig. 3, we present the magnitude distribution of the white dwarfs misclassified by our method (red histogram). As expected, the fainter the magnitude the larger the fraction of missclassified objects. Considering the percentage of error as a function of magnitude and extrapolating it to the _Gaia_ sample, we estimated that the error in the final classification should not exceed 10%. Figure 2: Top panel: probability distribution of being DA for the white dwarf labelled as DA or non-DA (blue and red histograms, respectively) in the MWDD. Bottom panel: confusion matrix of our estimator of being DA. Displayed values represent the total number of objects, while in brackets the percentages with respect to the total population. Once we have validated the reliability of our classification method, we applied it to the sample of white dwarfs with _Gaia_ spectra within 500 pc. A total of 65 310 white dwarfs (including those previously classified in Jimenez-Esteban et al., 2023) have been classified into the spectral types DA (50 189; 77%) and non-DA (15 121; 23%) with an accuracy of 0.94. The catalogue with the spectral classification is available online as supplementary material hosted by the journal and at _The SVO archive of Gaia white dwarfs_10 at the Spanish Virtual Observatory portal11. Footnote 10: [http://svocats.cab.inta-csci.es/svdw/index.php](http://svocats.cab.inta-csci.es/svdw/index.php) In Figure 4, we showed the HR diagram of the corresponding DA and non-DA populations. For visual reference we also showed the cooling sequence of a 0.58 M\({}_{\odot}\) DA white dwarf (Camisassa et al., 2016). The distribution of DA and non-DA white dwarfs clearly follows different tracks, with the latter group being, on average, less luminous for a certain color (in particular for BP - RP \(>\) 0) than the first one. It is worth mentioning that our classification into DA and non-DA groups is less model dependent than H- versus He-rich classification. However, such a classification is still needed for a correct astrophysical interpretation. Thus, higher resolution spectroscopy is required to flag misclassified objects, identify He-rich DAs, magnetic DAH with distorted Balmer jumps, as well as DZ and DQ with unusual colors, among other cases. ## 4 Completeness correction We analyzed the different selection effects and how they can be corrected or at least mitigated. 
First of all, as our sample is initially built as a magnitude-selected sample, objects fainter than a certain magnitude limit would be absent from our sample. A standard \(1/\mathcal{V}_{\mathrm{max}}\) method (Schmidt, 1968) will provide an unbiased estimate of the space density. However, it requires extra conditions of completeness and homogeneity to be fulfilled (see Geijo et al., 2006, and references therein). These conditions are not guaranteed in our sample, as the requirement to have a _Gaia_ spectrum adds a new selection effect and, mainly because DA and non-DA populations have, as previously stated, a different distribution in the color-magnitude diagram (see Fig.4). In order to avoid this bias that would distort the spectral distribution, we developed a strategy in which we consider objects contributing to the spectral distribution only if they are brighter than a certain magnitude and hotter than a certain temperature (color). We adopted the cooling sequence for a 1.05 M\({}_{\odot}\) white dwarf as our faint limiting region. For a given distance of the white dwarf, the _Gaia_ limiting magnitude we adopted \(G_{\mathrm{lim}}=19.5\) (see Section 2) will fix the absolute observable magnitude limit. The corresponding color at this magnitude for the 1.05 M\({}_{\odot}\) white dwarf cooling sequence delimits the possible contribution to the spectral distribution. Only objects with a bluer color than this value will contribute, while redder objects will be disregarded. In the left panel of Figure 5, we present an example of this strategy. White dwarfs beyond a distance of 250 pc are highlighted in the _Gaia_ HR diagram. Only those objects (marked in blue) above the horizontal line are observable, and those to the left of the vertical line and above the 1.05 M\({}_{\odot}\) track are included in the final sample to construct our spectral distribution. Thus, we prevent non-DA objects (which, on average, are fainter than DAs for a given color; see Fig. 4) from being underestimated in the spectral distribution. It should be noted that this procedure automatically eliminates any massive white dwarf from our sample. However, their contribution is estimated to be less than \(\approx 3\%\) of the entire population (e.g. Kilic et al., 2020; Jimenez-Esteban et al., 2023). A second important source of incompleteness comes from the fact that not all white dwarf sources have an available _Gaia_ spectrum. Moreover, even those sources that have it may not have a good VOSA determination of the probability of being DA or non-DA. It is expected that the number of sources without a determination of this probability increases for fainter and distant objects. We take into account this fact by introducing a weight function that depends on the distance, \(d\), of the object and its specific location within the HR diagram, \(w(G_{\mathrm{BP}}-G_{\mathrm{RP}},\,M_{G},\,d)\). For a given source with parameters (\(G_{\mathrm{BP}}-G_{\mathrm{RP}},\,M_{G},\,d)_{0}\) we computed the number of sources, \(n_{\mathrm{sources}}\), inside a volume \(\mathcal{V}=\Delta(G_{\mathrm{BP}}-G_{\mathrm{RP}})\times\Delta M_{G}\times\Delta\) centered at the previous value and with \(\Delta(G_{\mathrm{BP}}-G_{\mathrm{RP}})=0.1\), \(\Delta M_{G}=0.1\) and \(\Delta d=50\) pc. Besides of objects with available \(Gaia\) spectra, \(n_{Gaia-sp}\), and the number of those who have a VOSA estimation of the probability of being DA, \(n_{VOSA-PDA}\). 
Assuming that the completeness weight function should be inversely proportional to the probability of an object of belonging to the final sample, a straightforward application of the Bayes' theorem for conditional probability leads to: \[w(G_{\mathrm{BP}}-G_{\mathrm{RP}},\,M_{G},\,d)=\left(\frac{n_{Gaia-sp}}{n_{ sources}}\times\frac{n_{VOSA-PDA}}{n_{Gaia-sp}}\right)^{-1}=\frac{n_{sources}}{n_{VOSA-PDA}} \tag{2}\] Finally, in addition to the selection function previously described (that is, we selected objects hotter than a given color determined by the 1.05 M\({}_{\odot}\) evolutionary track for a given distance), we considered objects located in the HR diagram below the cooling track for a 0.51 M\({}_{\odot}\) helium-rich white dwarf. This way, we avoid unresolved binary white dwarf systems and the contribution of white dwarfs evolved from binary evolution (Jimenez-Esteban et al., 2023). Furthermore, the existence of low-mass white dwarfs with helium-rich atmospheres has not been proven (see, for instance, Genest-Beaulieu & Bergeron, 2019; Battich et al., 2020). Thus, we adopted the 0.51 M\({}_{\odot}\)low-mass limit as a conservative criterion. For each object of our final sample we determined the effective temperature by interpolating the _Gaia_ photometry in the La Plata models. Those objects Figure 3: Distribution of \(G\) apparent magnitude for the entire sample of white dwarfs with _Gaia_ spectra (blue histogram), those with spectral classification in MWDD (gray histogram) and those of the previous sample missclassified by our method (red histogram). classified as DA were interpolated in the models of Camisassa et al. (2016), while for those labelled as non-DA, we used the hydrogen-deficient cooling models of Camisassa et al. (2017), in both cases for carbon-oxygen-core white dwarfs (see Jimenez-Esteban et al. 2023, for details). Atmospheric models where those used in the SED analysis (see Section 3.1 from Jimenez-Esteban et al. 2023), i.e., Koester's models with pure hydrogen composition for DAs and helium with a small trace of hydrogen (log N(H)/N(He) = -6) for non-DAs (Koester 2010). It is worth noting here that recent studies have emphasized the importance of considering the carbon content in non-DA atmospheres (Camisassa et al. 2023; Blouin et al. 2023). Assuming the maximum non-observable carbon enrichment prescription from Camisassa et al. (2023), i.e. carbon sequence -1 dex, the difference with respect to a pure helium model is \(\sim 10\)% at 12 000 K, and approximately 5% at 6 000 K. The differences are not larger than 1 500 K, which corresponds to the bin width of our spectral distribution. Thus, no major effects are expected in this regard. Our final sample to estimate the spectral fraction, consisting of 33 997 white dwarfs, is shown in Figure 5, with 25 984 (76.4%) classified as DAs and 8 013 (23.6%) as non-DAs. ## 5 The spectral type-temperature distribution The spectral type-temperature distribution, \(f\), is defined as the ratio of weighted non-DA white dwarfs, to the total number of weighted objects, \(N_{w}\), per effective temperature interval. We adopted that the contribution of each object to its temperature bin depends on its weight function, \(w\), and the probability of classification, \(P_{i}\) (that is \(P_{\rm DA,i}\) for DAs, \(1-P_{\rm DA,i}\) for non-DAs). 
Hence, the weighted number of objects is defined as: \[N_{w}=\sum_{i}^{N}w(G_{\rm BP}-G_{\rm RP},\ M_{G},\ d)_{i}\times P_{i}, \tag{3}\] where \(N\) is the number of objects in that interval, \(w\) the weighted function and \(P_{i}\) the probability aforementioned. Error bars were estimated taking into account Poissonian error and the corresponding object weight, that is \(\sigma_{f}=\sqrt{f\times(1-f)/N_{W}}\). In Figure 6 we displayed our final spectral distribution (black solid circles and black line). To evaluate the extent of the completeness correction introduced in our sample, we included the ratio distribution of the entire classified white dwarf population (raw sample consisting of 65 310 objects) when no selection Figure 4: _Gaia_ HR diagram for the population of white dwarfs classified by our probability estimator as DA (left panel) and non-DA (right panel). As a visual reference, we plotted the cooling sequence of a 0.58 M\({}_{\odot}\) DA white dwarf according to La Plata models. Figure 5: _Left panel:_Gaia_ color-magnitude diagram of our population of DA and non-DA white dwarfs (yellow dots). Highlighted in blue are those objects at distances \(d>250\) pc. Assuming a limiting magnitude of \(G_{\rm bin}=19.5\), only those brighter than \(M_{G}<12.5\) (horizontal red line) are observable. The cooling track for a 1.05 M\({}_{\odot}\)white dwarf (black line) is adopted as our lower selection function limit. For a given distance, only objects (marked in dark blue) at the left of the corresponding color value (vertical red line) contribute to the final spectral distribution. Right panel: highlighted in red are the white dwarfs selected for building our spectral distribution. Objects above or below the cooling track for a 0.51 M\({}_{\odot}\) helium-rich and a 1.05 M\({}_{\odot}\) hydrogen-rich white dwarf, respectively, were discarded. function is applied at all (blue open triangles). For comparative purposes, we also plotted the spectral distribution for the sample of 100 pc (red circles; Jimenez-Esteban et al., 2023). The analysis of the different spectral distributions revealed that the effects due to completeness correction are of minor order. The general trend is practically coincident, and only small discrepancies appear for the coolest bin, where error bars underestimate the fact that the classification is less certain. In any case, we can conclude that our weighted spectral distribution provides a robust estimate of the ratio of DA versus non-DA white dwarfs. In Figure 7, we show a comparison of the spectral distribution found in this work (black solid circles) with some of the most recent ratio distributions found in the literature. The first remarkable characteristic of the spectral distribution found in this work is the wider range of effective temperatures covered by it, thus constituting a solid estimate of the spectral evolution of white dwarfs. While a detailed analysis of the implications for spectral evolution is outside the scope of this work, in what follows, we make a brief analysis of the most relevant points found: * At the hottest end, i.e., \(T_{\rm eff}\approx 35\,000-40\,000\) K, the lowest ratio of non-DAs was found. As previously reported, there is not a complete absence of non-DAs in this region, but an average \(\sim 5\%\) is indicative of the presence of the so called DB-gap (e.g. Bergeron et al., 2011; Koester and Kepler, 2015, and references therein). 
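In practice, Eqs. (2) and (3) amount to a weighted fraction per effective-temperature bin. The sketch below implements one straightforward reading of that bookkeeping, assuming per-object arrays of effective temperature, completeness weight \(w\) and probability \(P_{\rm DA}\) are already available; the 1 500 K bin width matches the one quoted above for our spectral distribution, and the input arrays here are random placeholders.

```python
import numpy as np

def spectral_fraction(teff, w, p_da, bins):
    """Weighted non-DA fraction per effective-temperature bin.
    Each object contributes w_i * P_i to its bin (Eq. 3), with P_i = P_DA,i for
    sources classified as DA (P_DA >= 0.5) and P_i = 1 - P_DA,i otherwise.
    Returns bin centres, f and sigma_f = sqrt(f (1 - f) / N_w)."""
    teff, w, p_da = map(np.asarray, (teff, w, p_da))
    is_da = p_da >= 0.5
    contrib = w * np.where(is_da, p_da, 1.0 - p_da)
    idx = np.digitize(teff, bins) - 1

    centres, frac, err = [], [], []
    for b in range(len(bins) - 1):
        sel = idx == b
        n_w = contrib[sel].sum()                  # weighted number of objects, N_w
        n_nonda = contrib[sel & ~is_da].sum()     # weighted non-DA contribution
        if n_w == 0.0:
            continue
        f = n_nonda / n_w
        centres.append(0.5 * (bins[b] + bins[b + 1]))
        frac.append(f)
        err.append(np.sqrt(f * (1.0 - f) / n_w))
    return np.array(centres), np.array(frac), np.array(err)

# Toy usage with random placeholder inputs (the real w and P_DA come from Eqs. (1)-(2)).
rng = np.random.default_rng(0)
teff = rng.uniform(5500.0, 40000.0, 5000)
w = rng.uniform(1.0, 3.0, 5000)
p_da = rng.uniform(0.0, 1.0, 5000)
bins = np.arange(5500.0, 40001.0, 1500.0)     # 1 500 K bins
centres, f, sigma_f = spectral_fraction(teff, w, p_da, bins)
print(np.round(f[:5], 2), np.round(sigma_f[:5], 3))
```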
* A statistical significant deficit of non-DAs was also found for effective temperatures in the range \(22\,000\,\,^{<}\,T_{\rm eff}\,\,^{<}\,25\,000\) K. It is also in perfect agreement with the ratios found by Ourique et al. (2018) and Lopez-Sanjuan et al. (2022). * For temperatures cooler than \(\sim 18\,000\) K, coinciding with the onset of convection mixing in DAs (e.g. Cunningham et al., 2020), a marked increase in the ratio of non-DAs was found, leading from \(\sim 10\%\) at \(18\,000\) K up to \(\sim 40\%\) at \(8\,000\) K and in agreement with most of the spectral distributions found in the literature. * Our spectral distribution presented a peak around \(T_{\rm eff}\approx 7\,000\) K with a ratio of non-DA objects of \(f\approx 0.41\). This maximum is found at lower temperatures than that of our previous work (Jimenez-Esteban et al., 2023) and others retrieved in literature such as Ourique et al. (2020), probably as a consequence of an unweighted distribution in these cases. However, the weighted distribution presented here is in agreement with the 40 pc spectroscopic complete sample analyzed in McCleery et al. (2020). * A statistically significant decrease in the ratio of non-DAs is found at the coolest bin, \(T_{\rm eff}\approx 6\,000\) K, dropping to \(f\approx 0.35\). This behaviour was also reported in our previous work (Jimenez-Esteban et al., 2023) and is also in agreement with Blouin et al. (2019) and McCleery et al. (2020), although no known physical mechanism can be associated to it (Blouin et al., 2019). ## 6 Conclusions Following the methodology presented in Jimenez-Esteban et al. (2023), we have expanded our white dwarf study sample up to 500 pc. A total of 65 310 white dwarfs have been classified as DAs and non-DAs based on their _Gaia_ spectra, with an accuracy of 94%. This has allowed us to construct a statistically significant and precise distribution of the DA versus non-DA ratio as a function of effective temperature. Nearly 34 000 white dwarfs have contributed to the final selected sample, making it the largest sample to date in terms of the number of objects and the range of effective temperatures analyzed, from 5 500 K to 40 000 K. The comparative analysis of our distribution with others found in the literature reveals statistically significant features such as: the deficit of DBs within the effective temperature range of approximately \(35\,000-40\,000\) K and between \(22\,000-25\,000\) K, along with a gradual rise starting from \(18\,000\) K up to around \(7\,000\) K, where the proportion of non-DA white dwarfs peaks at 41%, followed by a decline for lower temperatures. Finally, we can state that selection effects have been taken into account in the construction of the final sample. This fact, along with the high number of objects per interval in the sample, ensures that our spectral distribution can be considered a robust and precise element in the analysis of the spectral evolution of white dwarfs. ###### Acknowledgements. We acknowledge support from MINECO under the PID2020-117252GB-100 grant and by the AGAUR/Generalitat de Catalunya grant SGR-386/2021. PC. acknowledges financial support from the Government of Comunidad Autonoma de Madrid (Spain) via postdoctoral grant Atraci-Gion de Talento Investigador 2019-T/TIC-4760. R.M.O. is funded by NTA through grant PRE-OBESRAVATORIO. 
MC acknowledges grant RYC2021-032721-1, funded by MCIN/AEI/1.30398/501100011033 and by the European Union NostFacientific/FPRR.R acknowledges support from Grant RYC2021-030837-1 funded by MCIN/AEI/1.103039501100011033 and by "European Union NostFacientific/FPRR.F.J.E. acknowledges support from ESA through the Faculty of the European Space Astronomy Centre (ESAC) - Funding reference 4000139151/22/ES/CM. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This work has made use of the Python package _GaiaXPy_, developed and maintained by members of the _Gaia_ Data Processing and Analysis Consortium (DPAC) and in particular, Coordination Unit 5 (CUS), and the Data Processing Centre located at the Institute of Astronomy, Cambridge, UK (DPCI). This publication makes use of VOSA, developed under the Spanish Virtual Observatory ([https://svco.nta.itac-eses](https://svco.nta.itac-eses)) project funded by MCIN/AEI/1.103039/501100011033/ through grant PID2020-112949GB-100. We extensively made used of Topcat (Taylor, 2005). This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. We acknowledge use of the ADS bibliographic services.
2301.06814
Taking advantage of noise in quantum reservoir computing
The biggest challenge that quantum computing and quantum machine learning are currently facing is the presence of noise in quantum devices. As a result, big efforts have been put into correcting or mitigating the induced errors. But, can these two fields benefit from noise? Surprisingly, we demonstrate that under some circumstances, quantum noise can be used to improve the performance of quantum reservoir computing, a prominent and recent quantum machine learning algorithm. Our results show that the amplitude damping noise can be beneficial to machine learning, while the depolarizing and phase damping noises should be prioritized for correction. This critical result sheds new light into the physical mechanisms underlying quantum devices, providing solid practical prescriptions for a successful implementation of quantum information processing in nowadays hardware.
L. Domingo, G. Carlo, F. Borondo
2023-01-17T11:22:02Z
http://arxiv.org/abs/2301.06814v3
# Taking advantage of noise in quantum reservoir computing ###### Abstract The biggest challenge that quantum computing and quantum machine learning are currently facing is the presence of noise in quantum devices. As a result, big efforts have been put into correcting or mitigating the induced errors. But, can these two fields benefit from noise? Surprisingly, we demonstrate that under some circumstances, quantum noise can be used to improve the performance of quantum reservoir computing, a prominent and recent quantum machine learning algorithm. Our results show that certain noise types can be beneficial to machine learning, while others should be prioritized for correction. This critical result sheds new light into the physical mechanisms underlying quantum devices, providing solid practical prescriptions for a successful implementation of quantum information processing in nowadays hardware. ## I Introduction Machine learning (ML) is among the most disruptive technological developments of the early 21st century [1; 2]. However, despite existing ML solutions capable of coping with systems of moderate size, learning more complex patterns often requires the use of a large number of parameters and long training times; this fact conditions its success to having access to high performance computational resources. For this reason, a tremendous interest has recently arisen for a technological field with potential to dramatically improve many of these algorithms: quantum ML (QML). To unravel the full potential of QML algorithms, fault-tolerant computers with millions of qubits and low error-rates are needed. Although the actual realization of these devices is still decades ahead, the so-called noisy intermediate-scale quantum (NISQ) era has been reached. Thanks to NISQ, quantum (near) supremacy [3] has been achieved with the quantum computers available today. One of the biggest challenges of the current quantum devices is the presence of noise. They perform noisy quantum operations with limited coherence time, which affects the performance of quantum algorithms. To overcome this limitation, great effort has been devoted to designing error-correcting methods [4; 5], which correct the errors in the quantum hardware as the algorithm goes on, and also error-mitigation techniques [6; 7], which aim to reduce the noise of the outputs after the algorithm has been executed. Even though these methods can sometimes successfully reduce quantum noise, a fundamental question still remains open: Can the presence of noise in quantum devices be beneficial for quantum machine learning algorithms? The aim of this Letter is to address this issue in a highly relevant NISQ algorithm: quantum reservoir computing (QRC). This algorithm uses random quantum circuits, carefully chosen from a certain family, in order to extract relevant properties from the input data. The measurements of the quantum circuits are then fed to a ML model, which provides the final prediction. This simple learning structure makes of QRC a suitable QML algorithm for NISQ devices. The design of the QR has recently proven to be crucial to guarantee optimal performance in the ML task [8; 9]. However, these studies use _noiseless_ quantum simulations, which do not take into account the real limitations of current quantum hardware. Thus, whether real, noisy implementations of QRs provide advantage over classical ML methods is still an open question. In Ref. 
[10] the presence of noise has recently been used to improve the convergence of variational quantum algorithms. In this work, QRs are used to solve a quantum chemistry problem, which has become a common benchmark for QML [11; 9; 12]. Quantum chemistry is one of the areas where quantum computing has highest potential of outperforming traditional methods [2], since the complexity of the problem increases exponentially with the system's degrees of freedom. The exponential size of the Hilbert space allows to study high-dimensional systems with few computational resources, when compared to classical methods. Our results show that certain types of noise can actually provide a better performance for QRC than noiseless reservoirs. Our numerical experiments are further supported with a theoretical demonstration. Moreover, we provide a practical criterion to decide how to use quantum noise to improve the performance of the algorithm, and also what noise should be a priority to correct. ## II Results The QML task considered in this work consists on predicting the excited electronic energy \(E_{1}\) from the corresponding ground state \(\left|\psi_{0}\right\rangle_{R}\) with energy \(E_{0}\) for the LiH molecule, using noisy QRs. Three noise models are considered in this study: the _depolarizing channel_, the _amplitude damping channel_ and the _phase damping channel_. Full description of the QML task and noise models is provided in section **Methods** below. Figure 1 shows the mean squared error (MSE) in \(E_{1}\) predicted with our QRs as a function of the number of gates, for different values of the error probability \(p\) (colored curves) and noise models (panels), together with the results for the corresponding noiseless reservoir (in black). As expected, the general tendency of the MSEs is to grow with the noise characterized by \(p\). However, a careful comparison of the three plots in Fig. 1 surprisingly demonstrates that the amplitude damping noise renders results which are significantly different from those obtained in the other two cases. Indeed, if the number of gates and error probability are small enough, the QRs with amplitude damping noise provides better results than the noiseless QR. The same conclusion applies for the higher values of \(p\), although in those cases the threshold number of gates for better performance decreases. This is a very significant result, since it means that, contrary to the commonly accepted belief, the presence of noise is here _beneficial_ for the performance of the quantum algorithm, and, more importantly, it takes place within the limitations of the NISQ era. As an example, for \(p=0.0005\) (green curve) all noisy reservoirs render better performance than the noiseless counterpart when the number of gates is smaller than \(135\). A practical criterion to decide when noise can be used to improve the performance of QRC is provided in Table 1, which shows the averaged fidelity between the output noisy state \(\rho\) and the noiseless state \(\left|\psi\right\rangle\) for the circuits subjected to an amplitude damping noise with different values of the error probability. The number of gates has been chosen to be as large as possible provided that the noisy reservoirs outperform the noiseless ones. These results imply that when the fidelity is greater than \(0.96\), the noisy reservoirs outperform the noiseless ones at QML tasks, and accordingly the noise should _not_ be corrected. 
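The fidelity entering Table 1 (and Table 2 below) is simply the overlap between the noiseless pure output state of a circuit and its noisy output density matrix, averaged over the random reservoirs. A minimal sketch of the per-circuit quantity (the averaging over reservoirs is omitted, and the two-qubit state used here is only a toy check):

```python
import numpy as np

def fidelity(psi, rho):
    """Fidelity F = <psi| rho |psi> between the noiseless pure output state |psi>
    (statevector) and the noisy output density matrix rho of the same circuit."""
    return float(np.real(psi.conj() @ rho @ psi))

# Toy check: a 2-qubit state mixed with a small amount of white noise.
psi = np.full(4, 0.5, dtype=complex)                     # |++>
p = 0.05
rho = (1.0 - p) * np.outer(psi, psi.conj()) + p * np.eye(4) / 4.0
print(fidelity(psi, rho))                                # (1-p) + p/4 = 0.9625
```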
Finally, also notice that for \(p=0.0001\) the fidelity is always higher than \(0.96\), and thus the performance of the noisy QRs is always higher or equal than their noiseless counterparts. A second conclusion from the comparison among plots in Fig. 1 is that the behavior for depolarizing and the phase damping channels is significantly different than for the amplitude damping one. In the former cases, the performance of the noisy reservoirs is always worse than that of the noiseless one, even for small error probabilities. A third result that can be extracted from our calculations is that the tendency of the algorithm performance when the reservoirs have a large number of gates is the same for the three noise models considered (except for the smallest value of \(p=0.0001\)). While the performance of the noiseless reservoirs stabilizes to a constant value as the number of gates increases, the noisy reser \begin{table} \begin{tabular}{c c c} \hline \hline Error prob. & Optimal & Fidelity \\ \(p\) & \# of gates & (averaged) \\ \hline \(0.0001\) & \(215\) & \(0.990\) \\ \(0.0005\) & \(135\) & \(0.965\) \\ \(0.0010\) & \(105\) & \(0.956\) \\ \(0.0030\) & \(65\) & \(0.962\) \\ \hline \end{tabular} \end{table} Table 1: (Averaged) Fidelity between the noisy and noiseless final quantum states for the circuits with amplitude damping noise (see text for details). The number of quantum gates is chosen so that the performance of the noisy reservoirs outperforms that of the noiseless reservoirs. Figure 1: (Averaged) Mean squared error of the quantum reservoirs with amplitude damping noise (top), depolarizing noise (middle) and phase damping noise (bottom), as a function of the number of gates of the circuit. Averages are made over 100 simulations. voirs decrease their performance, seemingly going to the same growing behavior. This is due to the fact that the quantum channels are applied after each gate, and thus circuits with a large number of gates have larger noise rates, which highly decreases the fidelity of the output state. For this reason, even though increasing the number of gates has no effect in the noiseless simulations, it highly affects the performance of the noisy circuits, and thus the number of gates should be optimized in this case. Having analyzed the MSE results, we next provide a theoretical explanation for the different behavior of the three noisy reservoirs. In the first place, the depolarizing and phase damping channels give similar results, except that the performance of the former decreases faster than that for the latter. This effect can be explained with the aid of Table 2, where the averaged fidelity of each error model over the first 200 gates is given. As can be seen, the depolarizing channel decreases the fidelity of the output much faster than the phase damping, which explains the different tendency in the corresponding ML performances. On the other hand, the amplitude damping channel is the only one that can improve the performance of the noiseless reservoirs in the case of few gates and small error rates. The main difference between amplitude damping and the other channels is that the former is not unital, i.e. it does not preserve the identity operator. Let us consider now how this fact affects the distribution of noisy states in the Pauli space. 
For this purpose, let \(\rho^{\prime}\) be the \(n\)-qubit density matrix obtained after applying \(N-1\) noisy gates (with the noise described by the quantum channel \(\epsilon\)), and then apply the \(N\)-th noisy gate \(U\). The state becomes \(\epsilon(\rho)\), defined as: \[\epsilon(\rho)=\sum_{m}M_{m}\rho M_{m}^{\dagger},\quad\rho=U\,\rho^{\prime}\,U^{\dagger}, \tag{1}\] where \(\rho\) is the state after applying gate \(U\) _without_ noise. Now, both \(\rho\) and \(\epsilon(\rho)\) can be written as linear combinations of Pauli basis operators \(\{P_{i}\}_{i}\), where each one of them is a tensor product of the Pauli operators \(\{X,Y,Z,\mathbb{I}\}\): \[\rho=\sum_{i}a_{i}P_{i},\quad\text{with }a_{i}=\frac{1}{2^{n}}\,\text{tr}(P_{i}\rho), \tag{2}\] \[\epsilon(\rho)=\sum_{i}b_{i}P_{i},\quad\text{with }b_{i}=\frac{1}{2^{n}}\,\text{tr}[P_{i}\epsilon(\rho)]. \tag{3}\] Notice here that some of the coefficients \(b_{i}\) will be used to feed the ML model after applying all the gates of the circuit and to make the final predictions. Thus, expanding the final quantum states in this basis is a suitable way to understand the behavior of the QRC algorithm. Next, we study the relation between the coefficients \(\{a_{i}\}\) and \(\{b_{i}\}\). Since the operators \(P_{i}\) are tensor products of Pauli operators, it is sufficient to study how each of the noise models \(\epsilon\) maps the four Pauli operators. The results are shown in Table 3, where we see that \(\epsilon(P_{i})\) is always proportional to \(P_{i}\), except for \(\epsilon(\mathbb{I})\) with the amplitude damping channel. Indeed, it is for this reason that, with depolarizing or phase damping noise, the quantum channel only mitigates coefficients in the Pauli space. On the other hand, the amplitude damping channel can introduce additional non-zero terms in the Pauli decomposition. Also, this explains why, for low noise rates, the shapes of the MSE curves for depolarizing and phase damping are similar to that of the noiseless scenario, but not to that of the amplitude damping one. Table 3 also explains why the phase damping channel provides states with higher fidelity than the depolarizing channel: the phase damping channel leaves the \(Z\) operator invariant, and also produces weaker mitigation of the \(X\) and \(Y\) coefficients compared to the depolarizing channel. For this reason, even though both the depolarizing and phase damping channels are unital, the depolarizing channel decreases the ML performance faster, and its correction should be prioritized.

\begin{table} \begin{tabular}{c c c c} \hline \hline Error prob. & Amplitude & Depolarizing & Phase \\ \(p\) & damping & & damping \\ \hline 0.0001 & 0.995 & 0.994 & 0.998 \\ 0.0005 & 0.975 & 0.971 & 0.988 \\ 0.0010 & 0.951 & 0.944 & 0.976 \\ 0.0030 & 0.862 & 0.842 & 0.931 \\ \hline \end{tabular} \end{table} Table 2: (Averaged) Fidelity between the noisy and noiseless final quantum states for the circuits with the three noise models. Fidelity is averaged over all the quantum reservoirs with less than 200 gates, with the same noise model.

\begin{table} \begin{tabular}{c c c c} \hline \hline & Amplitude & Depolarizing & Phase \\ & damping & & damping \\ \hline \(\epsilon(X)\) & \(\sqrt{1-p}\;X\) & \((1-\frac{4}{3}p)X\) & \((1-p)\;X\) \\ \(\epsilon(Y)\) & \(\sqrt{1-p}\;Y\) & \((1-\frac{4}{3}p)Y\) & \((1-p)\;Y\) \\ \(\epsilon(Z)\) & \((1-p)\;Z\) & \((1-\frac{4}{3}p)Z\) & \(Z\) \\ \(\epsilon(\mathbb{I})\) & \(\mathbb{I}+pZ\) & \(\mathbb{I}\) & \(\mathbb{I}\) \\ \hline \end{tabular} \end{table} Table 3: Expressions for the error channel \(\epsilon\) when applied to the four basis Pauli operators.

Let us provide a mathematical demonstration of this fact. For any Pauli operator \(P_{i}\), the coefficient in the Pauli space with the depolarizing and phase damping channels is \[b_{i}=\frac{1}{2^{n}}\,\text{tr}[P_{i}\;\epsilon(\rho)]=\frac{1}{2^{n}}\alpha_{i}\,\text{tr}(P_{i}\rho)=\alpha_{i}\,a_{i},\quad 0\leq\alpha_{i}\leq 1, \tag{4}\] and therefore the noisy channel mitigates the coefficient \(a_{i}\). However, let us take a gate with amplitude damping noise. Suppose the channel \(\epsilon\) acts non-trivially on qubit \(j\), that is, the Kraus operators for \(\epsilon\) are of the form \(\tilde{M}_{m}=\mathbb{I}\otimes\cdots\otimes M_{m}\otimes\mathbb{I}\otimes\cdots\otimes\mathbb{I}\), with \(M_{m}\) in the \(j\)-th position. Suppose now that we measure \(P_{i}\) (the \(i\)-th operator in the Pauli basis, associated to coefficient \(a_{i}\)), where \(P_{i}\) acts as a \(Z\) operator on the \(j\)-th qubit (\(P_{i}=P^{0}\otimes\cdots\otimes P^{j-1}\otimes Z\otimes P^{j+1}\otimes\cdots\otimes P^{n}\)). Let us also take \(P_{k}=P^{0}\otimes\cdots\otimes P^{j-1}\otimes\mathbb{I}\otimes P^{j+1}\otimes\cdots\otimes P^{n}\), with \(a_{k}\) associated to \(P_{k}\). Then, the coefficient \(b_{i}\) is \[b_{i} =\frac{1}{2^{n}}\operatorname{tr}[P_{i}\epsilon(\rho)]=\frac{1}{2^{n}}\sum_{l}a_{l}\operatorname{tr}[P_{i}\epsilon(P_{l})]\] \[=\frac{1}{2^{n}}\Big(a_{i}\operatorname{tr}[P_{i}\epsilon(P_{i})]+a_{k}\operatorname{tr}[P_{i}\epsilon(P_{k})]\Big)\] \[=\frac{1}{2^{n}}\Big(a_{i}(1-p)\operatorname{tr}\bigl[P_{i}^{2}\bigr]+a_{k}\operatorname{tr}[P_{i}(P_{k}+pP_{i})]\Big)\] \[=(1-p)a_{i}+pa_{k}\] When \(a_{i}=0\) but \(a_{k}\neq 0\), the coefficient \(b_{i}\) is different from \(0\), and thus the amplitude damping noise introduces an extra coefficient in the Pauli space. Therefore, we can conclude that the amplitude damping channel can introduce additional non-zero coefficients in the Pauli space, instead of only mitigating them. For this reason, for \(p\) small enough, the amplitude damping channel can introduce new non-zero terms in the Pauli space without excessively mitigating the rest of them. This derivation can be further illustrated with a two-qubit toy model. We design a QR with the three different quantum noise models and calculate the distribution of the Pauli coefficients at the end of the circuit. Figure 2 shows the outcomes of the measurements for a random circuit with \(10\) gates and an error rate of \(p=0.2\). We see that all noise models mitigate the non-zero coefficients. However, the shaded area shows a region where the noiseless simulation (as well as the depolarizing and phase damping simulations) gives zero expectation values. More importantly, the amplitude damping circuit has non-zero expectation values for the same operators, which means that this quantum channel has introduced non-zero terms in the Pauli distribution. For small error rates, the noisy quantum reservoirs provide better performance, since having amplitude damping noise produces a similar effect as having more quantum gates in the circuit.
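A minimal numerical check of this mechanism is sketched below, using the conventions of Eqs. (2)-(3) and a single-qubit amplitude damping channel acting on one qubit of a two-qubit state. The product state \(|+\rangle\otimes|0\rangle\) and \(p=0.2\) are arbitrary illustrative choices (not the random circuits behind Fig. 2); the script lists the Pauli coefficients that the channel turns from zero into non-zero, in agreement with \(b_{i}=(1-p)a_{i}+pa_{k}\).

```python
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_coefficients(rho, n):
    # a_i = tr(P_i rho) / 2^n for every n-qubit Pauli string P_i (Eq. (2))
    coeffs = {}
    for labels in product("IXYZ", repeat=n):
        p_op = reduce(np.kron, [PAULIS[l] for l in labels])
        coeffs["".join(labels)] = float(np.real(np.trace(p_op @ rho))) / 2**n
    return coeffs

def amplitude_damping_on_qubit(rho, p, qubit, n):
    # Apply single-qubit amplitude damping to one qubit of an n-qubit state
    m0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    m1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    out = np.zeros_like(rho)
    for m in (m0, m1):
        kraus = reduce(np.kron, [m if q == qubit else I2 for q in range(n)])
        out += kraus @ rho @ kraus.conj().T
    return out

# Two-qubit toy state |+> (x) |0>: a_{ZI} = 0 while a_{II} = 1/4 != 0
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
psi = np.kron(plus, zero)
rho = np.outer(psi, psi.conj())

rho_noisy = amplitude_damping_on_qubit(rho, p=0.2, qubit=0, n=2)
a = pauli_coefficients(rho, 2)
b = pauli_coefficients(rho_noisy, 2)
for label in a:
    if abs(a[label]) < 1e-12 and abs(b[label]) > 1e-12:
        print(f"{label}: a = 0 -> b = {b[label]:+.4f}")   # ZI and ZZ gain p/4 = 0.05
```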
To better visualize this effect, we design \(4000\) random circuits and see how the final state \(\rho\) fills the Pauli space. Since the Pauli space in the \(2\)-qubit system is a \(16\)-dimensional space, we use a dimensionality reduction technique called UMAP [13] to visualize the distribution in \(2\)D. The results are shown in Fig. 3. We see that the amplitude damping channel fills the Pauli space faster than the other circuits, including the noiseless QR, thus confirming the hypothesis that the amplitude damping channel acts equivalently to having more quantum gates.

Figure 2: Coefficients in the Pauli space of a \(2\)-qubit toy model (see text for motivation) consisting of a random quantum circuit with \(10\) gates from the G\(3\) family and error probability \(p=0.2\), for the three noise models studied in this work together with the noiseless coefficients in black.

Figure 3: Reduced \(2\)D (from \(16\)D) representation of the distribution in the Pauli space of \(400\) simulations of the toy model of Fig. 2. Variables \(x_{1}\) and \(x_{2}\) are selected using the UMAP algorithm of Ref. [13].

## III Conclusions

In this Letter, the effect on the QRC performance of three different paradigmatic noise models, effectively covering all current possibilities affecting quantum devices, is studied. Contrary to common belief, we demonstrate that, under certain circumstances, noise, which constitutes the biggest challenge for quantum computing and QML, is beneficial to the quantum algorithms. Remarkably, we show that for error rates \(p\lesssim 0.0005\) or state fidelities of at least \(0.96\), the presence of an amplitude damping channel renders better performance than noiseless QRs for ML tasks. This phenomenon is explained by analyzing the distribution in the Pauli space of the resulting density matrices after suffering the amplitude damping noise. This channel introduces additional non-zero coefficients in the Pauli space, which produces a similar effect as having more quantum gates in the original circuits. On the other hand, the depolarizing and phase damping channels only reduce the amplitude of the coefficients in the Pauli space, thus producing poorer results. The depolarizing channel is the one that mitigates these values the fastest, so our prescription is that its correction should be a priority.

## IV Methods

In this work, QRs are used to predict the first excited electronic energy \(E_{1}\) using only the associated ground state \(\left|\psi_{0}\right\rangle_{R}\) with energy \(E_{0}\) for the LiH molecule, as described in [9]. The ground state \(\left|\psi_{0}\right\rangle_{R}\) for the LiH Hamiltonian is calculated by exact diagonalization for different values of the internuclear distance \(R\in[0.5,3.5]\) a.u. For this case, \(n=8\) qubits are needed to describe the ground state, and QRs are used to predict the relative excited energy \(\Delta E(R)\). The dataset \(\{\left|\psi_{0}\right\rangle_{R},\Delta E(R)\}_{R}\) is split into training and test sets, where the test set contains 30% of the data, \(R\in[1.1,2.0]\) a.u., and it is designed so that the QML algorithm has to extrapolate to _new_ data samples. The QRs used to predict the excited energies are random quantum circuits whose gates are chosen from a finite set. It was proven that the G3={CNOT,H,T} family provides an optimal design for QRs [9], where CNOT is the controlled-NOT gate, H stands for Hadamard, and T is the \(\pi/8\) phase gate. Thus, the G3 family is used to generate QRs with a fixed number of gates.
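For illustration, a random G3 circuit can be generated and simulated as in the sketch below. This uses full \(2^{n}\times 2^{n}\) NumPy matrices, so it is only practical for small \(n\); the 4 qubits, the seed and the 25 gates are placeholders rather than the settings used for the reported results (which use \(n=8\) and the gate counts quoted above), and a noise channel would be applied after each gate in the noisy case.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)     # |0><0|
P1 = np.diag([0, 1]).astype(complex)     # |1><1|

def embed(ops_by_qubit, n):
    # Kron together per-qubit operators, identity on unspecified qubits
    return reduce(np.kron, [ops_by_qubit.get(q, I2) for q in range(n)])

def random_g3_circuit(n_qubits, n_gates, rng):
    # Random circuit drawn from the G3 = {CNOT, H, T} gate family
    unitaries = []
    for _ in range(n_gates):
        kind = rng.choice(["CNOT", "H", "T"])
        if kind == "CNOT":
            c, t = (int(q) for q in rng.choice(n_qubits, size=2, replace=False))
            u = embed({c: P0}, n_qubits) + embed({c: P1, t: X}, n_qubits)
        else:
            q = int(rng.integers(n_qubits))
            u = embed({q: H if kind == "H" else T}, n_qubits)
        unitaries.append(u)
    return unitaries

rng = np.random.default_rng(42)
n = 4                                    # small toy size; the LiH task needs n = 8
circuit = random_g3_circuit(n, n_gates=25, rng=rng)

rho = np.zeros((2**n, 2**n), dtype=complex)
rho[0, 0] = 1.0                          # |0...0><0...0| as a stand-in input state
for u in circuit:
    rho = u @ rho @ u.conj().T           # a noise channel would be applied here
print("trace after the circuit:", round(float(np.real(np.trace(rho))), 6))
```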
The goal of this work is to study the effect of three noise models on the performance of the ML task, for different error probabilities and number of quantum gates. It is important to note that these models embody the overwhelming majority of noise types to which modern hardware is subjected to, this pointing out to the generality of our conclusions. The first noise model that we consider is the _amplitude damping channel_, which reproduces the effect of energy dissipation, that is, the loss of energy of a quantum state to its environment. It provides a model of the decay of an excited two-level atom due to the spontaneous emission of a photon with probability \(p\). The Kraus operators of this channel are given by \[M_{0}=\begin{pmatrix}1&0\\ 0&\sqrt{1-p}\end{pmatrix},\quad M_{1}=\begin{pmatrix}0&\sqrt{p}\\ 0&0\end{pmatrix}. \tag{5}\] The operator \(M_{1}\) transforms \(\left|1\right\rangle\) to \(\left|0\right\rangle\), which corresponds to the process of losing energy to the environment. The operator \(M_{0}\) leaves \(\left|0\right\rangle\) unchanged, but reduces the amplitude of \(\left|1\right\rangle\). The quantum channel is thus \[\epsilon(\rho)=M_{0}\,\rho\,M_{0}^{\dagger}+M_{1}\,\rho\,M_{1}^{\dagger}= \begin{pmatrix}\rho_{00}+p\,\,\rho_{11}&\sqrt{1-p}\,\,\rho_{01}\\ \sqrt{1-p}\,\,\rho_{10}&(1-p)\,\,\rho_{11}\end{pmatrix}. \tag{6}\] The second noise model is described by the _phase damping channel_, which models the loss of quantum information without loss of energy. The Kraus operators for the process are \[M_{0}=\sqrt{1-p}\,\,\,\mathbb{I},\quad M_{1}=\begin{pmatrix}\sqrt{p}&0\\ 0&0\end{pmatrix},\quad M_{2}=\begin{pmatrix}0&0\\ 0&\sqrt{p}\end{pmatrix}, \tag{7}\] and the quantum channel is then \[\epsilon(\rho) =M_{0}\,\rho\,M_{0}^{\dagger}+M_{1}\,\rho\,M_{1}^{\dagger}+M_{2} \,\rho\,M_{2}^{\dagger}\] \[=\left(1-\frac{p}{2}\right)\,\rho+\frac{p}{2}\,\,Z\,\rho\,Z. \tag{8}\] An alternative interpretation of the phase damping channel is that the state \(\rho\) is left intact with probability \(1-p/2\), and a \(Z\) operator is applied with probability \(p/2\). The last noise model is described by the _depolarizing channel_. In this case, a Pauli error \(X\), \(Y\) or \(Z\) occurs with the same probability \(p\). The Kraus operators are \[M_{0}=\sqrt{1-p}\,\mathbb{I},\,M_{1}=\sqrt{\frac{p}{3}}X,\,\,M_{2}=\sqrt{ \frac{p}{3}}Y,M_{3}=\sqrt{\frac{p}{3}}Z, \tag{9}\] and the quantum channel is \[\epsilon(\rho)=\left(1-p\right)\rho+\frac{p}{3}\left(X\rho X+Y\rho Y+Z\rho Z \right)=\left(1-p\right)\rho+\frac{p}{2}\,\,\mathbb{I}. \tag{10}\] The depolarizing channel transforms the state \(\rho\) into the maximally mixed state with probability \(p\). Notice that the amplitude damping channel is the only one which is not _unital_, since it does not map the identity operator into itself. In general terms, it belongs to the kind of volume contracting environments in phase space with many generalizations that include the quantization of classical friction. The training steps of the algorithm are the following. First, the quantum circuit is initialized with the molecular ground state \(\left|\psi_{0}\right\rangle_{R}\) for a certain configuration \(R\). Next, a noisy quantum circuit with fixed number of gates is applied to \(\left|\psi_{0}\right\rangle_{R}\). 
Then, we measure the local Pauli operators \(\{X_{0},Z_{0},\cdots,X_{n},Z_{n}\}\), where \(X_{i},Z_{i}\) are the Pauli operators \(X,Z\) applied to the \(i\)-th qubit, thus obtaining the vector \[X(R)=\left(\left\langle X_{0}\right\rangle,\left\langle Z_{0}\right\rangle, \cdots,\left\langle X_{n}\right\rangle,\left\langle Z_{n}\right\rangle\right)^ {T} \tag{11}\] which provides the extracted information from the ground state. Recall that for a noisy state \(\rho\), the expectation value of an operator \(P\) is given by \(\left\langle P\right\rangle=\mathrm{tr}(P\rho)\). The vector \(X(R)\) is fed to a classical machine learning algorithm, in this case a ridge regression, which is a linear model with \(L^{2}\) regularization. The optimal regularization parameter was \(\alpha=10^{-9}\), which reduces overfitting while maintaining optimal prediction capacity [9]. The effect of the different noise channels in the algorithm performance is studied by varying the error probability \(p\). We perform 100 simulations for probabilities \(p=0.0001,0.0005,0.001,0.003\) for each quantum channel, and compare the performance of the model with the noiseless simulation (\(p=0\)). We also study how the number of quantum gates affects the performance of the reservoirs. We design circuits varying the number of gates from 25 to 215 in intervals of 10 gates. Also, we study the performance for large number of quantum gates, using 300, 500, 700 and 900 of them. ## V Data availability The datasets generated and analysed during the current study are available in the GitHub repository, [https://github.com/laiadc/Optimal_QRC_noise](https://github.com/laiadc/Optimal_QRC_noise). ## VI Code availability The underlying code for this study is available in the Github repository and can be accessed via this link [https://github.com/laiadc/Optimal_QRC_noise](https://github.com/laiadc/Optimal_QRC_noise). ## VII Competing interests The authors declare no competing financial or non-financial interests. ## VIII Author contributions All authors developed the idea and the theory. LD performed the calculations and analyzed the data. All authors contributed to the discussions and interpretations of the results and wrote the manuscript. ## IX Acknowledgments The project that gave rise to these results received the support of a fellowship from "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/DR20/11790028. This work has also been partially supported by the Spanish Ministry of Science, Innovation and Universities, Gobierno de Espana, under Contracts No. PGC2018-093854-BI00, ICMAT Severo Ochoa CEX2019-000904-S. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript.
2306.03713
Ultra-miniature dual-wavelength spatial frequency domain imaging for micro-endoscopy
There is a need for a cost-effective, quantitative imaging tool that can be deployed endoscopically to better detect early stage gastrointestinal cancers. Spatial frequency domain imaging (SFDI) is a low-cost imaging technique that produces near-real time, quantitative maps of absorption and reduced scattering coefficients, but most implementations are bulky and suitable only for use outside the body. We present an ultra-miniature SFDI system comprised of an optical fiber array (diameter $0.125$ mm) and a micro camera ($1\times1$ mm package) displacing conventionally bulky components, in particular the projector. The prototype has outer diameter $3$ mm, but the individual components dimensions could permit future packaging to $<1.5$ mm diameter. We develop a phase-tracking algorithm to rapidly extract images with fringe projections at $3$ equispaced phase shifts in order to perform SFDI demodulation. To validate performance, we first demonstrate comparable recovery of quantitative optical properties between our ultra-miniature system and a conventional bench-top SFDI system with agreement of $15$\% and $6$\% for absorption and reduced scattering respectively. Next, we demonstrate imaging of absorption and reduced scattering of tissue-mimicking phantoms providing enhanced contrast between simulated tissue types (healthy and tumour), done simultaneously at wavelengths of $515$ nm and $660$ nm. This device shows promise as a cost-effective, quantitative imaging tool to detect variations in optical absorption and scattering as indicators of cancer.
Jane Crowley, George S. D. Gordon
2023-06-06T14:22:01Z
http://arxiv.org/abs/2306.03713v2
# Ultra-miniature dual-wavelength spatial frequency domain imaging for micro-endoscopy ###### Abstract There is a need for a cost-effective, quantitative imaging tool that can be deployed endoscopically to better detect early stage gastrointestinal cancers. Spatial frequency domain imaging (SFDI) is a low-cost imaging technique that produces near-real time, quantitative maps of absorption and reduced scattering coefficients, but most implementations are bulky and suitable only for use outside the body. We present an ultra-miniature SFDI system comprised of an optical fiber array (diameter \(0.125\) mm) and a micro camera (\(1\times 1\) mm package) displacing conventionally bulky components, in particular the projector. The prototype has outer diameter \(3\) mm, but the individual components dimensions could permit future packaging to \(<1.5\) mm diameter. We develop a phase-tracking algorithm to rapidly extract images with fringe projections at \(3\) equispaced phase shifts in order to perform SFDI demodulation. To validate performance, we first demonstrate comparable recovery of quantitative optical properties between our ultra-miniature system and a conventional bench-top SFDI system with agreement of \(15\)% and \(6\)% for absorption and reduced scattering respectively. Next, we demonstrate imaging of absorption and reduced scattering of tissue-mimicking phantoms providing enhanced contrast between simulated tissue types (healthy and tumour), done simultaneously at wavelengths of \(515\) nm and \(660\) nm. This device shows promise as a cost-effective, quantitative imaging tool to detect variations in optical absorption and scattering as indicators of cancer. spatial frequency domain imaging, miniaturisation, optical properties, optical fibers a \(2\)D illumination pattern of known spatial frequency to be generated and projected onto a sample of interest, with the result captured on a standard CMOS camera. Demodulation is then performed to obtain the high and low frequency modulation amplitudes by capturing three separate patterns equally shifted in phase (\(I_{1},I_{2}\),and \(I_{3}\)) using the equations: \[I_{AC}(x_{i})=\frac{\sqrt{2}}{3}[(I_{1}(x_{i})-I_{2}(x_{i}))^{2}+(I_{2}(x_{i})- I_{3}(x_{i}))^{2}+(I_{3}(x_{i})-I_{1}(x_{i}))^{2}]^{1/2} \tag{1}\] \[I_{DC}(x_{i})=\frac{1}{3}[I_{1}(x_{i})+I_{2}(x_{i})+I_{3}(x_{i})] \tag{2}\] This is repeated on a reference material of known optical properties (and hence known diffuse reflectance values) such that the modulation transfer function of the imaging system can be corrected for, and diffuse reflectance values obtained. Diffuse reflectance values are then used to estimate absorption and reduced scattering coefficients using a look-up table generated from either the Diffusion Approximation or Monte Carlo simulation as solutions to the radiative transfer equation. Obtaining the optical properties at more than one wavelength allows for the extraction of additional tissue information, such as chromophore concentration via the Beer-Lambert law. This addition of endogenous contrast information is an aid in diagnosing tissue types, and has been used during breast reconstructive surgery for oxygenation imaging [6]. Performing SFDI at more than one wavelength simultaneously is advantageous for several reasons, Firstly, it has the capability to reduce speckle noise by averaging it out over several wavelengths. Secondly, it gives the opportunity to penetrate to different depths in the sample of interest with different wavelengths [7]. 
Third, it introduces the capability to obtain chromophore information, such as oxyhaemoglobin and deoxyhaemoglobin concentration, by measuring the variation in absorption coefficient at more than one wavelength [8, 9]. Blood oxygenation SFDI systems often operate in the red/IR, e.g. [10], but most fiber bundle systems operate in the green to avoid too much cross-coupling between fibers [11]. Also, using just one individual wavelength (e.g. \(515\) nm), one can obtain different structural tissue information. To improve the speed of SFDI systems towards real-time operation, a single phase image can be used instead of three: a technique termed single snapshot of optical properties (SSOP) [12]. SSOP uses a Fourier demodulation method to perform spatial frequency demodulation, which typically results in poorer image quality, although emerging convolutional neural network techniques can improve resolution [13, 14]. SSOP has been shown to successfully quantify bowel ischaemia [15]. SFDI has been shown to provide contrast between healthy and malignant resected oesophageal and colon tissue [16, 17]. _Sweer et al._ imaged resected oesophageal tissue from eight patients undergoing oesophagectomy. By comparing regions imaged with a commercially available SFDI system from _Modulim_ [18] with results from histological analysis of tissue, it was determined that healthy oesophageal tissue has a reduced scattering coefficient higher than the reduced scattering coefficient of both invasive squamous cell carcinoma and Barrett's oesophagus with mild chronic inflammation. The absorption coefficient of healthy oesophageal tissue is lower than that of invasive squamous cell carcinoma and analogous to that of Barrett's oesophagus with mild chronic inflammation. _Nandy et al._ found that healthy colon tissue has a higher reduced scattering coefficient than malignant colon tissue and a lower absorption coefficient. SFDI is an attractive choice for an imaging modality because it does not require high-powered lasers, sensitive detectors (mobile phone cameras are sufficient) or complex optical components. Devices are therefore relatively low-cost to manufacture and operate, and they can be miniaturised easily. As a result, a number of SFDI systems exist, such as large commercial systems [18], portable handheld systems [19], handheld \(3\)D printed systems [20] and compact multispectral imaging systems [21]. However, in most existing systems the projector element remains costly and difficult to miniaturise, being typically comprised of either a digital micromirror device (DMD) projector [19] or a motorized grating [20, 21]. There have been a number of approaches to miniaturise SFDI projectors to make them suitable for endoscopic deployment. Fixed gratings have been used to achieve SSOP via rigid endoscopes [22]. While SSOP is advantageous as it reduces acquisition times, it poses several disadvantages, such as reduced image quality due to the filtering of a single image. The previously developed probe is rigid in nature and not suitable for imaging in the gastrointestinal tract. Fixed gratings have also been used for optical sectioning via flexible imaging fiber bundles [23]. The use of a micro camera is advantageous over imaging through a fiber bundle, as an imaging fiber bundle is highly sensitive to vibrations, cross-coupling and fiber movements, making the reconstruction of images challenging.
Phase-shifted illumination has been demonstrated via an imaging fiber bundle [24], but the use of DMDs is relatively high cost, and commercial fiber-bundle projection only supports high-fidelity fringes at green wavelengths due to increased cross-coupling between cores at red wavelengths [11]. Ultra-thin fiber arrays have been used to create fringe patterns interferometrically for profilometry but not, to our knowledge, for SFDI [25]. None of these existing systems are suitable for routine endoscopic deployment in the gastrointestinal tract because they either use DMD-based projectors, which are costly and cannot be sufficiently miniaturised; use fiber bundles, which produce low-quality fringe patterns at a limited set of wavelengths and only record low resolution images; or use rigid endoscopes, which are not flexible enough. We have therefore developed an ultra-miniature SFDI system, with an outer diameter of \(3\) mm, that uses a fiber array to interferometrically produce fringe patterns at green (\(515\) nm) and red (\(660\) nm) wavelengths and records images at \(320\times 320\) pixel resolution using a micro camera. The prototype packaging is sufficiently small that it is compatible with the instrument channel of a standard colonoscope. This makes the device comparable to the thinnest previous SFDI system designs that used fiber bundles to achieve a total diameter of \(2.7\) mm [23]. We first compare optical property measurements in our ultra-miniature system to those of a conventional bench top system and find agreement between absorption and reduced scattering coefficients of \(15\)% and \(6\)% respectively. We show the potential to operate the system at more than one wavelength simultaneously, enabling rapid tissue property measurements. This device therefore shows potential to be deployed endoscopically for _in-vivo_ gastrointestinal imaging to detect optical properties as potential indicators of cancer.

## 2 Methods

### Component design and selection

The primary components needed for an SFDI system are a source of pattern projection and an image detector to capture the projected patterns on a sample of interest. We chose to use an optical fiber array as the source of projection patterns and a micro camera as the detector. To create an ultra-miniature fringe projector without using DMD elements, we designed a customised two-dimensional pitch-reducing optical fiber array (PROFA(tm), _Chiral Photonics_, NJ) to create fringes interferometrically, shown in Fig. 1. The fiber array was designed to produce interference patterns within a widely used spatial frequency range (\(0.1-0.3\) mm\({}^{-1}\) [26, 27]) at an initial test working distance of \(50\) mm when two adjacent channels are illuminated by the same laser source. To compute the required fiber spacings, we used a double slit equation: \[m\lambda=d\sin\theta \tag{3}\] where \(m\) is the number of the interference line spacings from the central point, \(\lambda\) is the wavelength Figure 1: Proposed ultra-miniature SFDI system (a) Schematic of fiber array in ultra-miniature SFDI system showing dual wavelength illumination simultaneously. Light passes from the two lasers into the fiber array via a selection of 7 single-mode fiber input ports. At the tip of the fused taper, the fibers are spaced in a hexagonal array, providing three possible spacings. Crossed polarisers are placed in front of the fiber tip and the micro camera to reduce specular reflections from the imaging sample. (b) Photograph of experimental set up.
(c) Prototype device package of \(3\) mm diameter with inset showing zoomed in view of fiber tip and camera. of light, \(d\) is the distance between slits and \(\theta\) is the angle of projection. The desired wavelength was chosen to be \(660\) nm. The distance from slit to projection pattern, i.e. the working distance, was chosen initially to be \(50\) mm, which is the maximum working distance of the camera. Using Eqn 3, we can therefore determine the spacing \(d\) required to produce our spatial frequencies of interest. The fabricated fiber array has spacings of \(5\), \(8.66\) and \(10\)\(\mu\)m, which will produce spatial frequencies of \(0.15\), \(0.25\), \(0.3\) mm\({}^{-1}\) at 660nm, as shown in Fig. 2 a. We can then determine the spatial frequency projection at varying fiber to sample working distances, shown in Fig 2 (b). Typical endoscope working distances are \(20-30\) mm,[28] which is achievable using the 5\(\mu\)m spacing option of our array with \(0.3\) mm\({}^{-1}\) spatial frequency, though in future designs a 2.5\(\mu m\) spacing could enable even shorter working distances. The 7 fiber channels are spaced at the tip as shown in Fig 1. The light sources used are a \(5\) mW \(660\) nm laser diode (LPS-660-FC, _Thorlabs_) and a \(3\) mW \(515\) nm laser (LP515-SF3, _Thorlabs_). The camera chosen is a \(1\times 1\) mm micro camera module (Osiris M module, _OptaSensor_, Germany). The camera has a resolution of \(320\times 320\) pixels, with an individual pixel size of \(2.4\)\(\mu\)m. An in-built lens placed in front of the sensor provides horizontal and diagonal field of views of \(68^{\circ}\) and \(90^{\circ}\) respectively, accompanied by a depth of focus of \(5-50\) mm. The camera module produces a \(12\) bit RGB raw image output. The camera is accompanied by software to control camera parameters such as exposure, gamma correction and black level correction. The automatic exposure correction was disabled so that all image frames contain the same optical power ranges. The micro camera has a frame rate of \(10\) fps, which is the minimum rate required for proper endoscopic visualisation.[29] To minimise specular reflections present on the imaging sample, adhesive-backed polymer polariser sheets are cross-polarised and placed in front of the camera and fiber tip. The camera is also placed at a small angle of \(4^{\circ}\) to the fiber to further limit specular reflections on the imaging sample. This angle is smaller than conventional SFDI systems [6], but is more amenable to miniaturisation. Previous work has shown that this angle can still produce high quality optical property maps [30] ### Phase-tracking algorithm An inherent property of an interferometer such as our fiber array is that the sinusoidal pattern produced will shift over time due to mechanical drifts, vibrations, temperature and intensity variations [31]. Conventional wisdom may suggest using a complex set up consisting of a phase-shifting control system and a piezoelectric transducer driver to stabilise and control this phase shift [25]. However, we exploit the natural phase drift to our advantage via a phase-tracking algorithm. A video, typically \(10-20\) s, is first recorded of the shifting sinusoidal pattern on a sample of interest. 
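As a quick numerical check of the spacing-to-frequency relation above, Eqn 3 in its small-angle form gives a fringe period of approximately \(\lambda\cdot\mathrm{WD}/d\) (the small-angle step is our simplification); the values below reproduce the design figures quoted for the three fiber-tip spacings at \(660\) nm and a \(50\) mm working distance.

```python
def spatial_frequency_mm(spacing_um, wavelength_nm, working_distance_mm):
    # Small-angle double-slit fringe frequency: period ~ lambda * WD / d
    period_mm = (wavelength_nm * 1e-6) * working_distance_mm / (spacing_um * 1e-3)
    return 1.0 / period_mm               # fringes per mm

for d_um in (5.0, 8.66, 10.0):           # available fiber-tip spacings
    fx = spatial_frequency_mm(d_um, wavelength_nm=660, working_distance_mm=50)
    print(f"d = {d_um:5.2f} um -> fx = {fx:.2f} mm^-1")
# -> about 0.15, 0.26 and 0.30 mm^-1; the 5 um spacing gives ~0.3 mm^-1 at a 25 mm
#    working distance, consistent with the endoscope working distances above.
```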
To determine which frames to use for demodulation, we first take an average of all frames within the video and subtract this from each individual frame. This allows us to visualise the spatial frequency pattern with reduced noise (see Fig 3 a). We then take an average across several rows within the frame, applying a smoothing filter, and plot the sinusoidal pattern. We select a zeroth frame and designate the phase of the extracted sinusoid to be \(0^{\circ}\). Next, we calculate the average distance between adjacent maxima of this sinusoid in pixels. This value gives the period of the pattern, in pixel units. Custom _Python_ code then cycles through all frames in the captured video and selects frames of equal intensity variation whose sinusoidal projections have relative phases of \((120\pm 10)^{\circ}\) and \((240\pm 10)^{\circ}\) from the selected zeroth frame (see Fig 3 b). Frames where the sine wave is non-discernible or the intensity variation between peak and trough is low relative to the zeroth frame are disregarded. This eliminates frames where coherence is temporarily disturbed while perturbations are still in progress. We then select these frame numbers from the initial video and demodulate the images using Eqns 1 and 2.

Figure 2: Determining desired fiber spacing to produce spatial frequencies within our range of interest \(0.1-0.3\) mm\({}^{-1}\) (a) Addressable spatial frequency projection at working distances (WD) of \(50\) mm (solid lines) and \(30\) mm (dashed lines). The dotted lines represent the three possible fiber tip spacings of \(5\), \(8.66\) and \(10\)\(\mu\)m (b) proposed design of spatial frequency projection at various working distances for fiber tip spacing (d) of \(5\)\(\mu\)m (solid lines) and \(2.5\)\(\mu\)m (dashed lines), useful for smaller working distances.

Figure 3: Characterisation of fringes and phase tracking (a) image of selected zeroth frame and average of all frames, where \(N\) is the total number of all frames within video capture, and corresponding cross sections (b) image of zeroth frame, \(120^{\circ}\) shifted frame and \(240^{\circ}\) shifted frame and corresponding cross sections.

### Imaging homogeneous tissue-mimicking phantoms

In order to perform initial validation of the system, we fabricated tissue mimicking co-polymer in oil phantoms with tunable optical properties by controlling concentrations of TiO\({}_{2}\) and Nigrosin dye [32]. The fabricated phantoms had a thickness of \(30\) mm and were ensured to be non-transparent so as to meet the semi-infinite thickness requirement of SFDI [33]. We fabricated two phantom batches: one with increasing amounts of dye stock solution from \(0.5-1\) g, corresponding to an absorption coefficient range of \(0.006-0.017~{}mm^{-1}\) at \(660\) nm, and the second with increasing amounts of TiO\({}_{2}\) from \(0.07-0.13\) g, corresponding to a reduced scattering coefficient range of \(0.52-0.99~{}mm^{-1}\) at \(660\) nm. The batch with increasing dye stock solution each had \(0.1\) g of TiO\({}_{2}\) and the batch with increasing TiO\({}_{2}\) each had \(0.5\) g of dye stock solution to ensure the semi-infinite material requirement was met. We chose these optical property ranges as they lay within the optical properties of interest of typical gastrointestinal tissue samples [16] and they had previously been calibrated in the literature using a double integrating sphere [32]. We imaged the phantoms in our bench top SFDI system and our miniature SFDI system.
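The frame-selection and demodulation steps of the phase-tracking procedure described above can be sketched as follows. This is a simplified re-implementation rather than the custom code used for the results: the relative phase is read from the dominant FFT bin of a row-averaged profile instead of from peak spacing, the row band and tolerance are assumptions, and the intensity-consistency check is omitted.

```python
import numpy as np

def demodulate(i1, i2, i3):
    # AC/DC demodulation of three phase-shifted fringe images (Eqns 1 and 2)
    i_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    i_dc = (i1 + i2 + i3) / 3.0
    return i_ac, i_dc

def relative_phases(frames, rows=slice(140, 180)):
    # Relative fringe phase (degrees) of every frame w.r.t. frame 0, estimated
    # from the dominant spatial-frequency bin of a row-averaged profile.
    profiles = frames[:, rows, :].mean(axis=1)
    profiles = profiles - profiles.mean(axis=1, keepdims=True)
    spectra = np.fft.rfft(profiles, axis=1)
    k = int(np.argmax(np.abs(spectra[0, 1:]))) + 1     # fringe bin of frame 0
    phases = np.angle(spectra[:, k])
    return np.rad2deg((phases - phases[0]) % (2.0 * np.pi))

def select_three_phases(frames, tol_deg=10.0):
    # Pick the zeroth frame plus the frames closest to 120 and 240 degrees
    rel = relative_phases(frames)
    picks = [0]
    for target in (120.0, 240.0):
        err = np.abs((rel - target + 180.0) % 360.0 - 180.0)
        idx = int(np.argmin(err))
        if err[idx] > tol_deg:
            raise RuntimeError(f"no frame within {tol_deg} deg of {target} deg")
        picks.append(idx)
    return picks

# Usage, given a (T, 320, 320) float array `video` loaded from the captured clip:
# i0, i120, i240 = (video[i] for i in select_three_phases(video))
# i_ac, i_dc = demodulate(i0, i120, i240)
```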
The bench top system consists of a Raspberry Pi camera, a commercial projector (_LG Minibeam_ _PH150g HD ready mini projector_), crossed polarisers in front of the camera and projector, and a \(635\) nm filter placed in front of the camera to ensure only red light was captured. For these particular phantoms, the measured difference via double integrating sphere between absorption and reduced scattering coefficients from \(635\) nm to \(660\) nm is \(14\)% and \(3\)% respectively[32]. Therefore, we adjusted reference optical properties for optical property calculation depending whether we were using the bench top or miniature system. When imaging phantoms at \(515\) nm in the miniature system, the reference optical properties were also adjusted for accordingly. We placed the phantoms such that the top of the phantom was \(50\) mm from the distal end of the imaging probe and the projection pattern was in the center of the sample. We took videos of the shifting projection pattern on the phantom for \(10-20\) s. The video was then input to _Python_ phase-tracking code for processing described in Sect 2.2 to find the exact frames needed to calculate the optical properties. We imaged each phantom at three different spatial frequencies by illuminating three different fiber channel combinations. We calculated the optical properties using a look-up table generated from the Diffusion Approximation. For each phantom, we calculated the optical property maps a total of \(18\) times, using every other phantom as a reference in turn for each spatial frequency. This approach helps to average out errors arising from mismatches in expected optical properties of phantoms, which arises in turn due to discrepancies between DIS and SFDI measurements, which can be up to \(20\)%[34]. Finally, the mean of all \(18\) optical property maps is used to determine the absorption and reduced scattering coefficients. A 2D Gaussian filter with standard deviation of \(5\) pixels was applied to resultant optical property maps using _scipy.ndimage.gaussian_filter_. ### Dual wavelength imaging Multi-wavelength imaging is possible with this system as the fiber array consists of seven channels and only two are needed per wavelength to produce an interference pattern. Therefore, this system has the potential to explore up to tri-wavelength simultaneous illumination. We imaged three phantoms with \(660\) nm projection only, then \(515\) nm projection only, and finally with \(660\) nm and \(515\) nm projected simultaneously. We perform dual-wavelength imaging by illuminating channels \(1\&7\) with \(660\) nm and channels \(2\&5\) with \(515\) nm, producing spatial frequency patterns of \(0.3\) mm\({}^{-1}\) and \(0.2\) mm\({}^{-1}\) respectively at a \(50\) mm working distance. A video is captured of both illumination patterns simultaneously, and analysis is carried out by extracting the red and green channels from the video capture. Following the same process in Sect 2.2, fringes of interest are selected and optical properties calculated. Expanding the existing system to tri-colour would be possible either by adding an additional laser of, say \(\sim 450\) nm, to two available illumination channels will produce a spatial frequency of \(0.22\) mm\({}^{-1}\), and could be analysed from the blue channel of the captured video. Multi-wavelength imaging would probe different depths and could be used to image optical properties of layered material. 
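To make the processing chain concrete, the sketch below shows the look-up-table mechanics: tabulate diffuse reflectance pairs over a grid of optical properties, invert each pixel by nearest neighbour, then smooth with the same `scipy` Gaussian filter and standard deviation as above. The forward model here is a smooth dummy stand-in so the example runs end to end (it is _not_ the Diffusion Approximation used in this work), and the grids and reflectance values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_model(mua, musp, fx):
    # Dummy stand-in for the real forward model; the article uses a look-up
    # table generated from the Diffusion Approximation (or Monte Carlo).
    return np.exp(-mua * (3.0 + 20.0 * fx)) * musp / (musp + 1.0 + 10.0 * fx)

def build_lut(mua_grid, musp_grid, fx_ac):
    # Rows of (R_d at DC, R_d at fx_ac, mu_a, mu_s') over the property grid
    rows = [(forward_model(a, s, 0.0), forward_model(a, s, fx_ac), a, s)
            for a in mua_grid for s in musp_grid]
    return np.array(rows)

def invert_pixel(rd_dc, rd_ac, lut):
    # Nearest-neighbour look-up-table inversion for one pixel
    i = np.argmin((lut[:, 0] - rd_dc) ** 2 + (lut[:, 1] - rd_ac) ** 2)
    return lut[i, 2], lut[i, 3]

lut = build_lut(np.linspace(0.005, 0.02, 60),   # mu_a grid, mm^-1 (phantom range)
                np.linspace(0.4, 1.1, 60),      # mu_s' grid, mm^-1
                fx_ac=0.3)                      # AC spatial frequency, mm^-1

mua, musp = invert_pixel(0.45, 0.15, lut)       # made-up calibrated reflectances
print(f"recovered mu_a = {mua:.4f} mm^-1, mu_s' = {musp:.3f} mm^-1")

# Smoothing of a recovered map, as described above (sigma = 5 pixels):
noisy_map = mua + 0.001 * np.random.default_rng(0).standard_normal((320, 320))
mua_map = gaussian_filter(noisy_map, sigma=5)
```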
## 3 Results ### Projector performance The expected spatial frequency of the projected illumination pattern is comparable to the desired spatial frequency with \(12\)% and \(7\)% error for \(660\) nm and \(515\) nm respectively. Some channels produce clearer interference patterns than others, due to cross talk between fibres. This also results in the interference pattern from some channels being more stable than others in time, with inter ference patterns being stable for \(<1\) s under typical operating conditions, but \(>10\) s if the fibers are kept still. Through imaging a resolution target (R3L3S1N - Negative 1951 USAF Test Target, 3" x 3", _Thorlabs_, UK), we determined the resolution of the imaging system to be \(0.793\) lp/mm at a working distance of \(50\) mm[35], shown in Fig 4 a. Fig 4 b shows the raw performance of the projection fiber. ### Comparing ultra-miniature SFDI system to bench top SFDI system We then compared optical property measurements from our conventional bench top SFDI system to our ultra-miniature SFDI system. The results are shown in Fig 5 a & b. We found that the average standard error in absorption and reduced scattering coefficients between the ultra-miniature system and the bench top system were \(15\)% and \(6\)% respectively. The bench top system images were filtered using a \(635\) nm filter and the ultra-miniature images used a laser source at \(660\) nm laser. Therefore, the \(15\)% error in absorption may be largely accounted for by the expected \(14\)% Figure 4: Raw performance of camera and projector (a) Image of USAF target captured with mini camera module taken with room lights on (b) image captured with mini camera module of dual wavelength projection from fiber tip showing extracted red and green channels respectively. difference in optical properties due to the wavelength shift[32]. ### Imaging typical gastrointestinal conditions with ultra-miniature SFDI system We fabricated two phantoms to simulate gastrointestinal tissue states: one with optical properties mimicking squamous cell carcinoma, and the second optical properties mimicking healthy oesophageal tissue, and placed them side by side. SFDI imaging of this sample was then performed at 660nm, with the resulting optical property maps shown in Fig 6 c & f, and resultant optical property maps with filtering applied shown in Fig 6 d & g. ### Dual-wavelength imaging Finally, we characterized the performance across the two wavelengths. We found that the recovered optical properties varied by \(\leq 10\%\) when the two wavelengths are measured simultaneously, compared to measuring them sequentially. This demonstrates the capability of the system to image optical properties at two wavelengths simultaneously with relatively low cross-coupling. Figure 5: Comparison of bench top SFDI system and ultra miniature system: (a) absorption coefficient and (b) reduced scattering coefficient measured from bench top system (\(x\) axis) and miniature system (\(y\) axis). Error bars represent the standard deviation across the image We then imaged two phantoms with different optical properties placed adjacent to one another, one mimicking the optical properties of squamous cell carcinoma and the other mimicking the optical properties of healthy oesophageal tissue. The results are shown in Fig 7 (a-l). The difference in optical properties is visible from both the red and green channels. 
The optical properties measured from the red and green channels are not expected to be the same, as the phantom properties shift with wavelength [32]. We expect the phantom optical properties measured from the green channel to be higher than the phantom optical properties measured from the red channel.

Figure 6: Imaging a phantom simulating oesophageal tissue at 660nm: (a) white light image of two phantoms with different optical properties side by side (b) expected and (c) measured absorption coefficient of phantoms (d) measured absorption coefficient with smoothing filter applied (e) expected and (f) measured reduced scattering coefficient of phantoms (g) measured reduced scattering coefficient with smoothing filter applied. Expected optical properties are mean of the individual phantoms measured in bench top SFDI system.

## 4 Discussion

We have developed an ultra-miniature SFDI system and shown its capability to quantitatively image differences in optical properties of typical gastrointestinal conditions simulated in tissue-mimicking phantoms, providing enhanced contrast. It is sufficiently small to fit in the instrument channel of a standard colonoscope (\(<3\) mm). This work could therefore form the basis of new devices suitable for cost-effective endoscopic deployment for screening of gastrointestinal cancers. This work has limitations that need further investigation before clinical translation. The first limitation is the choice of wavelengths, which in these experiments were \(660\) and \(515\) nm. By evaluating the absorption coefficient at two wavelengths, tissue information such as chromophore concentration can be determined. Oxyhaemoglobin (HbO2) and deoxyhaemoglobin (Hb) are important tissue optical properties because they can be used to detect perfusion, which enables differentiation between malignant and benign tumours [36], though wavelengths of \(670\) and \(850\) nm are more commonly used [9].

Figure 7: Optical properties measured from dual-wavelength imaging experiment: (a) expected absorption coefficient from red channel (b) measured absorption coefficient from red channel with (c) filtering applied (d) expected absorption coefficient from green channel (e) measured absorption coefficient from green channel with (f) filtering applied. (g) expected reduced scattering coefficient from red channel (h) measured reduced scattering coefficient from red channel with (i) filtering applied (j) expected reduced scattering coefficient from green channel (k) measured reduced scattering coefficient from green channel with (l) filtering applied.

Our system has two constraints which make it challenging to extend to the NIR, e.g. \(850\) nm. First, the micro camera module has an IR filter that blocks light in this range, but future versions may remove this. Secondly, the fiber array was designed for \(660\) nm, and is therefore very lossy when using an \(850\) nm laser, with \(<1\)% efficiency. In future, a fiber array could be designed to operate successfully at both \(660\) and \(850\) nm: indeed fiber arrays with low coupling between cores that operate well into the NIR (\(1550\) nm) are routinely used in telecommunications [37]. The second limitation is the need for real time operation for clinical application. In our system, the projected spatial frequency pattern often cycles through a period, giving the required 3 phases, in a short period of time (\(<1\) s) and these can be suitably captured by a camera operating at \(10\) fps.
Though this gives an effective SFDI frame rate of at most 3.3 fps, faster frame-rate cameras could likely improve this: \(>100\) fps cameras are widely available. However, the phase tracking algorithm is currently relatively slow (several minutes), so does not allow for real-time operation. This could be addressed by implementing the algorithm on a fast GPU that processes images as they arrive. Alternatively, images with non-optimal phases could be used for sinusoid fitting instead of waiting for 3 equispaced phases [38]. The third limitation is image quality, which is somewhat reduced by the non-ideal illumination patterns produced by the fiber array. The image quality here could be improved by using AI [39] or building custom LUTs based on non-ideal projection patterns [30] Further miniaturisation of the device could look at the use of metasurfaces for polarisers on the fiber tip [40], various fiber tip filters to image different reflected wavelengths [41], or patterned surface to produce a concentric circle illumination pattern required for wide-field imaging inside tubular lumen. ## 5 Conclusion We have shown the capability of an ultra-miniature (\(3\) mm diameter) SFDI system to detect quantifiable variances in absorption and reduced scattering coefficients in tissue mimicking phantoms with errors of \(15\)% and \(6\)% respectively, compared to a conventional bench-top SFDI system. Our device has the capability to project two wavelengths simultaneously, enabling extraction of additional properties such as tissue chromophore information. We fabricated tissue-mimicking phantoms simulating typical gastrointestinal condition of squamous cell carcinoma adjacent to healthy oesophageal tissue, where the absorption coefficient of squamous cell carcinoma is much greater than that of healthy tissue and the reduced scattering coefficient is lower. We have shown the capability of our system to image this variation at both one and two wavelengths simultaneously, providing enhanced contrast between the two tissue types. We envisage this system could be used for cost-effective endoscopic screening of gastrointestinal cancers. ## Disclosures The authors declare no conflict of interest ### Acknowledgements The authors acknowledge support from a UKRI Future Leaders Fellowship (MR/T041951/1) and an ESPRC Ph.D. Studentship (2268555). The data presented in this study are available from the the following source: [DOI to be inserted later].
2302.00935
Policy Expansion for Bridging Offline-to-Online Reinforcement Learning
Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies by leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set. We then expand the policy set with another policy which will be responsible for further learning. The two policies will be composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, thus mitigating the potential issues such as destroying the useful behaviors of the offline policy in the initial stage of online learning while allowing the offline policy participate in the exploration naturally in an adaptive manner. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks and the results demonstrate the effectiveness of the proposed approach.
Haichao Zhang, Wei Xu, Haonan Yu
2023-02-02T08:25:12Z
http://arxiv.org/abs/2302.00935v3
# Policy Expansion for Bridging Offline-to-Online Reinforcement Learning ###### Abstract Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies by leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set. We then expand the policy set with another policy which will be responsible for further learning. The two policies will be composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, thus mitigating the potential issues such as destroying the useful behaviors of the offline policy in the initial stage of online learning while allowing the offline policy participate in the exploration naturally in an adaptive manner. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks and the results demonstrate the effectiveness of the proposed approach. Code is available: [https://github.com/Haichao-Zhang/PEX](https://github.com/Haichao-Zhang/PEX). ## 1 Introduction Reinforcement learning (RL) has shown great potential in various fields, reaching or even surpassing human-level performances on many tasks (e.g. Mnih et al., 2015; Silver et al., 2017; Schrittwieser et al., 2019; Tsividis et al., 2021). However, since the policy is learned from scratch for a given task in the standard setting, the number of samples required by RL for successfully solving a task is usually large, which limits its applicability in many practical scenarios such as robotics, where physical interaction and data collection has a non-trivial cost. In many cases, there is a good amount of offline data that has already been available (Kober et al., 2013; Rastgoftar et al., 2018; Cabi et al., 2019), e.g., collected during previous iterations of experiments or from human (e.g. for the task of driving). Instead of ab initio learning as in the common RL setting, how to effectively leverage the already available offline data for helping with online policy learning is an interesting and open problem (Vecerik et al., 2017; Hester et al., 2018; Nair et al., 2018). Offline RL is an active recent direction that aims to learn a policy by purely using the offline data, without any further online interactions (Fujimoto et al., 2019; Kumar et al., 2020; Fujimoto & Gu, 2021; Levine et al., 2020; Ghosh et al., 2022; Chen et al., 2021; Janner et al., 2021; Yang et al., 2021; Lu et al., 2022; Zheng et al., 2022). It holds the promise of learning from suboptimal data and improving over the behavior policy that generates the dataset (Kumar et al., 2022), but its performance could still be limited because of its full reliance on the provided offline data. To benefit from further online learning, one possible way is to pre-train with offline RL, and warm start the policy of an online RL algorithm to help with learning and exploration when learning online. 
While this pre-training + fine-tuning paradigm is natural and intuitive, and has received great success in many fields like computer vision (Ge & Yu, 2017; Kornblith et al., 2019) and natural language processing (Devlin et al., 2018; Radford & Narasimhan, 2018; Brown et al., 2020), it is less widely used in RL. Many early attempts in RL community report a number of negative results along this direction. For example, it has been observed that initializing the policy with offline pre-training and then fine-tuning the policy with standard online RL algorithms (e.g. SAC (Haarnoja et al., 2018)) sometimes suffers from non-recoverable performance drop under certain settings (Nair et al., 2020; Uchendu et al., 2022), potentially due to the distribution shift between offline and online stages and the change of learning dynamics because of the algorithmic switch. Another possible way is to use the same offline RL algorithm for online learning. However, it has been observed that standard offline RL methods generally are not effective in fine-tuning with online data, due to reasons such as conservativeness of the method (Nair et al., 2020). Some recent works in offline RL also start to focus on the offline-pre-training + online fine-tuning paradigm (Nair et al., 2020; Kostrikov et al., 2022). For this purpose, they share the common philosophy of designing an RL algorithm that is suitable for both offline and online phases. Because of the unified algorithm across phases, the network parameters (including those for both critics and actor) trained in the offline phase can be reused for further learning in the online phase. Our work shares the same objective of designing effective offline-to-online training schemes. However, we take a different perspective by focusing on how to bridge offline-online learning, and not on developing yet another offline or online RL method, which is orthogonal to the focus of this work. We will illustrate the idea concretely by instantiating our proposed scheme by applying it on existing RL algorithms (Kostrikov et al., 2022; Haarnoja et al., 2018). The contributions of this work are: * we highlight the value of _properly connecting_ existing offline and online RL methods in order to enjoy the best of both worlds, a perspective that is alternative and orthogonal to developing completely new RL algorithms; * we propose a simple scheme termed as _policy expansion_ for bridging offline and online reinforcement learning. The proposed approach is not only able to preserve the behavior learned in the offline stage, but can also leverage it adaptively during online exploration and along the process of learning; * we verify the effectiveness of the proposed approach by conducting extensive experiments on various tasks and settings, with comparison to a number of baseline methods. ## 2 Preliminaries We briefly review some related basics in this section, first on model-free RL for online policy learning, and then on policy learning from offline dataset. ### Online Reinforcement Learning Standard model-free RL methods learn a policy that maps the current state \(s\) to a distribution of action \(a\) as \(\pi(s)\). The policy is typically modeled with a neural network \(\pi_{\theta}(s)\) with \(\theta\) denoting the learnable parameters. To train this policy, there are different approaches including on-policy (Sutton et al., 2000; Schulman et al., 2017) and off-policy RL methods (Lillicrap et al., 2016; Haarnoja et al., 2018; Fujimoto et al., 2018; Zhang et al., 2022). 
In this work, we mainly focus on off-policy RL for online learning because of its higher sample efficiency. Standard off-policy RL methods rely on the state-action value function \(Q(s,a)\) learned with TD-learning: \[Q(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim T(s,a),a^{\prime}\sim\pi_{\theta}(s^{\prime})}\big[Q(s^{\prime},a^{\prime})\big],\] where \(T(s,a)\) denotes the dynamics function and \(r(s,a)\) the reward. \(\gamma\in(0,1)\) is a discount factor. By definition, \(Q(s,a)\) represents the accumulated discounted future reward starting from \(s\), taking action \(a\), and then following policy \(\pi_{\theta}\) thereafter. The optimization of \(\theta\) is achieved by maximizing the following function: \[\max_{\theta}\mathbb{E}_{s\sim\mathcal{D}}\mathbb{E}_{a\sim\pi_{\theta}}Q(s,a), \tag{1}\] where \(\mathcal{D}\) denotes the replay buffer for storing online trajectories. In the typical RL setting, learning is conducted from scratch by initializing all parameters randomly and interacting with the world with a random policy. \(Q(s,a)\) can be implemented as a neural network \(Q_{\phi}(s,a)\) with parameter \(\phi\).

### 2.2 Policy Learning from Offline Dataset

Policy learning from offline datasets has been investigated from different perspectives. Given expert-level demonstration data, behavior cloning (BC) (Pomerleau, 1988; Bain & Sammut, 1996) is an effective approach for offline policy learning because of its simplicity and effectiveness. In fact, in some recent work, BC has been shown to perform competitively with some offline RL methods (Fujimoto and Gu, 2021; Chen et al., 2021). Given a dataset \(\mathcal{D}_{\text{offline}}=\{(s_{i},a_{i})\}\) consisting of expert state-action pairs \((s_{i},a_{i})\), BC trains the policy with maximum likelihood over the data: \[\max_{\theta}\mathbb{E}_{(s,a)\sim\mathcal{D}_{\text{offline}}}\log\pi_{\theta}(a|s). \tag{2}\] Although BC has the benefit of reducing the policy learning task to an ordinary supervised learning task, it suffers from the well-known distributional shift issue (Codevilla et al., 2019; Muller et al., 2005; de Haan et al., 2019; Wen et al., 2020). Another limitation is that BC has relatively strong requirements on the data quality, and is not good at learning from suboptimal data. Offline RL is a category of methods that are more suitable for policy learning from noisy and suboptimal offline data (Kumar et al., 2022). When focusing on offline learning only, the core challenge is how to address the extrapolation error due to querying the critic function with out-of-distribution actions (Fujimoto et al., 2019; Kumar et al., 2020). Common strategies include constraining the actions to be close to dataset actions (Fujimoto et al., 2019; Fujimoto and Gu, 2021), and constraining the critic to be conservative for out-of-distribution actions (Kumar et al., 2020). The recent implicit Q-learning (IQL) method (Kostrikov et al., 2022) addresses this issue by learning a value network to match an expectile of the critic network, thus avoiding querying the critic with actions not in the offline dataset. For policy update, IQL uses a weighted BC formulation \[\max_{\theta}\mathbb{E}_{(s,a)\sim\mathcal{D}_{\text{offline}}}w(s,a)\cdot\log\pi_{\theta}(a|s), \tag{3}\] where \(w(s,a)\) denotes a data-dependent weight, typically calculated based on the estimated advantages (Nair et al., 2020; Xu et al., 2022; Kostrikov et al., 2022).
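As a concrete sketch of this weighted-BC update (Eq. 3), the snippet below uses the exponential-advantage weight \(w=\exp((Q-V)/\beta)\) common to AWR/IQL-style policy extraction. The `policy`/`q_net`/`v_net` interfaces, the batch keys, \(\beta\) and the clipping value are illustrative assumptions rather than the exact choices of any of the cited methods.

```python
import torch

def weighted_bc_loss(policy, q_net, v_net, batch, beta=3.0, w_max=100.0):
    # Advantage-weighted behavior cloning (Eq. 3) with w(s, a) = exp((Q - V) / beta)
    s, a = batch["obs"], batch["act"]
    with torch.no_grad():
        advantage = q_net(s, a) - v_net(s)
        w = torch.clamp(torch.exp(advantage / beta), max=w_max)
    log_prob = policy(s).log_prob(a).sum(-1)   # policy(s) is assumed to return a
    return -(w * log_prob).mean()              # torch distribution over actions
```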
## 3 Offline and Online RL Revisited: Connections and Gaps

**Connections between Offline and Online RL.** Offline RL and online RL are closely connected in many aspects. Historically, many offline RL approaches branched off from off-policy RL algorithms and adapted them to the fully offline setting (Fujimoto et al., 2019; Fujimoto and Gu, 2021). Algorithms more specific to the offline setting were then further developed, motivated by the challenges residing in the offline setting (Levine et al., 2020; Kumar et al., 2020). Besides algorithmic connections, offline and online RL are also complementary to each other in terms of strengths and weaknesses. Offline RL is sample efficient since no online interactions are required, but the performance is bounded by the fixed dataset. Online RL enjoys more opportunities for performance improvement, but is comparatively much less sample efficient. Because of the connections and complementary strengths, instead of treating them as two isolated topics, it is more natural to connect both in pursuit of a performant policy in practice.

**The Direct Offline-Online Approach.** Because of the above-mentioned connections, it is tempting to directly use the same algorithm (and thus the same network architectures as well) for both phases. Unfortunately, this is ineffective in _either direction_: either directly using offline RL algorithms for online learning (_forward_ direction), or directly using online RL algorithms for offline learning (_reverse_ direction). The reverse direction has been explored extensively in the offline RL community. The current wisdom is that instead of directly using an existing online RL algorithm (e.g. TD3 (Fujimoto et al., 2018)), special treatments need to be incorporated into the algorithm (e.g. incorporating BC into TD3 (Fujimoto and Gu, 2021)) for handling the challenges arising in offline learning, due to issues such as querying the critic function with out-of-distribution actions. In the _forward_ direction, as noted in previous work (Nair et al., 2020), it is exceptionally difficult to first train a policy using offline data and then further improve it using online RL. There are some efforts in the literature on directly transferring the parameters learned offline for online fine-tuning, _i.e._, by initializing the policy in Eqn. (1) with parameters learned offline. This scheme is illustrated in Figure 1 as the _Direct_ approach for Offline-to-Online RL. While simple, this approach has several potential issues as noted in the literature. For example, one common issue is that the behavior of the offline policy can be compromised or even destroyed in the initial phase of online training, e.g. because of noisy policy-update gradients due to cold-start learning of the critic network (Uchendu et al., 2022) (as in the case of reward-free pre-training) or the distribution shift between offline and online datasets (Lee et al., 2021). Another issue is the conservativeness of offline RL algorithms. While this is a desirable feature when considering only offline training, it is not preferred for online learning, where exploration is valuable for further improvement (Rezaeifar et al., 2022). This is a phenomenon that is commonly observed and reported in the literature (Lee et al., 2021; Campos et al., 2021; Uchendu et al., 2022).

## 4 Bridging Offline and Online RL via Policy Expansion

In this section, we will introduce a simple scheme called Policy Expansion for bridging offline and online training.
It is worthwhile to note that the proposed scheme is orthogonal to the specific offline/online RL algorithms and is compatible with different value-based offline and online algorithms. The final performance of such a combination may vary depending on the selection of methods.

### Policy Expansion and Adaptive Composition

**Policy Expansion.** To mitigate the above-mentioned issues, we propose an alternative scheme that can be readily combined with existing algorithms. The proposed approach is illustrated in Figure 1. Given a policy \(\pi_{\beta}\) obtained from the offline training phase, instead of directly fine-tuning its parameters, we freeze \(\pi_{\beta}\) and add it into a policy set \(\Pi=[\pi_{\beta}]\). To enable further learning, instead of directly modifying \(\pi_{\beta}\) as in the _Direct_ method, which has the potential of destroying useful behaviors learned offline, we expand the policy set \(\Pi\) with another learnable policy \(\pi_{\theta}\) as \[\Pi=[\pi_{\beta},\pi_{\theta}] \tag{4}\] where the newly added \(\pi_{\theta}\) is responsible for further performance improvement during online training. We refer to this type of policy construction as Policy Expansion (PEX). Intuitively, the behavior of the offline policy \(\pi_{\beta}\) is free from being negatively impacted, while the newly added policy can be updated. The policies in the policy set \(\Pi\) all get involved in exploration and learning in a collaborative and adaptive manner, as detailed in the following.

**Adaptive Policy Composition.** The policies in the policy set \(\Pi\) together form a single composite policy \(\tilde{\pi}\), which is used in both exploration and learning. More specifically, given the current state \(s\), we first sample an action from each member of the policy set \(\Pi\) and form a set of action proposals \(\mathbb{A}=\{a_{i}\!\sim\!\pi_{i}(s)|\pi_{i}\in\Pi\}\). Then all the action proposals are taken into consideration, and they are selected with a probability related to their potential utility (e.g. value). For example, we can compute their values at the current state \(\mathbf{Q}_{\phi}\!\!=\!\![Q_{\phi}(s,a_{i})|a_{i}\in\mathbb{A}]\in\mathbb{R}^{K}\), with \(K\) denoting the cardinality of \(\Pi\) (here \(K\!=\!2\)), and construct a categorical distribution for selecting the final action: \[P_{\mathbf{w}}[i]=\frac{\exp(Q_{\phi}(s,a_{i})/\alpha)}{\sum_{j}\exp(Q_{\phi}( s,a_{j})/\alpha)},\quad\forall i\in\{1,\cdots,K\} \tag{5}\] where \(\alpha\) is a temperature. We can then sample \(\mathbf{w}\sim P_{\mathbf{w}}\) to decide which proposed action will be used during rollouts for interacting with the environment. Using values for policy composition has been explored in different contexts in the literature (Yu et al., 2021; Shah et al., 2022).

Figure 1: **Illustration of Different Training Schemes. Offline training and online RL have been developed within their own training stages. The Direct Offline-Online learning approach continues with the online training stage after the offline stage is finished, updating the same policy network. The proposed Policy Expansion approach bridges offline and online training by retaining the policy after offline learning (\(\pi_{\beta}\)), and expanding the policy set with another learnable policy (\(\pi_{\theta}\)) for capturing further performance improvements.
The two policies both participate in interactions with the environment and in learning in an adaptive fashion.**

Conceptually, the composite policy \(\tilde{\pi}\) can be represented as follows: \[\tilde{\pi}(a|s)=[\delta_{a\sim\pi_{\beta}(s)},\delta_{a\sim\pi_{\theta}(s)}] \mathbf{w},\quad\mathbf{w}\sim P_{\mathbf{w}} \tag{6}\] where \(\mathbf{w}\in\mathbb{R}^{K}\) is a one-hot vector indicating the policy that is selected for the current state \(s\), and \(\delta_{a\sim\pi}\) denotes the Dirac delta distribution centered at the action \(a\) sampled from \(\pi\). By allowing only the newly added policy (\(\pi_{\theta}\) in this case) to be fine-tuned while freezing all others (\(\pi_{\beta}\)), we can avoid the problem of compromising (e.g. destroying or forgetting) the behavior of the offline policy. At the same time, we retain the advantage of adaptiveness, in the sense of allowing new abilities to be learned. From this perspective, the policy expansion plays the role of bridging the offline and online learning phases, while mitigating commonly encountered issues. It is interesting to note that a similar compositional form of policy has appeared in DAgger (Ross et al., 2011), although with a uniform weight across states and in the different context of imitation learning. Here our compositional weight is state-adaptive and the composite policy is used for bridging offline-to-online reinforcement learning. PEX has several advantages compared to direct offline-online learning (illustrated in Figure 1):

1. _offline policy preservation_: it retains the useful behaviors learned during the offline training phase by retaining the policy and avoiding it being destroyed in the initial online training phase;
2. _flexibility in policy form_: the offline policy does not need to be of the same form as the online policy (e.g. the same network structure) as in the direct offline-online approach, offering more flexibility in design;
3. _adaptive behavior selection_: both the offline policy and the online learning policy are used in interacting with the environment, and they are involved in an adaptive manner, e.g., according to their respective expertise in handling different states.
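To make the adaptive composition of Eqns. (5)-(6) concrete, a minimal Python sketch of the composite action selection is given below; the function names and the way the offline policy is frozen are illustrative assumptions rather than details of the paper's implementation.

```python
# Sketch of the composite policy of Eqns. (5)-(6): sample one action proposal per
# member policy, score the proposals with the critic, and pick one proposal via a
# Boltzmann (softmax) distribution over the scores. (Illustrative names only.)
import torch


def pex_select_action(state, policies, critic, alpha: float = 1.0):
    """state: tensor of shape (1, state_dim); policies: [pi_beta, pi_theta]."""
    proposals = [pi(state).sample() for pi in policies]              # one action per policy
    q_values = torch.stack([critic(state, a).squeeze(-1) for a in proposals], dim=-1)
    probs = torch.softmax(q_values / alpha, dim=-1)                  # Eqn. (5)
    idx = torch.distributions.Categorical(probs).sample()           # w ~ P_w, Eqn. (6)
    return proposals[idx.item()]


# The offline policy pi_beta stays frozen during online training, e.g. by
# excluding its parameters from the optimizer:
#   for p in pi_beta.parameters():
#       p.requires_grad_(False)
```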
```
Input: offline RL algorithm \(\{L^{Q_{\phi}}_{\text{offline}},L^{\pi_{\beta}}_{\text{offline}}\}\), online RL algorithm \(\{L^{Q_{\phi}}_{\text{online}},L^{\pi_{\theta}}_{\text{online}}\}\)
Initialize: network parameters \(\phi,\beta,\theta\), offline replay buffer \(\mathcal{D}_{\text{offline}}\)
while in offline training phase do
    % offline policy training using batches from the offline replay buffer \(\mathcal{D}_{\text{offline}}\)
    \(\phi\leftarrow\phi-\lambda_{Q}\nabla_{\phi}L^{Q}_{\text{offline}}(\phi)\),  \(\beta\leftarrow\beta-\lambda_{\pi}\nabla_{\beta}L^{\pi_{\beta}}_{\text{offline}}(\beta)\)
end while
Policy Expansion: \(\tilde{\pi}=[\pi_{\beta},\pi_{\theta}]\); transfer \(Q_{\phi}\)
while in online training phase do
    for each environment step do
        \(a_{t}\sim\tilde{\pi}(a_{t}|s_{t})\) according to (6),  \(s_{t+1}\sim T(s_{t+1}|s_{t},a_{t})\),  \(\mathcal{D}\leftarrow\mathcal{D}\cup\{(s_{t},a_{t},r(s_{t},a_{t}),s_{t+1})\}\)
    end for
    for each gradient step do
        % online training using batches from both \(\mathcal{D}_{\text{offline}}\) and \(\mathcal{D}\)
        \(\phi\leftarrow\phi-\lambda_{Q}\nabla_{\phi}L^{Q}_{\text{online}}(\phi)\),  \(\theta\leftarrow\theta-\lambda_{\pi}\nabla_{\theta}L^{\pi_{\theta}}_{\text{online}}(\theta)\)
    end for
end while
```
**Algorithm 1** PEX: Policy Expansion for Offline-to-Online RL

### Bridged Offline-Online Training with Policy Expansion

We focus on value-based RL algorithms for both stages in this work. For _offline training_, we run an offline RL algorithm (e.g. IQL) on the offline dataset to obtain the offline policy \(\pi_{\beta}\). Then we construct the policy expansion following Eqn. (4) before entering the online phase. We also transfer the Q function (critic) learned in the offline stage to the online stage for further learning, and we transfer the offline buffer to the online stage as an additional buffer, as shown in Figure 1. For _online training_, we use the policy adaptively composed from the policy set as in Eqn. (6), and then conduct online training by interleaving environment interaction and gradient updates. The newly collected transitions are stored in the online replay buffer \(\mathcal{D}\). For training, batches randomly sampled from both \(\mathcal{D}\) and \(\mathcal{D}_{\text{offline}}\) are used. The value loss and policy loss are calculated based on the losses corresponding to the chosen algorithm. The proposed scheme can be used together with different existing RL algorithms. The complete procedure is summarized in Algorithm 1, taking an offline RL and an online RL algorithm as inputs.

## 5 Related Work

**Pre-Training in RL.** A number of different directions have been explored in RL pre-training, including representation pre-training and policy pre-training. Note that for RL, pre-training can be either offline or online. Representative works include pre-training of the feature representation using standard representation learning methods (e.g. contrastive learning (Yang and Nachum, 2021)), dynamics learning-based representation learning (e.g. Schwarzer et al., 2021; Seo et al., 2022), unsupervised RL driven by intrinsic rewards in reward-free environments (e.g. Liu and Abbeel, 2021), or directly using ImageNet pre-training for visual RL tasks (e.g. Shah and Kumar, 2021; Yuan et al., 2022). Apart from representation pre-training, another category of work is on policy pre-training, with the goal of acquiring behaviors during the pre-training phase that are useful for the online phase.
When the downstream task is unknown, there are approaches for unsupervised pre-training, e.g., maximizing behavior diversity (Eysenbach et al., 2019) or converting the action space (Singh et al., 2021), with the hope of discovering some behaviors that are useful for the downstream task. When the offline and online tasks are more aligned, there are some early attempts at directly transferring policy parameters (Rajeswaran et al., 2018), based on the intuition that a policy initialized this way can produce more meaningful behaviors than randomly initialized networks. Some recent work focuses on behavior transfer, _i.e._, leveraging the offline-trained policy for exploration during online training (Campos et al., 2021; Uchendu et al., 2022). Our work falls into this latter category of methods. One notable difference compared to Campos et al. (2021); Uchendu et al. (2022) is that in the proposed approach, the offline policy is one part of the final policy, with its role determined adaptively.

**Data-Driven RL and Offline RL.** Training a policy by leveraging a large amount of existing data (_i.e._ data-driven RL) is a promising approach that is valuable to many real-world scenarios, where offline data is abundant. Offline RL is one active topic in this direction. The main motivation of offline RL is to train a policy by leveraging a pre-collected dataset, without requiring additional environmental interactions (Levine et al., 2020). Many research works in this direction focus on addressing the special challenges brought by offline learning, including the out-of-distribution value issue (Kumar et al., 2020). Common strategies include constraining the value (Kumar et al., 2020) to be small for out-of-distribution actions or constraining the policy to be close to the action distribution of the dataset (Fujimoto et al., 2019; Fujimoto and Gu, 2021). Recently, Kostrikov et al. (2022) proposed the implicit Q-learning (IQL) method as an alternative way to handle this issue. It learns a value function that predicts a certain expectile of the values for state-action pairs from the dataset, which can be used for computing the value target without querying the critic with out-of-distribution actions.

**Offline Training with Online Fine-tuning.** Combining offline data with online learning is an effective approach that has been demonstrated by several early attempts with demonstration data (Vecerik et al., 2017; Hester et al., 2018; Nair et al., 2018; Rajeswaran et al., 2018). Traditionally, offline RL methods have focused purely on the offline training setting. However, the offline-learned policy can be limited in performance given a fixed dataset. When further online interaction is allowed, it is natural to fine-tune the policy further with data collected online. This paradigm of two-stage policy training is related to the iterative schemes of interleaved policy learning and data collection used in imitation learning (Ross et al., 2011; Ross and Bagnell, 2012). Several different approaches have been explored towards online fine-tuning of an offline pre-trained policy, including balancing offline-online replay data (Lee et al., 2021), parameter transferring (Rajeswaran et al., 2018; Xie et al., 2021), policy regularization (Rudner et al., 2021; Tirumala et al., 2020) and guided exploration (Campos et al., 2021; Uchendu et al., 2022).
It has been observed that directly applying some offline RL methods does not benefit from online interactions (Nair et al., 2020; Uchendu et al., 2022), potentially due to the conservative nature of the offline policy (Nair et al., 2020). Based on this observation, there are some recent efforts on developing algorithms that are not only suitable for offline training, but can also leverage online interactions for further learning. Nair et al. (2020) show that an advantage-weighted form of actor-critic is suitable for this purpose. IQL (Kostrikov et al., 2022) also leverages a similar form for policy learning and shows that it can benefit from online fine-tuning. Our work falls into this category, and we provide an alternative approach for leveraging offline pre-training to help with online training.

## 6 Experiments

In this section, we first evaluate the effectiveness of the proposed approach on various types of benchmark tasks, with comparison to a number of baseline methods. Then we further show a number of extensions where the proposed approach can also be applied.

### Offline-to-Online RL Experiments

**Tasks and Settings.** We use the standard D4RL benchmark, which has been widely used in the offline RL community (Fu et al., 2020). For offline learning, we use the provided dataset for training. For online learning, we use the accompanying simulator for interaction and training. For the offline phase, 1M training steps are used. Then we run online fine-tuning for another 1M environment steps. Here we use IQL (Kostrikov et al., 2022) as the backbone algorithm for all methods listed below. The training is repeated with 5 different random seeds.

**Baselines.** We compare the proposed approach with the following baselines: _(i)_ **Offline**: offline training using IQL, without online fine-tuning; _(ii)_ **Scratch**: train IQL online from scratch, without offline pre-training; _(iii)_ **Buffer** (Vecerik et al., 2017): train IQL online without offline pre-training, but with access to the offline buffer during online training, _i.e._ using a buffer as \(\mathcal{D}\cup\mathcal{D}_{\text{offline}}\); _(iv)_ **Direct** (Kostrikov et al., 2022): a direct offline-to-online approach that directly transfers parameters trained offline to the online stage using IQL (Kostrikov et al., 2022), which is a recent and representative RL algorithm that shows state-of-the-art performance on offline RL while allowing online fine-tuning; _(v)_ **AWAC** (Nair et al., 2020): an approach that uses an advantage-weighted form of actor-critic for offline-to-online RL; _(vi)_ **off2On** (Lee et al., 2021): a recent offline-to-online RL method that uses an ensemble of offline-trained values and policies together with a balanced offline-online replay scheme; _(vii)_ **BT** (Campos et al., 2021): Behavior Transfer, an approach that leverages an offline-learned policy for exploration, where, once activated, the offline policy is used for exploration for a consecutive number of steps sampled from a distribution; _(viii)_ **JSRL** (Uchendu et al., 2022): Jump Start RL, which divides the rollout of a trajectory into two parts, using the offline-learned policy for the first part and then unrolling with the online learning policy for the rest of the trajectory. **PEX** denotes the proposed approach, which uses the same offline and online RL algorithms as **Direct**, and differs only in using _Policy Expansion_ for connecting the two stages. More resources are available on the project page.
Footnote 2: [https://sites.google.com/site/hchangl/projects/pex](https://sites.google.com/site/hchangl/projects/pex)

The return curves for all the tasks are shown in Figure 2. The aggregated return across all tasks is shown in Figure 3. The returns are first averaged across tasks and then across runs. It can be observed that all methods show some improvements after online training in general, compared to the initial performance before online training. **Scratch** has the lowest overall performance across all tasks. On the challenging sparse-reward antmaze tasks, Scratch cannot learn a meaningful policy at all. Buffer has better performance than Scratch when incorporating the offline buffer, indicating that the offline buffer has some benefits in helping with learning. Notably, on the previously zero-performance antmaze tasks, Buffer achieves reasonable performance with the help of the offline buffer, leading to a large overall improvement over Scratch (_c.f._ Figure 3). Direct (IQL-based) shows large improvements over Offline on average as shown in Figure 3, implying the benefits brought by the additional online training over pure offline training. off2On also shows a large improvement during fine-tuning and achieves strong overall performance (Figure 3). BT shows some improvements over IQL on some tasks such as antmaze-medium-play and antmaze-large-diverse, with an overall performance comparable to that of Direct. JSRL outperforms Direct and BT on some tasks (e.g. hopper-medium, hopper-medium-replay), potentially due to its different way of leveraging the offline policy, and its overall performance is similar to BT. The proposed PEX approach performs comparably to other baselines on some tasks while outperforming all baseline methods on most of the other tasks (_c.f._ Figure 2), and outperforms the baseline methods overall (_c.f._ Figure 3), demonstrating its effectiveness.

Figure 2: **Normalized Return Curves of different methods on benchmark tasks from D4RL (Fu et al., 2020). IQL is used as the backbone apart from the AWAC baseline (Nair et al., 2020).**

Figure 3: **Aggregated Return Curves across tasks (IQL-based).**

### Heterogeneous Offline-Online RL Bridging via Policy Expansion

We have shown the application of PEX to the case where both the offline and online algorithms are the same (referred to as PEX-IQL here since both are IQL) in Section 6.1. In this section, we further show the applicability of the proposed scheme in bridging heterogeneous RL algorithms, _i.e._ different RL methods are used for the offline and online stages. As an example, here we use IQL (Kostrikov et al., 2022) and SAC (Haarnoja et al., 2018) for the offline and online RL stages, respectively. We compare with Scratch (vanilla SAC (Haarnoja et al., 2018)), Buffer (SAC with an additional offline replay buffer), as well as Direct (directly transferring policy and critic parameters learned with offline IQL to SAC for further online learning). Again, PEX uses the same offline and online algorithms as in Direct but uses policy expansion instead of the direct transferring approach. The normalized return curves aggregated across tasks are shown in Figure 4. Individual return curves are shown in Appendix A.8. It can be observed that there is an overall improvement by simply applying PEX to the heterogeneous offline-online RL setting as well.

### Ablation Studies

We will inspect the impact of several factors on the performance of the proposed method in the sequel.
**Offline-Buffer.** This experiment investigates the impact of including the offline replay buffer in the online training stage. The results are shown in Figure 5(a). As can be observed, the inclusion of the offline replay buffer helps with the performance, but the performance drop caused by disabling the offline replay buffer is smaller than that caused by changing other algorithmic components (_c.f._ Figure 5(b)\(\sim\)(d)).

**Critic Transfer.** This experiment studies the impact of transferring the critic parameters trained offline to the online stage. As can be observed from Figure 5(b), disabling critic transfer greatly decreases the performance, both in terms of sample efficiency and final performance.

**Policy Transfer.** This experiment examines the impact of transferring the offline pre-trained policy to the online training stage. When policy transfer is disabled, there is no need to use policy expansion in the online training stage. The results are shown in Figure 5(c). It can be observed that there is a clear performance drop when policy transfer is disabled.

Figure 4: **Aggregated Return Curves across benchmark tasks (SAC-based).**

Figure 5: **Ablation Results on a number of factors. Orange curves correspond to ablation variants.**

**Offline Policy Freeze.** The offline-learned \(\pi_{\beta}\) is frozen during the online learning stage in **PEX**. This experiment investigates the impact of this factor. If disabled, \(\pi_{\beta}\) will be trained in the same way as \(\pi_{\theta}\) during online learning. It is observed from Figure 5(d) that freezing the offline policy is important: training with policy freezing disabled shows a clear performance drop. This is consistent with the intuition on the usefulness of offline policy preservation, which is one advantage of our approach. Another set of ablation results is deferred to Appendix A.4.

### Visualization and Analysis

**Policy Composition Probability.** Since a composite policy is involved in the proposed approach, it is interesting to inspect the participation of each member policy from the policy set \(\Pi\!=\!\{\pi_{\beta},\pi_{\theta}\}\) when interacting with the environment. We visualize the policy composition probability \(P_{\mathbf{w}}\) during the rollout of a trajectory after training, as shown in Figure 6. It can be observed that the composition probability for each member policy is state-adaptive and changes along the progress of a trajectory, implying that both policies contribute to the final policy in an adaptive manner.

**State Space Associations of Member Policies.** To get a better understanding of the role of the member policies in the policy set, we visualize the association between the states and the selected policy. For this purpose, we embed a set of states into a 2D space using t-SNE (van der Maaten and Hinton, 2008), and then visualize the association of the offline policy \(\pi_{\beta}\) and the newly expanded policy \(\pi_{\theta}\) to states in the projected space. States that select the offline policy \(\pi_{\beta}\) are colored blue and states that select \(\pi_{\theta}\) are colored red. It can be observed that \(\pi_{\beta}\) and \(\pi_{\theta}\) cover different parts of the state space, indicating that they have some complementary functionalities and are preferred differently at different states.
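A minimal sketch of how such a state-space association plot can be produced is given below; the variable names (`states`, `pi_beta`, `pi_theta`, `critic`) are assumptions, and the greedy preference used for coloring is a simplification of the sampled selection in Eqn. (5), used only for illustration.

```python
# Sketch of the state-space association visualization described above: embed the
# visited states with t-SNE and color each state by which member policy would be
# preferred there (greedy w.r.t. the critic, as a simplification for plotting).
# Assumes `states` is an (N, state_dim) numpy array and pi_beta/pi_theta/critic exist.
import numpy as np
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

states_t = torch.as_tensor(states, dtype=torch.float32)
with torch.no_grad():
    q_beta = critic(states_t, pi_beta(states_t).sample()).squeeze(-1)
    q_theta = critic(states_t, pi_theta(states_t).sample()).squeeze(-1)
prefers_theta = (q_theta > q_beta).cpu().numpy()

embedding = TSNE(n_components=2).fit_transform(states)
plt.scatter(embedding[~prefers_theta, 0], embedding[~prefers_theta, 1],
            c="blue", s=5, label="pi_beta selected")
plt.scatter(embedding[prefers_theta, 0], embedding[prefers_theta, 1],
            c="red", s=5, label="pi_theta selected")
plt.legend()
plt.show()
```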
## 7 Conclusions, Limitations and Future Work

We highlight the usefulness of properly connecting the offline and online stages of reinforcement learning to gain the benefits of both worlds, and present a policy expansion scheme as a step in this direction. This scheme is an instance of a direction orthogonal to developing completely new offline-to-online RL algorithms. The proposed approach is simple, can be combined with existing RL algorithms, and is illustrated with two different combinations in this work. Experiments demonstrate the effectiveness of the proposed approach on a number of benchmark tasks. While achieving promising performance, the proposed approach also has some limitations. One limitation is that the number of parameters grows with the number of policies in the policy set. While this might not be an issue when the policy set is small, as is the case in this work, it will be less parameter-efficient in the presence of a large policy set. One possible way to address this issue is to introduce a distillation stage (Rusu et al., 2016), consolidating the multiple policies into a network with a smaller number of parameters. Generalizing the proposed scheme to the case with a set of pre-trained skill policies is an interesting direction (Eysenbach et al., 2019; Shu et al., 2018). For offline learning, we have built upon the strong IQL method. It would be interesting to see how much we can gain by upgrading it with more recently developed offline RL methods together with different online methods. The idea of using a policy set itself can potentially be applied to other cases beyond offline-to-online RL. We leave the exploration of its generalization and application as interesting future work.

Figure 6: **Visualization of Policy Composition Probability during the rollout of one trajectory. The probability curves are smoothed for visualization purposes.**
2307.14146
Sporadic dualities from tensor deconfinement
In this paper we give a field theory explanation of two confining dualities that have been proposed in the literature based on exact results from supersymmetric localization. The first confining model under investigation is 4d $SU(N_c+1)$ SQCD with a conjugate rank-$2$ anti-symmetric tensor, $N_c+3$ anti-fundamentals, $2N_c$ fundamentals and a superpotential that couples the anti-symmetric tensor and the fundamentals. The second confining model studied here is $3d$ $\mathcal{N}=2$ $USp(4)$ gauge SQCD with two fundamentals, two rank-$2$ anti-symmetric tensors and vanishing superpotential. Here we prove that these models are confining by using the technique of deconfining the anti-symmetric tensors and then by flowing to the IR description by sequential dualities. As a bonus the analysis provides (alternative) proofs of the identities obtained from supersymmetric localization.
Antonio Amariti, Fabio Mantegazza, Davide Morgante
2023-07-26T12:21:33Z
http://arxiv.org/abs/2307.14146v1
# Sporadic dualities from tensor deconfinement ###### Abstract In this paper we give a field theory explanation of two confining dualities that have been proposed in the literature based on exact results from supersymmetric localization. The first confining model under investigation is 4d SU(\(N_{c}+1\)) SQCD with a conjugate rank-2 anti-symmetric tensor, \(N_{c}+3\) anti-fundamentals, \(2N_{c}\) fundamentals and a superpotential that couples the anti-symmetric tensor and the fundamentals. The second confining model studied here is \(3d\)\({\cal N}=2\) USp(4) gauge SQCD with two fundamentals, two rank-2 anti-symmetric tensors and vanishing superpotential. Here we prove that these models are confining by using the technique of deconfining the anti-symmetric tensors and then by flowing to the IR description by sequential dualities. As a bonus the analysis provides (alternative) proofs of the identities obtained from supersymmetric localization. ## 1 Introduction The low energy dynamics of UV free strongly coupled supersymmetric gauge theories can be often be simplified by the existence of infrared dualities. The dual descriptions are in general associated to (more) weakly coupled QFTs, described by a different set of fields and interactions that share in the IR the same correlation functions for the physically observable conserved currents of the original description. The prototypical example of such dualities is the electromagnetic duality and for this reason the two dual models are usually referred to as the electric and the magnetic phase. Restricting to cases with four supercharges the basic example of these dualities was discovered by Seiberg in [1] for SU(\(N_{c}\)) 4d SQCD with \(N_{f}>N_{c}+1\) flavors and vanishing superpotential. This duality has also a limiting case, where the magnetic description does not correspond to any gauge theory but to a WZ model consisting in a collection of mesons and baryons of the electric description, in addition to a (classical) constraint among them. In this case, corresponding to the choice \(N_{f}=N_{c}+1\), the electric gauge theory confines without breaking the chiral symmetry (i.e. s-confines [2]), and the magnetic theory describes the dynamics of the confined degrees of freedom, with a superpotential imposing the classical constraint on the moduli space. There is also another confining case, corresponding to SU(\(N_{c}\)) 4d SQCD with \(N_{f}=N_{c}\) flavors, where the low energy dynamics described by the mesons and the baryons requires a quantum constraint on the moduli space. Such constraint breaks the chiral symmetry and for this reason this case is referred to as confinement with chiral symmetry breaking. This idea of confinement as a limiting case of a supersymmetric duality was then extended to various generalizations of Seiberg duality. Furthermore a full classification of s-confining gauge theories with vanishing superpotential was worked out in [3] for theories with a single gauge group. In this classification there are many models that do not correspond to any limiting case of any known duality. Such models are characterized usually by the presence of matter fields in a rank-two tensor representation of the gauge group. 
Despite the fact that gauge theories of this type do not in general have a Seiberg-like dual description, it has been shown in [4] that the s-confining dualities can be derived using only Seiberg-(like) dualities thanks to the rank-2 tensor deconfining technique originally proposed in [5] and subsequently generalized in [6]. The technique consists of substituting a rank-2 tensor matter field with a bifundamental field charged also under another (auxiliary) confining gauge group, so as to recover the original description once this new gauge group confines. After deconfining the rank-2 tensors it has been possible to apply sequences of Seiberg dualities (see [7] for a general construction) and then to recover the confined phase proposed in [3], using only the s-confining dualities of \(\mathrm{SU}(N_{c})\) with \(N_{c}+1\) flavors of [1] and \(\mathrm{USp}(2N_{c})\) SQCD with \(2N_{c}+4\) fundamentals of [8]. This construction may require further refinements for models with a superpotential deformation, due to the possible presence of a Higgsing that partially or completely breaks the gauge group (see [9] for a general discussion). Recently new confining gauge theories have been obtained for 4d models with rank-2 tensors and non-vanishing superpotential [10]. Furthermore the deconfinement techniques have been applied to 3d \(\mathcal{N}=2\) gauge theories [11], where the zoo of confining gauge theories is richer, because of the presence of a dual photon and of a Coulomb branch. New confining dualities in this direction have been obtained in [12; 13]. In this paper we apply these techniques to a 4d and a 3d model that have been claimed to be confining because of integral identities in supersymmetric localization. We find a physical origin of the integral identities that allowed these new confining dualities to be stated, thus providing a field-theoretical explanation for them. The 4d duality under inspection corresponds to \(\mathrm{SU}(N_{c}+1)\) SQCD, with a rank-2 conjugate anti-symmetric tensor, \(N_{c}+3\) anti-fundamentals and \(2N_{c}\) fundamentals. This theory is claimed to be confining if a cubic superpotential between the anti-symmetric and the fundamentals is turned on. Such a claim was originally proposed in [14], based on the fact that the supersymmetric index1 on \(S^{3}\times S^{1}\)[15; 16] of this theory was computed exactly in [17]. The final result has a field theory interpretation representing the low energy description of the baryons and mesons of the \(\mathrm{SU}(N_{c}+1)\) gauge theory with the expected constraints from the truncation of the chiral ring and the moduli space. This duality has been referred to as the Spiridonov-Warnaar-Vartanov (SWV) duality in [18], where it has been used in the study of 4d compactifications of the 6d minimal (D,D) conformal matter theories on a punctured Riemann surface (see also [19]). Here we provide a physical derivation of the duality from the field theoretical perspective, by deconfining the rank-2 anti-symmetric tensor and sequentially dualizing the gauge groups. In the process we find that one of the steps requires a partial Higgsing, analogously to the analysis recently performed in [10] for similar 4d confining dualities. The partial Higgsing is triggered in our case by a \(\mathrm{USp}(2N_{c})\) gauge group with \(2N_{c}+2\) fundamentals, which confines breaking the chiral symmetry. Furthermore, following the various steps at the level of the supersymmetric index, we provide an alternative derivation of the identity of [17].
In the second part of the paper we study a 3d confining duality recently obtained in [20], corresponding to USp(4) with two rank-2 anti-symmetric tensors and two fundamentals. The existence of such a duality has been claimed by extending to the 3d bulk a boundary duality constructed from \(\mathcal{N}=(0,2)\) half-BPS boundary conditions in 3d \(\mathcal{N}=2\). Again we deconfine the two rank-2 anti-symmetric tensors and then provide the sequential dualities leading to the final description in terms of the gauge singlets of the original model. ## 2 The Spiridonov-Warnaar-Vartanov 4d duality In this section we derive the SWV duality from a physical approach, by deconfining a rank-2 conjugate anti-symmetric tensor with an auxiliary symplectic gauge group and then by sequentially applying IR dualities. Actually referring to the last step in such a sequence as a duality is improper, because, as we will see in the following, it corresponds to the case of a symplectic gauge theory that confines with a quantum constraint on the moduli space. The crucial aspect of this constraint is that it forces a Higgs mechanism on the leftover unitary gauge group, breaking it to a symplectic one, and assigning a superpotential mass term to some of the fields in the spectrum. This leads to the final step of the construction, where one is left with an s-confining gauge theory (namely USp\((2M)\) with \(2M+4\) fundamentals). After confining this theory we eventually find the expected WZ model, describing the magnetic phase of the SWV duality. The analysis is supported at each step by the relative (integral) identities matching the 4d supersymmetric index. On one hand this corroborates the validity of the results and on the other hand it provides an alternative derivation of the integral identity discovered in [17]. Let us start the analysis discussing the gauge theory that can be read from the Spiridonov-Warnaar identity [17]. It consists in SU\((N_{c}+1)\) SQCD with \(N_{c}+3\) anti-fundamentals \(Q_{1}\) and \(2N_{c}\) fundamentals \(Q_{2}\). In addition there is a rank-2 anti-symmetric conjugate tensor \(A\). This is a non-anomalous asymptotically free theory and it becomes confining if the superpotential deformation2 Footnote 2: In the rest of the paper the explicit contractions will be mostly understood. \[W_{ele}=Q_{1i}^{\ \alpha}J^{ij}_{2N_{c}}Q_{1j}^{\ \beta}A_{\overline{\alpha}, \overline{\beta}} \tag{1}\] is turned on. This deformation is relevant for any value of \(N_{c}\)[18] and it breaks the SU(\(2N_{c}\)) flavor symmetry group into USp(\(2N_{c}\)). The representations of the fields and their charges under the gauge and the flavor groups are summarized in the following \[\begin{array}{c|c||cccc}&\text{SU}(N_{c}+1)&\text{USp}(2N_{c})&\text{SU}(N_ {c}+3)&\text{U}(1)&\text{U}(1)_{R}\\ \hline Q_{1}&\overline{T}_{f}&1&T_{f}&1&0\\ Q_{2}&T_{f}&T_{f}&1&-\frac{N_{c}+3}{2}&1\\ A&\overline{T}_{A}&1&1&N_{c}+3&0\end{array} \tag{2}\] The \(S^{3}\times S^{1}\) supersymmetric index of this model has been explicitly computed [17]. 
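As a quick consistency check of the non-anomalous statement above (a check not written out in the text, but which follows from the charge assignments in (2) together with the standard \(\mathrm{SU}(N)\) cubic anomaly coefficients \(A(T_{f})=1\) and \(A(T_{A})=N-4\), with opposite signs for conjugate representations), the \(\mathrm{SU}(N_{c}+1)^{3}\) anomaly indeed cancels: \[\mathcal{A}_{\mathrm{SU}(N_{c}+1)^{3}}=\underbrace{-\big{(}(N_{c}+1)-4\big{)}}_{A}\;\underbrace{-\,(N_{c}+3)}_{(N_{c}+3)\,Q_{1}}\;\underbrace{+\,2N_{c}}_{2N_{c}\,Q_{2}}=0\,.\]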
The identity is \(I_{E}=I_{M}\) with \[\begin{split} I_{E}&=\frac{(p;p)_{\infty}^{N_{c}}(q;q)_{ \infty}^{N_{c}}}{(N_{c}+1)!}\int_{\mathbb{T}^{N_{c}}}\prod_{1\leq i<j\leq N_{ c}+1}\frac{\Gamma(Sz_{i}^{-1}z_{j}^{-1})}{\Gamma((z_{i}/z_{j})^{\pm 1})}\\ &\quad\times\prod_{j=1}^{N_{c}+1}\prod_{k=1}^{N_{c}}\Gamma(t_{k}z_{ j})\cdot\frac{\prod_{m=1}^{N_{c}+3}\Gamma(s_{m}z_{j}^{-1})}{\prod_{k=1}^{N_{c}} \Gamma(St_{k}z_{j}^{-1})}\frac{\text{d}z_{j}}{2\pi iz_{j}}\end{split} \tag{3}\] and \[I_{M}=\prod_{m=1}^{N_{c}+3}\prod_{k=1}^{N_{c}}\frac{\Gamma(s_{m}t_{k})}{\Gamma (Ss_{m}^{-1}t_{k})}\prod_{1\leq l<m\leq N_{c}+3}\Gamma(Ss_{l}^{-1}s_{m}^{-1}) \tag{4}\] with the constraint \(S=\prod_{m=1}^{N_{c}+3}s_{m}\) imposed on the charges. It is possible to read from this identity that there are two contributions arising from the meson \(M=Q_{1}Q_{2}\) and the baryon \(B=Q_{1}^{N_{c}+1}\). The field content allows a non-vanishing superpotential of the form [18] \[W_{mag}=M^{2}B \tag{5}\] In the rest of this section we provide the physical derivation of this confining duality. We start by adding a singlet \(\alpha\) to the electric theory, flipping the meson \(M\) through a superpotential \[\Delta W_{ele}=\alpha Q_{1}Q_{2} \tag{6}\] In this way the dual superpotential becomes \[W_{mag}=\alpha M+MB^{2} \tag{7}\] which vanishes once we compute the F-terms of the massive fields \(\alpha\) and \(M\). The next step consists of deconfining the field \(A\). We distinguish two cases, depending on the parity of \(N_{c}\). Let us study them separately.

### Deconfinement with odd \(N_{c}=2k+1\)

In this case we deconfine the rank-2 conjugate anti-symmetric tensor \(A\) of \(\mathrm{SU}(2k+2)\). We depict the model in Figure 1 in terms of a quiver gauge theory. The superpotential for the deconfined model is schematically \[W=(BQ_{2})^{2}+\alpha Q_{1}Q_{2}+BLC+\beta L^{2} \tag{8}\] where the field \(\beta\) corresponds to the \(2\times 2\) antisymmetric matrix, i.e. it is a singlet. The anti-symmetric tensor \(A\) is recovered by confining the \(\mathrm{USp}(2k)\) gauge node in terms of the field B, i.e. \(A\sim B^{2}\), where the contraction is done on the \(\mathrm{USp}(2k)\) indices. The next step consists of Seiberg duality on \(\mathrm{SU}(2k+2)\). This gauge group is self-dual and the quiver is represented in Figure 2. The superpotential of the dual theory is \[W=(M_{2})^{2}+\alpha M_{1}+M_{3}L+\beta L^{2}+M_{1}q_{1}q_{2}+M_{2}bq_{2}+M_{3 }bc+M_{4}q_{1}c \tag{9}\] where \(b,c,q_{1},q_{2}\) are the dual quarks and the mesons \(M_{1,2,3,4}\) are associated to the quarks of the previous phase through the dictionary \[M_{1}\longleftrightarrow Q_{1}Q_{2},\quad M_{2}\longleftrightarrow BQ_{2}, \quad M_{3}\longleftrightarrow BC,\quad M_{4}\longleftrightarrow Q_{1}C \tag{10}\]

Figure 1: Quiver description of the model after the deconfining of the \(\mathrm{SU}(2k+2)\) rank-2 conjugate anti-symmetric tensor A. Gauge groups are represented as circles while flavor nodes are represented with squares. Symplectic groups are depicted in blue and unitary groups are depicted in red.

By integrating out the massive fields this superpotential becomes \[W=(bq_{2})^{2}+\beta(bc)^{2}+M_{4}q_{1}c \tag{11}\] Then we consider the USp\((2k)\) gauge group with \(2k+2\) flavors. This gauge theory confines with a quantum constraint enforced on the moduli space. By considering the low energy dynamics we are left with a single gauge group SU\((2k+2)\) and the field content can be read from the quiver in Figure 3.
The superpotential becomes \[W=aq_{2}^{2}+\beta ac^{2}+M_{4}q_{1}c+\lambda(\text{Pf}(a)-\Lambda^{2k+2}) \tag{12}\] The last term in (12) enforces the quantum constraint on the moduli space through the Lagrange multiplier \(\lambda\).

Figure 2: Quiver obtained after Seiberg duality on SU\((2k+2)\). The rank of the dual gauge group is the same as above, but there are new mesonic degrees of freedom that modify the superpotential.

Figure 3: Quiver obtained after confining the USp\((2k)\) gauge node. The anti-symmetric field \(a\) gets a vev from the quantum constraint on the moduli space.

The constraint breaks the gauge symmetry to \(\text{USp}(2(k+1))\) and this Higgsing gives mass to \(Q_{2}\) as well. The leftover superpotential is \[W=\beta c^{2}+M_{4}q_{1}c \tag{13}\] This model s-confines and the final superpotential is \[W=\beta\gamma+M_{4}L+\text{Pf}\left(\begin{array}{cc}\gamma&L\\ -L^{T}&U\end{array}\right) \tag{14}\] where \(L,U\) and \(\gamma\) correspond to the \(\text{USp}(2k+2)\) contractions \(q_{1}c\), \(q_{1}^{2}\) and \(c^{2}\) respectively. Integrating out the massive fields we are left with just the meson \(L\) in the anti-symmetric representation of the flavor symmetry group \(\text{SU}(N_{c}+3)=\text{SU}(2k+4)\). This field indeed corresponds to the baryon \(B\) expected in the SWV duality. In order to connect with the WZ superpotential of the SWV duality (5) we have to flip the field \(\alpha\) introduced in (6). This turns off the field \(\alpha\) in the derivation and keeps the meson \(M_{1}\) massless in (9). The superpotential (11) then becomes \[W=(bq_{2})^{2}+\beta(bc)^{2}+M_{4}q_{1}c+M_{1}q_{1}q_{2} \tag{15}\] The other steps in the derivation are straightforward and in the final superpotential (14) there is a further contribution \(\Delta W\propto M_{1}^{2}U\). This term survives after the massive fields are integrated out and by the identification \(U\leftrightarrow B\) and \(M_{1}\leftrightarrow M\) we obtain exactly (5) as expected. This concludes the proof of the duality from the field theory analysis in the case with \(N_{c}=2k+1\). Before moving to \(N_{c}=2k\) it is instructive to reproduce the analysis using the \(S^{3}\times S^{1}\) supersymmetric index. In order to have a better physical intuition of the duality from localization we start by rewriting \(I_{E}\) and \(I_{M}\) by modifying the \(\text{USp}(2N_{c})\) fugacities as \(t\to t\sqrt{pq/S}\). This gives \[\begin{split} I_{E}&=\frac{(p,p)_{\infty}^{N_{c}}(q,q)_{\infty}^{N_{c}}}{(N_{c}+1)!}\int_{\mathbb{T}^{N_{c}}}\prod_{1\leq i<j\leq N _{c}+1}\frac{\Gamma(Sz_{i}^{-1}z_{j}^{-1};p,q)}{\Gamma(z_{i}z_{j}^{-1},z_{i}^{ -1}z_{j};p,q)}\\ &\times\prod_{j=1}^{N_{c}+1}\prod_{m=1}^{N_{c}+3}\Gamma(s_{m}z_{ j}^{-1};p,q)\prod_{k=1}^{N_{c}}\Gamma(\sqrt{\frac{pq}{S}}t_{k}^{\pm 1}z_{j};p,q) \prod_{j=1}^{N_{c}}\frac{\text{d}z_{j}}{2\pi iz_{j}},\end{split} \tag{16}\] and \[I_{M}=\prod_{k=1}^{N_{c}}\prod_{m=1}^{N_{c}+3}\Gamma(s_{m}t_{k}^{\pm 1}\sqrt{ \frac{pq}{S}};p,q)\prod_{1\leq l<m\leq N_{c}+3}\Gamma(Ss_{l}^{-1}s_{m}^{-1};p,q), \tag{17}\] again with the balancing condition \(S=\prod_{m=1}^{N_{c}+3}s_{m}\). Then we proceed to deconfine the rank-2 conjugate anti-symmetric tensors, to dualize the \(\text{SU}(2k+4)\) node and to integrate out the massive fields. These steps are done by using the integral identities collected in [14] (which are reproduced in appendix A) and the reflection equation for the elliptic gamma functions \(\Gamma_{e}(pq/x)\Gamma_{e}(x)=1\).
We skip these standard elementary steps and focus on the quiver described in Figure 4, where we also highlighted in blue the fugacity of each field in the \(S^{3}\times S^{1}\) supersymmetric index. The \(S^{3}\times S^{1}\) supersymmetric index for these models is given by formula \[I_{E}= \frac{(p,p)_{\infty}^{2k+1}(q,q)_{\infty}^{3k+1}}{2^{k}k!(3k+2)!} \Gamma(S^{k+1};p,q)\prod_{a=1}^{2k+4}\prod_{m=1,2}\Gamma(\sqrt{pqS^{k}}s_{a}y_ {m}^{-1})\] \[\times \int_{\mathbb{T}^{2k-1}}\int_{\mathbb{T}^{k}}\prod_{u=1}^{k}\frac {\mathrm{d}x_{u}}{2\pi ix_{u}}\prod_{i=1}^{2k+2}\frac{\mathrm{d}z_{i}}{2\pi iz _{i}}\prod_{1\leq u<v\leq k}\frac{1}{\Gamma(x_{u}^{\pm 1}x_{v}^{\pm 1};p,q)} \prod_{u=1}^{k}\frac{\prod_{i=1}^{2k+2}\Gamma(z_{i}x_{u}^{\pm 1};p,q)}{ \Gamma(x_{u}^{\pm 2};p,q)}\] \[\times \frac{\prod_{i=1}^{2k+2}\prod_{a=1}^{2k+4}\Gamma(\sqrt{S}s_{a}^{- 1}z_{i})\prod_{m=1,2}\Gamma(\sqrt{pq/S^{k+1}}z_{i}^{-1}y_{m})\prod_{k=1}^{2k+1 }\Gamma(\sqrt{pq}z_{i}^{-1}t_{k}^{\pm 1})}{\prod_{1\leq i<j\leq 2k+2}\Gamma(z_{i}z_{ j}^{-1},z_{i}^{-1}z_{j};p,q)}\] We then consider the change of variables \(z_{i}=e^{2\pi i\phi_{i}}\), where \(\phi_{i}\) are real and the balancing condition is \(\sum_{i=1}^{2k+2}\phi_{i}=0\). With such a change of variables we can substitute in the index the following terms \[\frac{(p,p)_{\infty}^{k}(q,q)_{\infty}^{k}}{2^{k}k!}\int_{\mathbb{ T}^{k}}\prod_{1\leq u<v\leq k}\frac{1}{\Gamma(x_{u}^{\pm 1}x_{v}^{\pm 1};p,q)} \prod_{u=1}^{k}\frac{\prod_{i=1}^{2k+2}\Gamma(e^{2\pi i\phi_{i}}x_{u}^{\pm 1 };p,q)}{\Gamma(x_{u}^{\pm 2};p,q)}\prod_{u=1}^{k}\frac{\mathrm{d}x_{u}}{2\pi ix_{u}}\] \[=\frac{1}{(p;p)_{\infty}^{k}(q,q)_{\infty}^{k}}\sum_{(\Phi_{1} \bigcup\Phi_{2})/S_{2}^{k}}\prod_{1\leq i<j\leq k+1}\Gamma(e^{2\pi i(\pm\tilde {\phi}_{i}\pm\tilde{\phi}_{j})};p,q)\sum_{S_{k+1}(\Phi_{2})}\prod_{i=1}^{k} \delta(\tilde{\phi}_{i}+\tilde{\phi}_{k+1+i}), \tag{2.19}\] Figure 4: \(\mathrm{SU}(2k+4)\times\mathrm{USp}(2k)\) quiver before confining the symplectic node. where we used the identity (A.6). This identity was derived in [21] and it represents the evaluation of the superconformal index for USp(\(2M\)) SQCD with \(2M+2\) fundamentals. The fact that the models confines with a quantum superpotential that breaks the chiral symmetry is reflected in the structure of the \(\delta\)-functions in (A.6). In this way (2.18) becomes \[\begin{split}& I_{E}=\Gamma(S^{k+1};p,q)\prod_{a=1}^{2k+4}\prod_{m=1,2}\Gamma(\sqrt{pqS^{k}}s_{a}y_{m}^{-1})\frac{(p,p)_{\infty}^{k+1}(q,q)_{ \infty}^{k+1}}{(2k+2)!}\int_{\mathbb{T}^{2k-1}}\prod_{i=1}^{2k+2}\frac{\text{ d}z_{i}}{2\pi iz_{i}}\\ &\frac{\prod_{i=1}^{2k+2}\prod_{a=1}^{2k+4}\Gamma(e^{2\pi i\phi_{i }}\frac{\sqrt{S}}{s_{a}})\prod_{m=1,2}\Gamma(\sqrt{\frac{pq}{S^{k+1}}}e^{-2\pi i \phi_{i}}y_{m})\prod_{k=1}^{2k+1}\Gamma(\sqrt{pq}e^{-2\pi i\phi_{i}}t_{k}^{ \pm 1})}{\prod_{1\leq i<j\leq 2k+2}\Gamma(e^{2\pi i(\phi_{i}-\phi_{j})},e^{2\pi i (-\phi_{i}+\phi_{j})};p,q)}\\ &\sum_{(\Phi_{1}\bigcup\Phi_{2})/S_{2}^{k}}\prod_{1\leq i<j\leq k +1}\Gamma(e^{2\pi i(\pm\tilde{\phi}_{i}\pm\tilde{\phi}_{j})};p,q)\sum_{S_{k+1} (\Phi_{2})}\prod_{i=1}^{k}\delta(\tilde{\phi}_{i}+\tilde{\phi}_{k+1+i}),\end{split} \tag{2.20}\] where \(\Phi_{1}=(\tilde{\phi}_{1},...,\tilde{\phi}_{k},\tilde{\phi}_{k+1}=\phi_{k+1})\) and \(\Phi_{2}=(\tilde{\phi}_{k+2},...,\tilde{\phi}_{2k+2})\). 
Using the constraints imposed by the balancing condition \(\sum_{i=1}^{2k+2}\phi_{i}=0\), the delta functions and the reflection equation we can simplify (2.20) to \[\begin{split} I_{E}=&\Gamma(S^{k+1};p,q)\prod_{a=1}^ {2k+4}\prod_{m=1,2}\Gamma(\sqrt{pqS^{k}}s_{a}y_{m}^{-1})\frac{(p,p)_{\infty}^{ k+1}(q,q)_{\infty}^{k+1}}{(k+1)!\cdot 2^{k+1}}\int_{\mathbb{T}}\prod_{i=1}^{k+1}\frac{ \text{d}z_{i}}{2\pi iz_{i}}\\ &\frac{\prod_{i=1}^{k+1}\prod_{a=1}^{2k+4}\Gamma(\sqrt{S}s_{a}^{- 1}z_{i}^{\pm 1})\prod_{m=1,2}\Gamma(\sqrt{pq/S^{k+1}}z_{i}^{\pm 1}y_{m})}{\prod_{1\leq i <j\leq k+1}\Gamma(z_{i}^{\pm 1}z_{j}^{\pm 1};p,q)\prod_{i=1}^{k+1}\Gamma(z_{i}^{\pm 2};p,q)}, \end{split} \tag{2.21}\] that represents the s-confining USp(\(2k+2\)) theory with \(2k+6\) fundamentals and superpotential (2.12). Using the limiting case identity associated to this confining duality (i.e. formula (A.5) for \(N_{f}=N_{c}+4\)) the identity between (2.16) and (2.17) is then correctly recovered. This concludes the proof of the derivation of the identity of [17] from the physical approach when \(N_{c}=2k+1\).

### Deconfinement with even \(N_{c}=2k\)

In this case we deconfine the rank-2 conjugate anti-symmetric tensor \(A\) of SU(\(2k+1\)). We depict the model in Figure 5 in terms of a quiver gauge theory. The superpotential for the deconfined model is given again by formula (8). The next step consists of Seiberg duality on \(\mathrm{SU}(2k+1)\). The dual gauge group is \(\mathrm{SU}(2k+2)\) and the quiver is represented in Figure 6. The superpotential of the dual theory is again given by (9). The fields \(b,c,q_{1},q_{2}\) are the dual quarks and the mesons \(M_{1,2,3,4}\) are associated to the quarks of the previous phase through the dictionary spelled out in (10). By integrating out the massive fields the superpotential becomes the one in (11). Then we observe that \(\mathrm{USp}(2k)\) with \(2k+2\) flavors confines with a quantum moduli space and after such confinement we are left with an \(\mathrm{SU}(2k+2)\) gauge group with superpotential (12). The partial Higgsing triggered by the quantum constraint enforced by the Lagrange multiplier reduces the theory to \(\mathrm{USp}(2k+2)\) with \(2k+6\) fundamentals and superpotential (14). Integrating out the massive fields we are left with a single field \(L\) that corresponds to the baryon \(B\) of the original theory.

Figure 5: Quiver description of the model after the deconfining of the \(\mathrm{SU}(2k+1)\) rank-2 conjugate anti-symmetric tensor A.

Figure 6: Quiver obtained after Seiberg duality on \(\mathrm{SU}(2k+1)\). Curiously in this case the dual gauge group increases its rank, becoming \(\mathrm{SU}(2k+2)\).

The analysis for even \(N_{c}\) is then almost identical to the case of odd \(N_{c}\). For this reason we skip the derivation of the duality from the superconformal index. The interested reader can reproduce it by following the stepwise procedure that we described for odd \(N_{c}\).

## 3 The Okazaki-Smith 3d duality

In this section we study a 3d \(\mathcal{N}=2\) confining duality recently proposed in [20]. The electric model is USp(4) SQCD with two fundamentals and two rank-2 antisymmetric tensors. The model has a \(\mathrm{U}(2)^{2}\times\mathrm{U}(1)_{R}\) global symmetry and the charges of the fields under these symmetries are summarized in (10).
\[\begin{array}{c|c||cccc}&\mathrm{USp}(4)&\mathrm{SU}(2)_{A}&\mathrm{SU}(2)_{ a}&\mathrm{U}(1)_{A}&\mathrm{U}(1)_{a}&U(1)_{R}\\ \hline A&6&2&1&1&0&0\\ Q&4&1&2&0&1&0\end{array} \tag{10}\] The model has vanishing superpotential and its low energy dynamics is described by the gauge invariant combinations \(M=Q_{1}Q_{2}\), \(\phi_{I}=\mathrm{Tr}\,A_{I}\), \(\phi_{IJ}=\mathrm{Tr}(A_{I}A_{J})\), \(B_{\alpha\beta}=Q_{\alpha}A_{1}A_{2}Q_{\beta}\) and \(B_{I}=Q_{1}\phi_{I}Q_{2}\). These fields interact through a superpotential with a singlet \(\mathcal{T}_{4}\) that corresponds to the minimal monopole of USp(4). The charges of the fields with respect to the global \(\mathrm{U}(2)^{2}\times\mathrm{U}(1)_{R}\) symmetry are: \[\begin{array}{c|cccc}&\mathrm{SU}(2)_{A}&\mathrm{SU}(2)_{a}&\mathrm{U}(1)_{ A}&\mathrm{U}(1)_{a}&U(1)_{R}\\ \hline M&1&1&0&2&0\\ B_{\alpha\beta}&1&3&2&2&0\\ \phi_{IJ}&3&1&2&0&0\\ \phi_{I}&2&1&1&0&0\\ B_{I}&2&1&1&2&0\\ \mathcal{T}_{4}&1&1&-4&-4&2\end{array} \tag{11}\] In the following we will derive this confining duality by deconfining the antisymmetric tensors and then by sequentially dualizing the gauge groups. We found that in order to proceed it is very useful to add to the electric theory an \(\mathrm{SU}(2)_{A}\) vector \(\vec{s}=(s_{1},s_{2})\) interacting with \(\,\mathrm{Pf}A_{1}\) and \(\,\mathrm{Pf}A_{2}\) through the superpotential \[W=\vec{s}\cdot\,\mathrm{Pf}\vec{A}=\sum_{I=1,2}s_{I}\,\mathrm{Pf}A_{I}=\frac{ 1}{8}\sum_{I=1,2}s_{I}\big{(}\,\mathrm{Tr}(A_{I})^{2}-2\,\mathrm{Tr}\big{(}A_{ I}^{2}\big{)}\big{)} \tag{12}\] where the trace of an anti-symmetric matrix is defined as \(\mathrm{Tr}\,A_{I}=A_{I}^{ij}J_{ij}\). ### Field theory analysis In the following we will derive the duality using the field theory approach. We proceed by representing the model in terms of a quiver gauge theory, using the same conventions of the previous section: the blue circles refer to symplectic gauge groups while the red squares identify the special unitary flavor groups. We start by considering the model with the flip in formula (10) \[\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad \qquad W=\sum_{I=1,2}s_{I}\,\text{Pf}A_{I} \tag{11}\] We then deconfine the two rank-2 anti-symmetric fields \(A_{I}\) with two auxiliary USp(2) nodes with the assignment \[A_{1}^{ij}=q_{1}^{\alpha_{1}i}q_{1}^{\beta_{1}j}\epsilon_{\alpha_{1}\beta_{1}},\qquad A_{2}^{ij}=q_{2}^{\alpha_{2}\,i}q_{2}^{\beta_{2}\,j}\epsilon_{\alpha_{ 2}\beta_{2}} \tag{12}\] where the \(i\)-index refers to the USp(4) node and \((\alpha_{1,2},\beta_{1,2})\) are indices of the two USp(2)\({}_{1,2}\) gauge groups. Therefore the deconfined theory is \[\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad W=0 \tag{13}\] Observe that the superpotential is vanishing because the singlets \(s_{1,2}\) have flipped the monopoles of the USp(2)\({}_{1,2}\) gauge groups. The central USp(4) node in this theory has then 6 fundamentals and therefore it confines [22]. The IR description has then two USp(2)\({}_{1}\) and USp(2)\({}_{2}\) gauge groups connected by a bifundamental field. There is still a manifest SU(2) flavor symmetry associated to a node in the quiver and there are further fundamental fields for both the USp(2)\({}_{1,2}\) gauge factors. There is also a singlet \(Y_{4}\) identified with the monopole of the USp(4) gauge group for the model in (13), that interacts through a superpotential with the generalized meson of USp(4) itself. 
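The last equality in (12), expressing the Pfaffian in terms of the traces, can be verified symbolically. The snippet below is a minimal sketch assuming the block-diagonal symplectic form \(J\) and interpreting \(\mathrm{Tr}(A_{I}^{2})\) as \(A_{I}^{ij}J_{jk}A_{I}^{kl}J_{li}\); these conventions are our assumptions and are not spelled out in the text.

```python
# Symbolic check of Pf(A) = (Tr(A)^2 - 2 Tr(A^2)) / 8 for a generic 4x4
# antisymmetric A, with Tr(A) := A^{ij} J_{ij} and Tr(A^2) := A^{ij} J_{jk} A^{kl} J_{li}.
# The block-diagonal symplectic form J below is an assumed convention.
import sympy as sp

a12, a13, a14, a23, a24, a34 = sp.symbols("a12 a13 a14 a23 a24 a34")
A = sp.Matrix([[0, a12, a13, a14],
               [-a12, 0, a23, a24],
               [-a13, -a23, 0, a34],
               [-a14, -a24, -a34, 0]])
J = sp.Matrix([[0, 1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, -1, 0]])

trA = sum(A[i, j] * J[i, j] for i in range(4) for j in range(4))
trA2 = (A * J * A * J).trace()
pf = a12 * a34 - a13 * a24 + a14 * a23   # Pfaffian of a 4x4 antisymmetric matrix

assert sp.expand(sp.Rational(1, 8) * (trA**2 - 2 * trA2) - pf) == 0
```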
The quiver and the superpotential for this dual theory are \[\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad \qquad W=Y_{4}\,\text{Pf}X\\ \begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array} \qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps} \end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps} \end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps} \end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps} 
\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array}\qquad\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{figs.eps}\end{array} \tag{15}\] Observe that the components \(X_{11}\), \(X_{22}\) and \(X_{33}\) of the meson \(X\) correspond to \(2\times 2\) anti-symmetric matrix, i.e. they are singlets. The two USp(2) nodes have each 4 fundamentals and are therefore confining [22]. Here we choose to confine the USp(2)\({}_{1}\) group. The other choice is completely equivalent because of the SU(2)\({}_{A}\) global symmetry that rotates the two anti-symmetric in the original description of the model (we will further comment on this equivalence below). 
After confining the USp(2)\({}_{1}\) gauge group we are left with a USp(2)\({}_{2}\) SQCD with four fundamentals and a non-trivial superpotential. The quiver and the operator mapping are reported below \[\begin{array}{c}\includegraphics[width=142.26378pt]{figs/q1.eps}\end{array} \tag{13}\] while the superpotential is \[\begin{array}{c}W=\epsilon_{\alpha_{2}\beta_{2}}\epsilon_{\alpha\beta} \left[Y_{4}\left(-\tilde{X}_{23}^{\alpha_{2}\beta}X_{23}^{\beta_{2}\alpha}+ \frac{1}{8}S_{1}X_{22}^{\alpha_{2}\beta_{2}}X_{33}^{\alpha\beta}-\frac{1}{4} \tilde{X}_{33}^{\alpha\beta}X_{22}^{ab}\right.\\ -\left.\frac{1}{4}\tilde{X}_{22}^{\alpha_{2}\beta_{2}}X_{33}^{\alpha\beta}- \frac{1}{4}S_{1}X_{23}^{\alpha_{2}\alpha}X_{23}^{\beta_{2}\beta}\right)+\frac{ Y_{2}^{(1)}}{4}\left(\tilde{X}_{22}^{\alpha_{2}\beta_{2}}\tilde{X}_{33}^{ \alpha\beta}-2\tilde{X}_{23}^{\alpha_{2}\alpha}\tilde{X}_{23}^{\beta_{2}\beta }\right)\right]\end{array} \tag{14}\] The field \(Y_{2}^{(1)}\) is the monopole of the USp(2)\({}_{1}\) gauge group acting as a singlet in the confined phase. We conclude the sequence by confining the USp(2)\({}_{2}\) gauge group, that has indeed four fundamentals. This leads to the final, confined, theory where the new mesons are mapped to the fundamentals of USp(2)\({}_{2}\) as \[V^{\alpha\beta}=\epsilon_{\alpha_{2}\beta_{2}}\tilde{X}_{23}^{\alpha_{2}\beta} X_{23}^{\beta_{2}\alpha},\,U^{\alpha\beta}=\epsilon_{\alpha_{2}\beta_{2}}X_{23}^ {\alpha_{2}\alpha}X_{23}^{\beta_{2}\beta},\,T^{\alpha\beta}=\epsilon_{\alpha_ {2}\beta_{2}}\tilde{X}_{23}^{\alpha_{2}\alpha}\tilde{X}_{23}^{\beta_{2}\beta} \tag{15}\] Furthermore there are two singlets of USp(2)\({}_{2}\) that we redefine as \(R_{1}=\epsilon_{\alpha_{2}\beta_{2}}X_{22}^{\alpha_{2}\beta_{2}}\) and \(R_{2}=\epsilon_{\alpha_{2}\beta_{2}}\tilde{X}_{22}^{\alpha_{2}\beta_{2}}\). The superpotential of this final WZ model is \[\begin{array}{c}W=\epsilon_{\alpha\beta}\left[Y_{4}\left(-V^{\alpha\beta}+ \frac{1}{8}S_{1}R_{1}X_{33}^{\alpha\beta}-\frac{1}{4}R_{1}\tilde{X}_{33}^{ \alpha\beta}-\frac{1}{4}R_{2}X_{33}^{\alpha\beta}-\frac{1}{4}S_{1}U^{\alpha \beta}\right)\right.\\ +\left.\frac{Y_{2}^{(1)}}{4}\left(R_{2}\tilde{X}_{33}^{\alpha\beta}-2T^{ \alpha\beta}\right)+\frac{Y_{2}^{(2)}}{2}\epsilon_{\ell m}\left(U^{\alpha \ell}T^{\beta m}-V^{\alpha\ell}V^{\beta m}\right)\right]\end{array} \tag{16}\] where the field \(Y_{2}^{(2)}\) is the monopole of the USp(2)\({}_{2}\) gauge group acting as a singlet in the confined phase. The expression (16) needs some massage in order to simplify its interpretation. For example some fields appear quadratically in the superpotential and they can be integrated out. By writing \[V_{\alpha\beta}=\sigma^{\mu}_{\alpha\beta}v_{\mu},\qquad\sigma^{\mu}=({\bf 1}, \sigma^{i}) \tag{17}\] we see that the \(v_{3}\) field is massive. The singlet field \(\epsilon_{\alpha\beta}T^{\alpha\beta}\) also acquires a mass and it can be integrated out in the IR. By considering the various F-term conditions, we get the final superpotential \[W=Y_{2}^{(2)}\left[\frac{1}{2}R_{2}U\tilde{X}_{33}-\frac{1}{64}\left(R_{1}(S_{1} X_{33}-2\tilde{X}_{33})-2(R_{2}X_{33}+S_{1}U)\right)^{2}+\det V_{\alpha\beta}\right] \tag{3.13}\] We can identify the fields here with the ones in formula (3.2) by first flipping the singlets \(s_{I}\) in the original USp(4) gauge theory. This can be done by adding two fields, denoted as \(r_{I}\) through the superpotential \(\Delta W=r_{I}s_{I}\). 
These fields can be integrated out in the electric description and \(F\)-terms of \(s_{I}\) leave us with the identification \(r_{I}\propto\operatorname{Tr}A_{I}^{2}\). In the dual description the fields \(r_{I}\) are crucial in order to reconstruct the correct field content of the duality. Looking at the global symmetry structure the explicit mapping is then \[Y_{2}^{(2)} \leftrightarrow \mathcal{T}_{4}\] \[X_{33} \leftrightarrow M\] \[(S_{1},R_{1}) \leftrightarrow (\phi_{1},\phi_{2})\] \[(R_{2},r_{1},r_{2}) \leftrightarrow (\phi_{12},\phi_{11},\phi_{22}) \tag{3.14}\] \[V_{\alpha\beta} \leftrightarrow B_{\alpha\beta}\] \[(\tilde{X}_{33},U) \leftrightarrow (B_{1},B_{2})\] Substituting this mapping into the superpotential (3.13) we obtain \[W=\mathcal{T}_{4}\Big{(}B_{1}B_{2}\phi_{1,2}-\left(\phi_{2}\left(M\phi_{1}-B_{ 1}\right)-\left(M\phi_{1,2}+B_{2}\phi_{1}\right)\right)^{2}+\det B_{\alpha \beta}\Big{)} \tag{3.15}\] where we absorbed the numerical coefficients into the fields. Actually we could have reversed the order of the last two confining dualities on USp(2)\({}_{1}\) and USp(2)\({}_{2}\), arriving to a different results, with the role of \(B_{I}\) and \(\phi_{I}\) exchanged. However the two WZ models must be equivalent, and this equivalence corresponds to the following more symmetric formulation of the superpotential \[W = \mathcal{T}_{4}\Big{(}B_{1}B_{2}\phi_{1,2}-\left(\phi_{2}\left(M \phi_{1}-B_{1}\right)-\left(M\phi_{1,2}+B_{2}\phi_{1}\right)\right)^{2} \tag{3.16}\] \[- \left(\phi_{1}\left(M\phi_{2}-B_{2}\right)-\left(M\phi_{1,2}+B_{ 1}\phi_{2}\right)\right)^{2}+\det B_{\alpha\beta}\Big{)}\] The last step of the derivation of the superpotential of the WZ model consists of flipping \(s_{I}\) in the electric model. This gives rise to the interactions involving the fields \(\phi_{11}\) and \(\phi_{22}\). By a symmetry argument we claim that the interactions allowed by the global symmetry are generated at quantum level and that the flipped fields reconstruct the SU(2)\({}_{A}\) adjoint field \(\phi_{IJ}\). In the next subsection we will confirm this expectation from the analysis of the partition function. ### 3d partition function We complete our analysis by reproducing the derivation of the duality from supersymmetric localization on the squashed three sphere. Such procedure gives rise to the identity between the partition function of USp(4) with with two anti-symmetric and two fundamentals and the partition function of the WZ model for the gauge singlets \(B_{\alpha,\beta},B_{I},\phi_{I},\phi_{IJ},M\) and \(\mathcal{T}_{4}\). The global symmetry enters in these identities in terms of real masses, that from the field theory side are associated to vevs of the reals scalars in the vector multiplets of the weakly gauged background flavor symmetries. Before studying the deconfinement of two rank-2 anti-symmetric tensors from the three sphere partition function we briefly review the necessary definitions. The partition function on the squashed three sphere \(S_{b}^{3}\), obtained from localization in [23] (see also [24; 25; 26] for the round case) is a matrix integral over the reals scalar in the vector multiplet in the Cartan of the gauge group. There is a classical term corresponding to the CS action (global and local) and the matter and the gauge multiplet contribute with their one loop determinant. 
These last can be associated to hyperbolic Gamma functions, formally defined as \[\Gamma_{h}(z;\omega_{1},\omega_{2})=\prod_{n_{1},n_{2}\geq 0}^{\infty}\frac{( n_{1}+1)\omega_{1}+(n_{2}+1)\omega_{2}-z}{n_{1}\omega_{1}+n_{2}\omega_{2}+z} \tag{3.17}\] The argument of such Gamma functions is physically interpreted as a holomorphic combination between the real masses for the gauge and the global symmetries and the R-charges (or mass dimensions). The purely imaginary parameters \(\omega_{1}=ib\) and \(\omega_{2}=i/b\) are related to the squashing parameter of the three sphere \(S_{b}^{3}\). Here we will only focus on the case of symplectic gauge group. Let us consider the partition function of an USp(\(2N_{c}\)) gauge theory with \(2N_{f}\) fundamentals. It is given by \[Z_{USp(2N_{c}),N_{f}}(\mu) = \frac{1}{2^{n}n!\sqrt{(-\omega_{1}\omega_{2})^{n}}}\int\prod_{i= 1}^{N_{c}}\mathrm{d}z_{i}\,\frac{\prod_{a=1}^{2N_{f}}\Gamma_{h}(\pm z_{i}+\mu_ {a})}{\Gamma_{h}(\pm 2z_{i})}\prod_{i<j}\ \frac{1}{\Gamma_{h}(\pm z_{i}\pm z_{j})}\] In our analysis we will use an identity involving this partition function and its dual Aharony phase [22]. The identity is (see Theorem 5.5.9 of [27]) \[Z_{USp(2N_{c}),N_{f}}(\mu) = \Gamma_{h}\left(2\omega(N_{f}-N_{c}))-\sum_{a=1}^{2N_{f}}\mu_{a}\right) \tag{3.19}\] \[\times \prod_{a<b}\Gamma_{h}(\mu_{a}+\mu_{b})Z_{USp(2(N_{f}-N_{c}-1)),N_{ f}}(\omega-\mu)\] with \(2\omega\equiv\omega_{1}+\omega_{2}\). Observe that the identity (3.19) remains valid for \(N_{f}=N_{c}+1\), that corresponds to the confining case of Aharony duality [22], where only the meson \(M\) and the minimal \({\rm USp}(2N_{c})\) monopole \(Y\) survive in the WZ model and they interact through the superpotential \(W=Y\,{\rm Pf}M\). We start considering the original model, adding also the flippers \(s_{I}\) arising from the superpotential (10). The partition function is \[Z = \frac{\prod_{A=1,2}\Gamma_{h}(2\omega-2n_{A})}{8\sqrt{-\omega_{1} \omega_{2}}^{2}}\int\prod_{i=1,2}{\rm d}z_{i}\,\frac{\prod_{a=1,2}\Gamma_{h}( \pm z_{i}+m_{a})}{\Gamma_{h}(\pm 2z_{i})}\frac{\prod_{A=1,2}\Gamma_{h}(\pm z_{1} \pm z_{2}+n_{A})}{\Gamma_{h}(\pm z_{1}\pm z_{2})}\] In this formula \(m_{1,2}\) are the real masses of the two fundamental fields and \(n_{1,2}\) are the real masses of the two anti-symmetric fields. We can also use a different basis \[m_{1}=\rho+\sigma,\quad m_{2}=\rho-\sigma,\quad n_{1}=\mu+\nu,\quad n_{2}=\mu-\nu \tag{39}\] giving an explicit parameterization it terms of the Cartan of the \({\rm U}(2)^{2}\) flavor symmetry. Indeed in this way \(\sigma\) and \(\nu\) parameterize the Cartan of \({\rm SU}(2)_{a}\) and \({\rm SU}(2)_{A}\) respectively. We then proceed by deconfining the two rank-2 anti-symmetric tensors. This step produces two \({\rm USp}(2)\) gauge nodes, two bifundamentals, each connecting one of these \({\rm USp}(2)\) gauge groups to the original \({\rm USp}(4)\). The partition function of the model becomes \[Z = \frac{1}{32\sqrt{(-\omega_{1}\omega_{2})^{4}}}\int\frac{{\rm d}z_ {1}\,{\rm d}z_{2}\,{\rm d}w_{1}\,{\rm d}w_{2}}{\Gamma_{h}(\pm 2z_{1})\Gamma_{h}( \pm 2z_{2})\Gamma_{h}(\pm 2w_{1})\Gamma_{h}(\pm 2w_{2})} \tag{40}\] \[\times \prod_{i=1,2}\left(\prod_{a=1,2}\Gamma_{h}(\pm z_{i}+m_{a})\cdot \prod_{A=1,2}\Gamma_{h}(\pm z_{i}\pm w_{A}+n_{A}/2)\right)\] As a check we can see that (38) is obtained by applying (20) to the two \({\rm USp}(2)\) gauge groups in (40). The partition function (40) corresponds to the one for the model in represented in (12). 
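The hyperbolic Gamma function above can also be explored numerically by truncating the infinite product in (3.17). A minimal sketch (individual values are only approximate at finite truncation, but the inversion relation \(\Gamma_{h}(z)\Gamma_{h}(2\omega-z)=1\), used below when massive fields are integrated out, holds factor by factor and is therefore reproduced exactly by the symmetric truncation):

```python
import numpy as np

def gamma_h(z, omega1, omega2, N=200):
    """Truncated-product approximation of the hyperbolic Gamma function."""
    n1, n2 = np.meshgrid(np.arange(N), np.arange(N))
    num = (n1 + 1) * omega1 + (n2 + 1) * omega2 - z
    den = n1 * omega1 + n2 * omega2 + z
    return np.prod(num / den)

b = 0.9                        # squashing parameter of the three sphere
omega1, omega2 = 1j * b, 1j / b
two_omega = omega1 + omega2
z = 0.3 + 0.2j                 # sample holomorphic mass/R-charge combination

# Gamma_h(z) * Gamma_h(2w - z) = 1: each factor of the second product is the
# reciprocal of the corresponding factor of the first, so the check is exact.
print(gamma_h(z, omega1, omega2) * gamma_h(two_omega - z, omega1, omega2))
```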
The next step consists of Aharony duality on \({\rm USp}(4)\). At the level of the partition function it corresponds to use the identity (20) on the gauge theory identified by the variables \(z_{1,2}\). The partition function becomes \[Z = \frac{1}{4\sqrt{(-\omega_{1}\omega_{2})^{2}}}\Gamma_{h}(2\omega-m _{1}-m_{2}-n_{1}-n_{2})\Gamma_{h}(m_{1}+m_{2})\prod_{A=1,2}\Gamma_{h}(n_{A})\] \[\times \int{\rm d}w_{1}\,{\rm d}w_{2}\,\frac{\prod_{A=1,2}\Gamma_{h}(m_{ a}\pm w_{A}+n_{A}/2)\cdot\Gamma_{h}(\pm w_{1}\pm w_{2}+(n_{1}+n_{2})/2)}{\Gamma_{h}( \pm 2w_{1})\Gamma_{h}(\pm 2w_{2})}\] The partition function (40) corresponds to the one for the model in represented in (13). The next step consists of a confining limit of Aharony duality on one of the \(\mathrm{USp}(2)\) factor. Choosing one of the two \(\mathrm{USp}(2)\) nodes has the effect of making the \(\mathrm{SU}(2)_{A}\times\mathrm{SU}(2)_{a}\) global symmetry not manifest in the integrand of the partition function. Following the discussion on the field theory side here we choose to dualize the \(\mathrm{USp}(2)_{1}\) gauge group, such that the partition function becomes \[Z = \frac{1}{2}\Gamma_{h}(2\omega-m_{1}-m_{2}-n_{1}-n_{2})\Gamma_{h}(m _{1}+m_{2})\prod_{A=1,2}\Gamma_{h}(n_{A})\] \[\times \Gamma_{h}(m_{1}+m_{2}+n_{1})\Gamma_{h}(n_{1}+n_{1})\Gamma_{h}(2 \omega-2n_{1}-n_{2}-m_{1}-m_{2})\] \[\times \int\mathrm{d}w_{2}\,\frac{\prod_{a=1,2}\Gamma_{h}(m_{a}\pm w_{2 }+n_{2}/2+n_{1})\cdot\prod_{A=1,2}\Gamma_{h}(m_{a}\pm w_{2}+n_{2}/2)}{\Gamma_{ h}(\pm 2w_{2})}\] The partition function (3.2) corresponds to the one for the model in represented in (3.1). The last step of the procedure requires a confining limit of Aharony duality on the leftover \(\mathrm{USp}(2)_{2}\) gauge group. This gives the final partition function \[Z = \Gamma_{h}(2\omega-m_{1}-m_{2}-n_{1}-n_{2})\Gamma_{h}(m_{1}+m_{2} )\prod_{A=1,2}\Gamma_{h}(n_{A}) \tag{3.25}\] \[\times \Gamma_{h}(m_{1}+m_{2}+n_{1})\Gamma_{h}(n_{1}+n_{2})\Gamma_{h}(2 \omega-2n_{1}-n_{2}-m_{1}-m_{2})\] \[\times \Gamma_{h}(m_{1}+m_{2}+2n_{1}+n_{2})\Gamma_{h}(m_{1}+m_{2}+n_{1})\] \[\times \Gamma_{h}(2\omega-2m_{1}-2m_{2}-2n_{1}-2n_{2})\prod_{a,b=1,2} \Gamma_{h}(m_{a}+m_{b}+n_{1}+n_{2})\] This expression still needs some massage. First we can integrate out the massive fields, as done on the field theory approach. Here this integration corresponds to take advantage of the formula \(\Gamma_{h}(2\omega-x)\Gamma_{h}(x)=1\). After this step we can also write down (3.25) in a manifestly \(\mathrm{SU}(2)_{A}\times\mathrm{SU}(2)_{a}\) invariant form. We arrive to the expression \[Z = \Gamma_{h}(m_{1}+m_{2})\Gamma_{h}(n_{1}+n_{2})\prod_{A=1,2}(\Gamma _{h}(n_{A})\,\Gamma_{h}(m_{1}+m_{2}+n_{A})) \tag{3.26}\] \[\times \Gamma_{h}(2\omega-2m_{1}-2m_{2}-2n_{1}-2n_{2})\prod_{a\leq b} \Gamma_{h}(m_{a}+m_{b}+n_{1}+n_{2})\] or using (3.2) \[Z = \Gamma_{h}(2\rho)\Gamma_{h}(2\mu)\Gamma_{h}(\mu\pm\nu)\Gamma_{h}( 2\rho+\mu\pm\nu) \tag{3.27}\] \[\times \Gamma_{h}(2\omega-4\mu-4\rho)\Gamma_{h}(2\rho\pm 2\sigma+2\mu,2 \rho+2\mu)\] This is the final expression that matches with (3.2). We can also flip the singlets \(s_{I}\) in the electric side, and on the magnetic side two new singlets appear with their contribution to the \(\Gamma_{h}(2n_{A})=\Gamma_{h}(2\mu\pm 2\nu)\) to the partition function. In this way we can see that all the fields \(B_{I}\), \(\phi_{I}\), \(\phi_{IJ}\), \(B_{\alpha,\beta}\), \(M\) and the monopole \(\mathcal{T}_{4}\) appear in the partition function with the expected real masses. 
Explicitly we can associate these Gamma functions to the singlets of the confined phase using the mapping \[\begin{split} B_{\alpha\beta}\leftrightarrow\Gamma_{h}(2\rho \pm 2\sigma+2\mu,2\rho+2\mu)&\phi_{IJ}\leftrightarrow\Gamma_{h}(2 \mu\pm 2\nu,2\mu)\\ \phi_{I}\leftrightarrow\Gamma_{h}(\mu\pm\nu)& B_{I}\leftrightarrow\Gamma_{h}(2\rho+\mu\pm\nu)\\ M\leftrightarrow\Gamma_{h}(2\rho)&\mathcal{T}_{4} \leftrightarrow\Gamma_{h}(2\omega-4\mu-4\rho)\end{split} \tag{3.28}\] Indeed the arguments of hyperbolic Gamma functions correspond to the real masses that can be read from the charges in formula (3.2). ## 4 Conclusions In this paper we have derived, using field theory arguments, two confining dualities that have been proposed in the literature from supersymmetric localization. Here the dualities have been derived by combining the technique of rank-2 tensor deconfinement of [5] together with the sequential application of ordinary dualities and/or confining dualities. There are many interesting directions that would be worth to explore. For example the Higgsing triggered, in the 4d model, by the quantum correction imposed on the moduli space, corresponds, on the superconformal index, to the pole pinching [28] vastly used in [4] for the derivation of 4d confining dualities in presence of a superpotential. The confining theories obtained in this way were not discussed in [2], because of the absence of a superpotential. On the other hand confining gauge theories of this type have been discussed in [29], at least for the limiting cases of ordinary dualities with rank-2 tensor matter fields. Therefore we expect many more 4d confining gauge theories with a simple gauge group not discovered yet. Another interesting question regards the 3d duality for USp(4) with two rank-2 anti-symmetric tensors and two fundamentals. As discussed in [20] the field content in this case corresponds to the one of the dualities studied in [30; 31] with D-type superpotential. Nevertheless as observed in [20] there are differences in the operator mapping and in the charge spectrum. Furthermore the USp(4) duality discussed here appears sporadic and its generalization to USp(\(2N_{c}\)) does not seem straightforward. For example we did not find any confining duality by increasing the rank of the gauge group and keeping fixed the field content (i.e. keeping two rank-2 antisymmetric tensors and possibly increasing the number of fundamentals). It is nevertheless possible that further fields and interactions should be considered in order to have an USp(\(2N_{c}\)) confining theory with two rank-2 anti-symmetric tensors. A last, related, question regards the existence of 4d confining dualities with two rank-2 tensors. Beyond the case of USp(\(2N_{c}\)) with two rank-2 anti-symmetric tensors, one can imagine also cases with unitary or orthogonal gauge groups or cases with more general rank-2 tensor matter fields. ###### Acknowledgments. We are grateful to Sara Pasquetti and Simone Rota for discussions. D.M thanks the Perimeter Institute for Theoretical Physics for the hospitality and the organizers of the "Strings 2023" conference, during which this work has been completed. The work of A.A., D.M. has been supported in part by the Italian Ministero dell'Istruzione, Universita e Ricerca (MIUR), in part by Istituto Nazionale di Fisica Nucleare (INFN) through the "Gauge Theories, Strings, Supergravity" (GSS) research project and in part by MIUR-PRIN contract 2017CC72MK-003. 
## Appendix A Remarks on the 4d index In this appendix we collect the mathematical identities that have been useful in our analysis of the SWV duality. Such identities correspond to the matching of the electric and the magnetic index for the 4d duality of [1] and of [8]. The identities hold also in the s-confining case, when the dual theory become a WZ model. Skipping the definitions and the conventions that we use for the index (that correspond to the ones of [14; 32]) where the relevant quantities are the elliptic gamma functions and the Pochammer symbols \[\Gamma_{e}(z;p,q)=\prod_{\ell,m=1}^{\infty}\frac{1-z^{-1}p^{\ell+1}q^{m+1}}{1 -zp^{\ell}q^{m}},\qquad(x,p)_{\infty}=\prod_{\ell=0}^{\infty}(1-xp^{\ell}) \tag{10}\] here we provide the integral identities matching the indices across duality. In the case of SU(\(N_{c}\)) SQCD with \(N_{f}\) flavors Seiberg duality corresponds on the supersymmetric index to the integral identity between \[\begin{split} I_{E}=&\frac{(p;p)_{\infty}^{N_{c}-1}( q;q)_{\infty}^{N_{c}-1}}{N_{c}!}\\ &\quad\times\int_{\mathbb{T}^{N_{c}-1}}\frac{\prod_{i=1}^{N_{f}} \prod_{j=1}^{N_{c}}\Gamma_{e}\left(s_{i}z_{j},t_{i}^{-1}z_{j}^{-1};p,q\right) }{\prod_{1\leq i<j\leq N_{c}}\Gamma_{e}\left(z_{i}z_{j}^{-1},z_{i}^{-1}z_{j};p,q\right)}\prod_{j=1}^{N_{c}-1}\frac{\mathrm{d}z_{j}}{2\pi\mathrm{i}z_{j}} \end{split} \tag{11}\] and \[\begin{split} I_{M}=&\frac{(p;p)_{\infty}^{\widetilde {N}_{c}-1}(q;q)_{\infty}^{\widetilde{N}_{c}-1}}{\widetilde{N}_{c}!}\prod_{1 \leq i,j\leq N_{f}}\Gamma_{e}\left(s_{i}t_{j}^{-1};p,q\right)\\ &\quad\times\int_{\mathbb{T}^{\widetilde{N}_{c}-1}}\frac{\prod_{ i=1}^{N_{f}}\prod_{j=1}^{\widetilde{N}_{c}}\Gamma_{e}\left(S^{1/\widetilde{N}_{c}} s_{i}^{-1}z_{j},T^{-1/\widetilde{N}_{c}}t_{i}z_{j}^{-1};p,q\right)}{\prod_{1 \leq i<j\leq\widetilde{N}_{c}}\Gamma_{e}\left(z_{i}z_{j}^{-1},z_{i}^{-1}z_{j} ;p,q\right)}\prod_{j=1}^{\widetilde{N}_{c}-1}\frac{\mathrm{d}z_{j}}{2\pi \mathrm{i}z_{j}}\end{split} \tag{12}\] where \(S=\prod_{i=1}^{N_{f}}s_{i}\), \(T=\prod_{i=1}^{N_{f}}t_{i}\) and \(\widetilde{N}_{c}=N_{f}-N_{c}\). The equality holds with the following constraint on the fugacities \(ST^{-1}=(pq)^{N_{f}-N_{c}}\). Observe that the relation between (A.2) and (A.3) holds also for \(N_{f}=N_{c}+1\) where the integral on the RHS vanishes. This provides the relation for the s-confining limit of Seiberg duality. In the case of USp(\(2N_{c}\)) SQCD with \(2N_{f}\) fundamentals the index is given by \[I_{\text{USp}(2N_{c}),2N_{f}}(t) = \frac{(p,p)_{\infty}^{N_{c}}(q,q)_{\infty}^{N_{c}}}{2^{N_{c}}N_{c }!}\int_{\mathbb{T}^{N_{c}}}\prod_{1\leq u<v\leq N_{c}}\frac{1}{\Gamma_{e}(x_{ u}^{\pm 1}x_{v}^{\pm 1};p,q)}\] (A.4) \[\times \prod_{u=1}^{N_{c}}\frac{\prod_{i=1}^{2N_{f}}\Gamma_{e}(t_{i}x_{u }^{\pm 1};p,q)}{\Gamma_{e}(x_{u}^{\pm 2};p,q)}\prod_{u=1}^{N_{c}}\frac{ \text{d}x_{u}}{2\pi ix_{u}}\] and Intriligator-Pouliot duality corresponds to the integral identity \[I_{\text{USp}(2N_{c}),2N_{f}}(t)=\prod_{i<j}\Gamma_{e}(t_{i}t_{j})\,I_{\text{ USp}(2(N_{f}-N_{c}-4)),2N_{f}}(pq/t)\] (A.5) where the fugacities are constrained by \(\prod_{i=1}^{2N_{f}}t_{i}=(pq)^{N_{f}-N_{c}-1}\) Observe that the relation (A.5) holds also for \(N_{f}=N_{c}+2\) where the integral on the RHS vanishes. This provides the relation for the s-confining limit of Intriligator-Pouliot duality. Another identity that played a crucial role in our derivation was obtained in [21] for the case of USp(\(2N_{c}\)) SQCD with \(2N_{c}+2\) fundamentals. 
The identity in this case is \[I_{\text{USp}(2N_{c}),2N_{c}+2}\left(e^{2\pi i\phi}\right)=\frac {1}{(p;p)_{\infty}^{N_{c}}(q,q)_{\infty}^{N_{c}}}\] (A.6) \[\times \sum_{(\Phi_{1}\bigcup\Phi_{2})/S_{2}^{k}}\prod_{1\leq i<j\leq N _{c}+1}\Gamma(e^{2\pi i(\pm\tilde{\phi}_{i}\pm\tilde{\phi}_{j})};p,q)\sum_{S_ {N_{c}+1}(\Phi_{2})}\prod_{i=1}^{N_{c}}\delta(\tilde{\phi}_{i}+\tilde{\phi}_{ N_{c}+1+i}),\] with \(z_{i}=e^{2\pi i\phi_{i}}\), \(\sum_{i=1}^{2N_{c}+2}\phi_{i}=0\), \(\Phi_{1}=(\tilde{\phi}_{1},...,\tilde{\phi}_{N_{c}},\tilde{\phi}_{k+1}=\phi_{ N_{c}+1})\) and \(\Phi_{2}=(\tilde{\phi}_{N_{c}+2},...,\tilde{\phi}_{2N_{c}+2})\). This relation reflects the statement that the theory confines with a quantum corrected moduli space.
2305.15421
Generative Adversarial Networks for Brain Images Synthesis: A Review
In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality). Since images of different modalities provide diverse biomarkers and capture different features, multi-modality imaging is crucial in medicine. While multi-modality screening is expensive and time-consuming for radiologists to report, image synthesis methods are capable of artificially generating missing modalities. Deep learning models can automatically capture and extract high-dimensional features. In particular, the generative adversarial network (GAN), one of the most popular generative deep learning methods, uses convolutional networks as generators, and the estimated images are discriminated as true or false by a discriminator network. This review covers brain image synthesis via GANs. We summarize the recent developments of GANs for cross-modality brain image synthesis, including CT to PET, CT to MRI, MRI to PET, and vice versa.
Firoozeh Shomal Zadeh, Sevda Molani, Maysam Orouskhani, Marziyeh Rezaei, Mehrzad Shafiei, Hossein Abbasi
2023-05-16T17:28:06Z
http://arxiv.org/abs/2305.15421v1
# Generative Adversarial Networks for Brain Images Synthesis: A Review

###### Abstract

In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality). Since images of different modalities provide diverse biomarkers and capture different features, multi-modality imaging is crucial in medicine. While multi-modality screening is expensive and time-consuming for radiologists to report, image synthesis methods are capable of artificially generating missing modalities. Deep learning models can automatically capture and extract high-dimensional features. In particular, the generative adversarial network (GAN), one of the most popular generative deep learning methods, uses convolutional networks as generators, and the estimated images are discriminated as true or false by a discriminator network. This review covers brain image synthesis via GANs. We summarize the recent developments of GANs for cross-modality brain image synthesis, including CT to PET, CT to MRI, MRI to PET, and vice versa.

Generative Adversarial Networks, Image Synthesis, CT, MRI, PET

## 1 Introduction

Artificial intelligence (AI) has aroused widespread interest in medical imaging. Especially with the rapid progress of deep learning (DL) and the development of various image processing models, AI has become one of the hot topics of radiology research. Currently, numerous convolutional neural networks (CNNs) are applied to different research and clinical applications, such as image segmentation, lesion detection, diagnosis, classification, and even interpretation. Multiple imaging modalities are used by radiologists to provide complementary information and a comprehensive description of a disease. Each of these expensive, time-consuming imaging modalities comes with disadvantages. Computed tomography (CT) images carry a high radiation risk, and positron emission tomography (PET) scans entail additional radiation exposure. The long scanning time of magnetic resonance imaging (MRI) causes motion artifacts and results in low-resolution imaging. Novel DL-based approaches are applied to address these limitations by generating missing modalities from available modalities. Image synthesis allows us to improve the quality and resolution of imaging examinations and provides the opportunity to obtain more information and detail about a disease in a time- and cost-effective manner [1-5]. In medicine, while statistical methods analyze brain data clinically [6], deep neural networks are end-to-end learning models that automatically extract large numbers of features from brain images, including CT, MRI, and PET, and are capable of learning complex patterns, which yields great performance. Deep learning algorithms have also been utilized in brain image analysis. Applications of deep learning models in neuroimaging include brain tumor classification and segmentation [7], measuring and visualizing the brain's anatomical structures, analyzing brain changes, and detecting the shape of lesions or tumors in the brain. However, in some cases images of multiple modalities provide different features as decision makers and bring diverse features to consider. For example, for early detection of Alzheimer's disease at the pre-clinical stage, MRI and PET screening should be conducted simultaneously to analyze the various biomarkers. MRI analyzes the anatomical structures of the brain while PET measures brain metabolism and amyloid tracers.
Therefore, to make a true decision about the disease, using multi-modality to analyze features derived from different screening would be useful [8]. However, the main problem of multi-modality imaging is the cost of extra scan, radiation dose and delay clinical workflow. As a result, image synthesis (cross-modality image estimation) methods have been proposed to overcome these limitations. Images synthesis is the process of artificial generation of one image modality from different modality. Image synthesis techniques provide fewer scans, less delay, and lower radiation. Recently, generative adversarial deep neural networks (GANs) [9] got a growing attention by the researchers. GAN is a class of machine learning frameworks including two neural networks: generative and discriminator part. While two networks contest with each other in a game, the generative network tries to produce the fake image so that the discriminator part cannot recognize the fake from the real image. GANs have been employed to tackle a wide array of challenges. One of the most prominent applications of GANs is image translation [10]. The main goal is to translate images between different techniques, as they can translate images across different modalities or generate new images within the same modality, but different sequences. For example, generating T1 sequence from T2. Since GANs can generate new data, they can be used for augmentation of brain images when we suffer from the lack of data. Super resolution to generate high resolution images from low resolution ones is another interesting application of GANs [11]. While noisy images are too challenging to interpret by the radiologists, GANs are used to remove noise and generate clear images [12]. The automatic segmentation of tumors and lesions in brain is another interesting application of GANs [13]. Finally, GANs are practical models to obtain the highly accurate reconstruction of natural images from brain activity [14]. In this paper, we review the different applications of GANs in brain image synthesis. The background section provides the definition of image synthesis in medical imaging and explains the basic model and the new versions of generative adversarial network. Section 3 describes the usage of GANs in brain image synthesis from CT to PET, CT to MRI, MRI to PET, and vice versa. ## 2 Background ### Generative Adversarial Networks Adversarial networks in general, and GANs more specifically, are trained to play a minimax game between a generator network which tries to maximize a certain objective function in tandem with a discriminator network which tries to minimize that same objective function hence the 'Adversarial' denomination. In their most basic formulation, GANs are trained to optimize the following value function [9,15] \[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{\mathbf{x}\sim p_{data}(\mathbf{x})}[ \log D(\mathbf{x})]\] \[+\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}[\log(1-D(G(\mathbf{z})))].\] Here, \(G(\mathbf{z})\) is the _generator network_ with parameters \(\mathbf{\theta}_{G}\). It is fed with a random variable \(\mathbf{z}\sim p_{\mathbf{z}}\) sampled from a given prior distribution that \(G\) tries to map to \(\mathbf{x}\sim p_{data}\). To achieve this, another network \(D\) with parameters \(\mathbf{\theta}_{D}\) is trained to differentiate between real samples \(\mathbf{x}\sim p_{data}\) from a given dataset and fake samples \(\hat{\mathbf{x}}\sim p\mathbf{\theta}_{G}(\mathbf{x}|\mathbf{z})\) produced by the generator. 
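A minimal PyTorch sketch of this alternating optimization on toy tensors may make the training loop concrete (it is not any of the specific brain-imaging architectures reviewed below; the generator update uses the standard non-saturating variant of the same objective):

```python
import torch
import torch.nn as nn

# Toy fully-connected networks on flattened 64-dimensional "images";
# GANs for brain images use convolutional generators/discriminators instead.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, 64)  # stand-in for a dataset of image slices

for step in range(200):
    x = real_data[torch.randint(0, 512, (32,))]
    z = torch.randn(32, 16)

    # Discriminator step: push D(x) toward "real" and D(G(z)) toward "fake"
    opt_d.zero_grad()
    loss_d = bce(D(x), torch.ones(32, 1)) + bce(D(G(z).detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: update G so that D classifies G(z) as real
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```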
In doing so, the generator is pushed to gradually produce more and more realistic samples with the goal of making the discriminator misclassify them as real. In order to handle the issues of the convergence speed, vanishing gradients, and model collapse, some modifications such as Deep Convolutional GAN, Least Square GAN [16], Wasserstein GAN [17], and Style GAN [18] have been proposed. Although this paper concentrates on cross-modality image synthesis and reviews generation of a modality from another modality (inter-modality) in brain, GANs have been used as image estimation for intra-modality applications. For example, in [19] authors translated the T1 sequence to T2 for reconstruction of high resolution from low resolution. Figure 1: The original GAN [22] Synthesizing diffusion map from T1 via GAN was conducted by [20]. Moreover, authors in [21] synthesized 7T MRI from 3T MRI. ### Approaches of Image Synthesis with GAN GANs usually use three methods to generate fake images including Direct Methods, Iterative Methods and Hierarchical Methods [22]. The main difference comes from the number of generators and discriminators networks. While the Direct Method works with only one generator and one discriminator, the other two methods get benefit of using multiple generators and discriminators. In contrast to the Direct Method, algorithms under the Hierarchical Method such as SS-GAN [23] employ two networks for both generator and discriminator. These methods separate an image into two parts, like "styles & structure" and "foreground & background". The generators are connected through parallel or sequencing. However, the Iterative Methods use similar multiple generators, and they generate images from coarse to fine. In this model, generator (i) refines the results from the previous generator (i-1). Moreover, Iterative Methods take advantage of weight-sharing among the generators. ## 3 Brain Image Synthesis with GAN In this section, we summarize different applications of GAN for brain image synthesis including MRI-CT, CT-PET, and MRI-PET as well. ### Mri-Ct MR image synthesis from CT images is a challenging task due to large soft-tissue signal intensity variations. In this section, we review the methods which have been recently published to handle this issue. [5] proposed a Fully CNN combined with a cyclic GAN to generate CT images from MR images to reduce patients' radiation exposure. They conducted 315 images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and they successfully could generate CT images from MR images. MAE (Mean Absolute Error) and MSE (Mean Squared Error) were used to estimate the model consistency loss. After training the model, generator loss has come nearly equal to the discriminator loss. [26] developed a model to improve image synthesis performance using 3D Common-feature-learning-based Context-aware GAN (CoCa-GAN). They used encoder-decoder architecture to map available imaging modalities of glioma into a common feature space by the encoder to generate target missing imaging modalities by the decoder. Two different models: early-fusion-CoCa-GAN (eCoCa-GAN) and intermediate-fusion-CoCa-GAN (iCoCa-GAN), were compared to accommodate the common feature space; in their experiment, iCoCa-GAN outperformed eCoCa-GAN and finally recommended to develop the common feature space. They also conducted segmented images of the tumors to enhance synthesis tasks by emphasizing tumor regions. 
As segmentation tasks shared the same common feature space as synthesis tasks, given the input rough segmentation mask, allowed synthesis loss function to focus more on mass regions and represent the specific tumor information. This helped make the representation as similar as tumor appearance for image synthesis. Results indicated that iCoCa-GAN outperforms other models for image synthesis in terms of quality and improving the tumor segmentation, especially where available modalities are limited. [27] introduced a deep learning ResNet50-based CNN model to classify Alzheimer's disease using brain MRI. Besides, to increase the data set, they developed a CycleGAN model to generate MR images of the brain. They conducted a dataset of 705 samples labeled as normal cognition (NC) and 476 samples labeled as Alzheimer's disease (AD). The CycleGAn model with two generators synthesized NC samples from AD images and AD samples from the NC real images. Then the discriminator compared the fake synthetic images with the real ones to measure the GAN loss function. They successfully improved the accuracy of the Alzheimer's disease classification with promising progress in data synthesis. Figure 2: Three approaches of image synthesis using Generative Adversarial Networks [22] [28] developed a switchable CycleGAN to augment cross-contrast MRI images and compare it to the original CycleGAN. Original CycleGAN needed 2 separate image generators (forward and backward generators) for the training phase, which required more time and parameters, while switchable CycleGAN used a switchable generator to synthesize images with different styles. Available Adolescent Brain Cognitive Development (ABCD), a large dataset of available brain MR images, was used to collect 1,517 subjects of T1- and T2-weighted images. Ten slices of each image were obtained, resulting in a total of 30,340 slices; of which 70% were used for training,10% for testing, and 20% for testing. Peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used for quantitative comparison of the 2 models and demonstrated that switchable CycleGAN outperformed the original Cycle GAN. In qualitative evaluation, which compared the visualization results of each model, switchable CycleGAN generated more consistent results with the target images with less artifacts and more details of brain tissue. Switchable CycleGAN also was found to be more robust on small datasets with less training time than the original CycleGAN. [29] generated a novel end-to-end hierarchical GAN architecture to augment high-resolution 3D images by conducting 9,276 thoracic CT and 3,538 brain MR images. Most AI model training is done by low-resolution images as the Graphical Processing Units (GPUs) memory is limited, which results in low-quality images with artifacts. The hierarchical structure is represented as a memory-efficient model, which simultaneously generates a low-resolution version of the images and a randomly selected sub-volume of the high-resolution images. The incorporated encoder enabled clinical-relevant feature extraction from sub-volume high-resolution images to ensure anatomical consistency and generate high-resolution images with reduced required memory for training. The performance of the model was explored both qualitatively and quantitatively. If fake images were similar to real images, the quality of the image was quantitatively assessed by Frechet Inception Distance (FID), Maximum Mean Discrepancy (MMD), and Inception Score (IS). 
The hierarchical GAN model achieved better results in qualitative and quantitative analysis and could generate more realistic images than the baseline model. [30] adopted pix2pix with a 3D framework of the GAN model to augment CT images from contrast-enhanced MR images. Their study aimed to generate CT images to help plan radiotherapy treatment. The model is also designed to improve the quality and resolution of the generated CT images. They used 26 paired CT and MRI scans to train their model and, the rest 5 paired were used as a testing set. The generated scan's similarity to real images was evaluated by quantized image similarity formulas, including cosine angle distance, Euclidean distance, mean square error, PSNR, and SSIM. The satisfaction rated by radiologists was excellent for spatial geometry and noise level, good for contrast and artifacts, and fair for anatomical and structural details. [31] developed a GAN-based model to generate MRI from CT scan to detect acute ischemic stroke in suspected patients. They hypothesized that the diagnostic accuracy of brain lesions would increase by using synthetic MRI instead of CT. They used 140 examinations for training and 53 imaging examinations for testing to build the pix2pix GAN framework. In their model, the generator used CT images to synthesize MRI while the discriminator took synthetic or real MRI to assess whether it was fake or real. A neuroradiologist with 9 years of practical experience assessed the quality of the synthetic MRI visually, and no significant structural or signal intensity differences were found. PSNR and SSIM were used to estimate the similarity of the synthetic images to real onesRegarding reader performance in patient selection, the sensitivity increased by using Synthetic MRI than CT, but the specificity decreased. They concluded that the GAN model has the potential to generate MR images from non-contrast CT scans and can improve the sensitivity of acute stroke detection. Although, the image similarity performance was poor and, further expert discrimination was recommended to enhance the correctness of synthetic images. Fig. 3: An example of synthetic brain PET image generator [24] ### Mri-Pet Magnetic Resonance Image (MRI) and Positron emission tomography (PET) are used to diagnose a wide range of diseases. PET imaging is expensive and not offered in most of the medical centers in the world due to its high cost and increased risk of radiation exposure. PET synthesis from MRI multi-modal images has become a popular method that can reduce the cost and patient's radiant dose caused by PET imaging. [32] proposed a 3D self-attention conditional GAN named SC-GAN by extending a 2D conditional GAN into a 3D conditional GAN and adding a 3D self-attention module to it to generate PET synthetic images from MRI scans. A self-attention module models the relationship between widely separated image voxels which helps to improve the quality of generated images and reduce the blurriness. Exhaustive loss functions were used in this method like spectral normalization, feature matching loss, and brain area RMS error (RMSE) that improved the accuracy of image synthesis. The dataset used in this work was obtained from Alzheimer's Disease Neuroimaging Initiative 3 (ADNI-3). The input was MRI scans selected from T1-weighted (T1w) and fluid-attenuated inversion-recovery (FLAIR) structures and the target were PET scans selected from amyloid PET. 
265 subjects were selected where 207 of them were used for training and 58 of them were used for testing. The model was then evaluated by comparing NRMSE, PSNR, and SSIM metrics with other works. \begin{table} \begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt}} \hline **Author (year)** & **Method** & **Input** & **Estimation** & **Architecture** & **Dataset** & **Application** & **Metric** \\ \hline **Huang (2022)** & eCoCa- and iCoCa-GAN & Synthetic MRI & Target missing & 3D CoCa-GAN & 335 MRI of MICCAI-BraTS 2019 & Diagnosis and treatment of glioma & PSNR, NMSE, SSIM \\ \hline **Badr (2021)** & ResNet50-based CNN & MRI MRI & MRI & CycleGAN & 1,181 MRI from IITP date set & Alzheimer’s classification & Accuracy \\ \hline **Zhang (2022)** & switchable and original CycleGAN & T2WI & T1WI & Switchable CycleGAN & 1,517 MRI from ABCD data set & Enhance image synthesis quality & PSNR and SSIM \\ \hline **Nehra (2021)** & Cycle GAN & MRI & CT & FCN and GAN & 315 MRI from ADNI date set & N/A MAE, MSE \\ \hline **Sun (2021)** & Hierarchical Amortized GAN & MRI MRI & End-to-end GAN & 3,538 MRI from GSP date set & High-resolution & FID, MMD, IS \\ \hline **Wang et al. (2022)** & pix2pix with a 3D framework & MRI & CT & GAN & 31 paired CT and MRI of Chang Gung Memorial Hospital & Radiotherapy & CAD and MSSIM \\ \hline **Na Hu et al. (2022)** & 3D-CT2MR (2022) & CT & MRI & GAN & 193 & Detection ischemic stroke & PSNR, SSIM \\ \hline \end{tabular} * _eCaCa- and iCoCa-GAN: early-fusion intermediate-fusion Common-feature learning-based Context-aware, PSNR: peak signal-to-noise ratio, SSIM: the structural similarity index measure, IITP: Institute for Information & Communications Technology Promotion, ABCD: Adolescent Brain Cognitive Development, GSP: Brain Genomics Superstract Project, FID: Frechet Inception Distance, MMD: Maximum Mean Discrepancy, IS: Inception Score, CAD: cosine angle distance, MSSIM: mean structural similarity index_ \end{table} Table 1: A Summary of MRI-CT Image Synthesis Figure 4: An example of sCT estimated from MRI [25] [33] focused on the cross-modality synthesis of PET scans from MRI images using globally and locally aware image-to-image translation GAN (GLA-GAN) with a multi-path architecture for Alzheimer's disease (AD) diagnosis. It was assumed that by exploiting both global and local contexts, the quality of synthesized PET scans can be improved. In this work, SSIM (MS-SSIM) was used as an additional objective function to improve synthetic image quality. 402 input and target samples were selected from the ADNI dataset having both prepossessed MRI and FDG-PET modalities for training procedure. Finally, the quality of the synthesized images and the model accuracy for AD diagnosis were evaluated using SSIM, PSNR, and MAE metrics to compare with other works. [34] proposed a new method named GANBERT to generate PET images from MRI scans in a wide intensity range. The architecture is composed of a 3D U-Net-like generator that generates PET images from the MRI scans. It also has two Bidirectional Encoder Representations from Transformers (BERT) that are trained to predict real and synthetic PET images where its next sentence prediction (NSP) acts as a GAN discriminator. ADNI dataset was used to train and test the proposed model. Target was selected from 2,387 Amyloid PET (AV45), 536 Tau PET (AV1451), and 3,108 fluorodeoxyglucose PET (FDG) that were paired with T1 - weighted input MRI images. 
Then the model was evaluated by comparing the quality of generated PET images with other methods using PSNR, SSIM, and RMSE metrics. [35] introduced a new approach called Sketcher-Refiner GAN consisting of two conditional GANs to predict the PET-derived myelin content from multimodal MR images. The Sketcher network generates global anatomical information, and the Refiner network calculates the tissue myelin content. The dataset used in this work was MRI and PET images collected from 18 MS patients and 10 age-matched healthy volunteers. The generated image quality was compared with the state-of-the-art methods using MSE and PSNR. They also compared myelin content prediction in three: white matter (WM) in healthy controls (HC), normal appearing white matter (NAWM) in MS patients, and lesions in MS patients ROIs with other works showing noticeable improvement on prediction accuracy. [36] proposed a new method called 3D Cycle-consistent GAN which is a two-stage deep learning method for AD diagnosis using MRI and PET data. First, PET images were generated from the corresponding MRI data by using 3D Cycle-consistent Generative Adversarial Networks (3D-cGAN). Then a deep multi-instance neural network was implemented for AD diagnosis and mild cognitive impairment (MCI) prediction using the synthetic PET and MRI images. The proposed model was evaluated using two ADNI sub-datasets, ADNI-1 and ADNI-2. The model was first trained by ADNI-1 (containing both PET and MRI) images and then tested on the complete subjects in ADNI-2. The quality of the synthetic images was then evaluated using the PSNR metric and the experimental results demonstrated that the synthetic PET images produced by this method were reasonable. [37] proposed BPGAN which is, a 3D end-to-end generative adversarial network, that can synthesize brain PET images from MRI scans for multi-modal medical imaging research. They designed a 3D multiple convolution U-Net (MCU) generator to improve the quality of synthesized PET images and then employed a 3D gradient profile (GP) and structural similarity index measure (SSIM) loss functions to gain higher similarity to the ground truth images. They tested their model on ADNI database and evaluated it by using mean absolute error (MAE), PSNR and SSIM metrics. Qualitative evaluations demonstrated improvement on multi-class AD diagnosis accuracies compared to the stand-alone MRI. 
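Since most of the works above (and in Table 2 below) report PSNR and SSIM to quantify the similarity between synthetic and real images, a minimal NumPy sketch of the two metrics is given here (the single-window, global form of SSIM is shown for brevity; reported values are usually computed over local sliding windows and averaged):

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between two images of the same shape."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) structural similarity index."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Toy usage with a synthetic slice perturbed by noise
rng = np.random.default_rng(0)
real = rng.random((128, 128))
fake = np.clip(real + 0.05 * rng.normal(size=real.shape), 0.0, 1.0)
print(psnr(real, fake), ssim_global(real, fake))
```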
\begin{table} \begin{tabular}{c c c c c c c c} \hline **Author (year)** & **Method** & **Input** & **Estimation** & **Architecture** & **Dataset** & **Application** & **Metric** \\ \hline **Lan** & SC-GAN & MRI & PET & Conditional & ADNI & Multimodal 3D & NRMSE \\ **(2021)** & & & & GAN & Neuroimaging & PSNR \\ & & & & & Synthesis & SSIM \\ \hline **Sikka** & GLA-GAN & MRI & PET & GAN & ADNI & Diagnosis of & MAE \\ **(2021)** & & & & & Alzheimer’s & PSNR \\ & & & & & Disease & SSIM \\ \hline **Shin** & GANBERT & MRI & PET & GAN & ADNI & MRI to PET & PSNR \\ **(2020)** & & & & & Image Synthesis & SSIM \\ & & & & & & & \\ \hline **Wei** & Sketcher- & MR & PET & Conditional & Clinical & Myelin Content & MSE \\ **(2019)** & Refiner & & & GAN & Dataset & Prediction & PSNR \\ & GAN & & & & & & \\ \hline **Pan** & 3D Cycle- & MRI & PET & CycleGAN & ADNI & Diagnosis of & PSNR \\ **(2018)** & Consistent & & & & & Alzheimer’s & \\ & GAN & & & & Disease & \\ \hline **Zhang** & BPGAN & MRI & PET & U-Net & ADNI & Multi- & MAE \\ **(2022)** & & & & & & Modal & PSNR \\ & & & & & Medical & SSIM \\ & & & & & Imaging & \\ \hline \end{tabular} \end{table} Table 2: A Summary of MRI-PET Image Synthesis ### Ct-Pet Synthesizing CT from PET images is challenging due to less resolution and detailed information in PET images compared to CT images [38]. Despite these challenges, several studies were able to generate results with low average errors. [39] A new proposed GAN framework (MedGAN) combines the fragmented benefits of several translation approaches such as ResNets, pix2pix, PAN and Fila-sGAN with a new high-capacity generator architecture. The purpose of this framework is to improve technical post-processing tasks that require globally consistent image properties with an application in PET-CT translation. Furthermore, they incorporated non-adversarial losses such as the perceptual, style and content losses as part of the framework. To test this framework, a dataset of 46 patients of the brain region acquired on a joint PET/CT scanner was used. The proposed framework produced realistic and homogeneous structures in the CT images that closely matched the ground truth CT images. [40] In another study adapts this proposed framework and evaluate the MedGAN framework for independent attenuation correction of brain fluorine-18-fluorodeoxyglucose (F-FDG) PET images only based on non-attenuation corrected PET data (NAC PET). In this study, a dataset consisting of NAC PET and the corresponding CT data from 50 patients were used for training and the information from 40 patients were used for technical and clinical validation. The results show that independent attenuation correction of brain F-FDG PET is feasible with high accuracy using the proposed framework. [41, 42, 43] proposed a framework to obtain attenuation information in a delayed clinical PET scanner without the need for additional CT scans. For this purpose, a GAN-based image synthesis network is developed to convert the PET back projection (BP) image and the NAC PET image into a pseudo-CT image. Later a non-rigid registration is performed between the CT image of the first scan and this pseudo-CT image to obtain the transformation field between the two scans. The final estimated CT image for the delayed PET image is obtained by applying the transformation field onto the CT images of the first scan. 
In this study, experiments with clinical datasets are implemented to assess the effectiveness of the proposed method with the Generative Adversarial Networks (GAN) method. ## Conclusion In this paper, we reviewed the generative adversarial network and its applications to brain image synthesis. We also summarized the methods of generating artificial images used by GANs including Direct Methods, Hierarchical Methods, and Iterative Methods. Since GANs are composed of two different networks, generative and discriminative network, are powerful deep learning-based models to generate images and synthesize medical images. We then categorized the applications of GANs for brain image synthesis into three classes: CT-MRI, MRI-PET, and CT-PET and reviewed each method separately. \begin{table} \begin{tabular}{c c c c c c c c} \hline **Paper** & **Method** & **Input** & **Estimation** & **Architecture** & **Dataset** & **Application** & **Metric** \\ \hline **Armanious** & MedGAN & PET & CT & cGAN & SOMATOM & Image-to-image & SSIM, \\ **K, et.al** & & & & mCT,Siemens & translation & PSNR \\ **2020** & & & & & Healthineers, & (dB), \\ & & & & & Germany & & MSE, \\ & & & & & & VIE, \\ & & & & & & UQI, \\ & & & & & & LPIPS, \\ \hline **Armanious** & MedGAN & PET & CT & cGAN & 90 patients & Attenuation & Mean \\ **K, et.al** & & & & with & correction & difference \\ **(2020)** & & & & & Fluorine-18- & & \\ & & & & & FDG PET/CT & & \\ & & & & scans of the & & \\ & & & & head region & & \\ \hline **Rao et. Al** & Image & PET & pseudo-CT & GAN & 25 patients & Attenuation & PSNR, \\ **(2022)** & synthesis & & & & for training. & correction & MAPE \\ & network & & & & 12 patients & \\ & and nonrigid & & & & for evaluation & \\ & registration & & & & & \\ \hline **Liu et. Al** & deepAC & PET & pseudo-CT & data-driven & Discovery & Attenuation & Dice \\ **(2018)** & & & & deep learning & PET/CT 710 & correction & coefficient, \\ & & & & approach & scanner (GE & MAE \\ & & & & Healthcare, & \\ & & & & & Waukesha, & \\ & & & & & WI, USA) & \\ \hline \end{tabular} \end{table} Table 3: A Summary of CT-PET Image Synthesis
2307.15258
Field-Free Switching in Symmetry Breaking Multilayers: The Critical Role of Interlayer Chiral Exchange
It is crucial to realize field-free, deterministic, current-induced switching in spin-orbit torque magnetic random-access memory (SOT-MRAM) with perpendicular magnetic anisotropy (PMA). A tentative solution has emerged recently, which employs the interlayer chiral exchange coupling or the interlayer Dzyaloshinskii-Moriya interaction (i-DMI) to achieve symmetry breaking. We hereby investigate the interlayer DMI in a Pt/Co multilayer system with orthogonally magnetized layers, using repeatedly stacked [Pt/Co]n structure with PMA, and a thick Co layer with in-plane magnetic anisotropy (IMA). We clarify the origin and the direction of such symmetry breaking with relation to the i-DMI effective field, and show a decreasing trend of the said effective field magnitude to the stacking number (n). By comparing the current-induced field-free switching behavior for both PMA and IMA layers, we confirm the dominating role of i-DMI in such field-free switching, excluding other possible mechanisms such as tilted-anisotropy and unconventional spin currents that may have arisen from the symmetry breaking.
Yung-Cheng Li, Yu-Hao Huang, Chao-Chung Huang, Yan-Ting Liu, Chi-Feng Pai
2023-07-28T01:54:26Z
http://arxiv.org/abs/2307.15258v2
# Field-Free Switching in Symmetry Breaking Multilayers - ###### Abstract It is crucial to realize field-free, deterministic, current-induced switching in spin-orbit torque magnetic random-access memory (SOT-MRAM) with perpendicular magnetic anisotropy (PMA). A tentative solution has emerged recently, which employs the interlayer chiral exchange coupling or the interlayer Dzyaloshinskii-Moriya interaction (i-DMI) to achieve symmetry breaking. We hereby investigate the interlayer DMI in a Pt/Co multilayer system with orthogonally magnetized layers, using repeatedly stacked [Pt/Co]\({}_{\text{n}}\) structure with PMA, and a thick Co layer with in-plane magnetic anisotropy (IMA). We clarify the origin and the direction of such symmetry breaking with relation to the i-DMI's effective field, and show a decreasing trend of the said effective field magnitude to the stacking number (n). By comparing the current-induced field-free switching behavior for both PMA and IMA layers, we confirm the dominating role of i-DMI in such field-free switching, excluding other possible mechanisms such as tilted-anisotropy and unconventional spin currents that may have arisen from the symmetry breaking. **I. Introduction** Making industrially viable spin-orbit torque (SOT) based magnetic random-access memory (MRAM) has been at the forefront of spintronics research for the past decade. For this goal, great endeavors have focused on improving various key parameters such as the efficiency of spin generation [1-3], readability [4-6] and thermal stability [7-9],...etc. The most critical issue, however, is the conundrum that magnetizations with perpendicular magnetic anisotropy (PMA) cannot be deterministically controlled by a conventional spin current with in-plane polarization due to the symmetry constraint, unless some sort of symmetry breaking element is introduced, such as an in-plane field parallel to the applied current [10,11]. This issue significantly hampers the feasibility for building SOT-MRAMs with PMA, which is essential in developing high density magnetic memories [7,12]. Numerous methods have been proposed to make the switching deterministic and integrable with conventional magnetic tunnel junctions with PMA (p-MTJs), such as anisotropy engineering through wedged structures [13-17], unconventional spin current generations [18,19], exchange biased structures [20,21], and the magnetic hard mask approach [22]. These proposals often generate new shortcomings, however. For example, dipolar coupling/exchange coupling's switching polarity may be magnetic history dependent. Unconventional spins require more exotic fabrication procedures and are sometimes magnetic history correlated as well [19,23]. One alternative approach has been proven to be feasible lately, which involves harnessing the newly discovered interlayer chiral exchange or interlayer Dzyaloshinskii-Moriya interaction (i-DMI). DMI is an antisymmetric exchange interaction which favors perpendicular arrangement of adjacent spins [24-27]. Contrary to the more conventional interfacial case of DMI, the interlayer version mediates orthogonal spin configurations between separate magnetic layers, rather than within the same magnetic layer. Recently, after being predicted by Monte Carlo calculations [28] and experimental verifications of i-DMI in double PMA and synthetic antiferromagnetic (SAF) systems [29,30], several seminal works had further utilized it to achieve current-induced field-free magnetization switching [31,32]. 
The advantages of exploiting i-DMI to realize field-free switching within PMA systems are manifold. Since i-DMI has been proven to exist in both orthogonally magnetized and SAF systems, the versatility of stack engineering may be significantly increased when compared to the conventional interfacial case. The magnetization switching behavior is also governed by the characteristic DMI vector **D**, which defines both the strength and direction of the chiral exchange, and subsequently the switching polarity. Harnessing i-DMI to achieve field-free switching also eliminates the possibility of bipolar switching since the magnetization configuration reinitializes itself under the applied current [33]. These features are all essential for realizing practical SOT-MRAM. The Pt/Co multilayer system has already been shown to be important for magnetic memory applications [34,35]. In this work, we systematically study the feasibility of incorporating i-DMI into heterostructures with multilayers having in-plane magnetized Co and perpendicularly magnetized [Pt/Co]\({}_{\rm n}\) and reveal the correlation between the obliquely-grown [Pt/Co]\({}_{\rm n}\) and the i-DMI strength. The i-DMI-induced effective field is found to decrease with increasing stacking number n of the [Pt/Co]\({}_{\rm n}\) structure. Furthermore, a non-negligible tilted anisotropy is also observed in such a symmetry-broken multilayer system. We also demonstrate current-induced field-free magnetization switching and show that the switching percentage drops as the i-DMI weakens, which confirms the dominating role of i-DMI over the tilted anisotropy in the observed field-free switching. Current-induced loop shift measurement results, finally, further exclude other mechanisms that might lead to field-free switching, such as unconventional z-polarized spin current, and reconfirm the critical role of i-DMI in the Pt/Co multilayer system. **II. Sample preparation and characterization** The samples used in this work are prepared by magnetron sputtering under a base pressure of \(\sim\)10\({}^{-8}\) Torr, with structures being Ta(0.5)/[Pt(1)/Co(0.8)]\({}_{\rm{n}}\)/Pt(2.2)/Co(1.7)/Ta(3) deposited on thermally oxidized Si wafers (n = 1, 2, 3, 4, and the unit of the numbers in parentheses is nm). Ta(0.5) and Ta(3) serve as the adhesion and the capping layer, respectively. As schematically shown in Fig. 1(a), [Pt/Co]\({}_{\rm{n}}\) multilayers have PMA while the top Co(1.7) layers in all samples have in-plane magnetic anisotropy (IMA). Since our magnetic multilayers are presumably polycrystalline, high symmetry is expected for all layers under normal sputtering conditions. However, throughout deposition, the substrate's rotation is disabled until the Pt(2.2) spacer, which serves as the chiral exchange coupling medium between the [Pt/Co]\({}_{\rm{n}}\) multilayers and the in-plane Co layer, was grown (for discussions on the choice of the Pt spacer thickness of 2.2 nm, see Appendix A). Such an oblique deposition technique is used to induce symmetry breaking, whose direction is mainly set by the 25\({}^{\circ}\) atom flow direction of the Ta underlayer, as seen in Fig. 1(a), due to the templating effect it brings about, which is further attributed to the formation of a canted columnar structure by oblique deposition of the seed layer [15, 36]. To achieve current-induced field-free switching, we make the symmetry breaking direction along the \(y\) axis, that is, transverse to the applied current (\(x\)) direction [13-17, 32].
With the only remaining symmetry operation being a mirror along the oblique-deposition direction, the i-DMI effective field should be symmetrical across this mirror; thus, the DMI **D** vector has to point toward a direction perpendicular to the wedge direction [33, 37], just as we will demonstrate in the next section. For electrical measurements, micron-sized Hall bar devices are patterned through a lift-off process with lateral dimensions of 5\(\upmu\)m by 60\(\upmu\)m. We measure anomalous Hall resistance \(R_{H}\) and unidirectional magnetoresistance (UMR) from these Hall bar devices to observe PMA and IMA magnetization states with direct sensing current \(I_{\text{DC}}\), respectively [38, 39]. The i-DMI characterization is carried out by sweeping the out-of-plane magnetic field \(H_{z}\) under a static in-plane magnetic field \(H_{\text{IP}}\) along different in-plane angles \(\varphi_{H}\) via a well calibrated vector electromagnet which creates accurate magnetic fields, as Fig. 1(b) depicts (please refer to Ref. [40] for the details of the calibration technique). A representative \(R_{H}\) hysteresis loop of a [Pt/Co]\({}_{\text{n}}\) multilayer sample with n = 1 is shown in Fig. 1(c). In comparison, shifts of the hysteresis loops due to the i-DMI effective field are observed when \(H_{\text{IP}}\) is introduced, as shown in Fig. 1(d). Also note that the coercivity \(H_{\text{C}}\) of the [Pt/Co]\({}_{\text{n}}\) multilayer increases with increasing stacking number n (listed in Table I) due to the enhancement of interfacial anisotropy generated by improved fcc texture [41, 42]. **III. i-DMI coupling between [Pt/Co]\({}_{n}\) and Co** To quantify the interlayer chiral exchange coupling induced by the orthogonal ferromagnetic configuration with symmetry breaking, \(H_{\rm IP}\) is applied along different \(\varphi_{H}\) to magnetize the IMA Co layer, thereby inducing an i-DMI effective field acting upon the PMA [Pt/Co]\({}_{n}\) layer along the \(z\) axis, \(H_{z}^{\rm shift}\), which can be estimated through the offset in the out-of-plane hysteresis loops. Subsequently, Fig. 2(a) demonstrates that such \(H_{z}^{\rm shift}\) for the wedged PMA [Pt/Co]\({}_{n=1}\) layer exhibits a sinusoidal variation with regard to \(\varphi_{H}\) under \(H_{\rm IP}=100\) Oe, and the maximum (minimum) values of \(H_{z}^{\rm shift}\) are approximately located at \(\varphi_{H}=90^{\circ}\) (\(270^{\circ}\)), which corresponds to the direction of the oblique growth and is perpendicular to the vector **D** (**D** along \(x\), \(\approx\) 0\({}^{\circ}\)) in this broken symmetry system. The rest of the main samples possess identical angle dependence of \(H_{z}^{\rm shift}\). This sinusoidal dependence is in stark contrast to the control sample results (structurally similar to \(n=1\) but grown with the sample holder rotation on), also shown in Fig. 2(a), where a minimal \(H_{z}^{\rm shift}\) at all \(\varphi_{H}\) angles is demonstrated, showcasing the lack of i-DMI. This antisymmetric variation is attributed to the i-DMI effect instead of a typical dipolar field or symmetrical interlayer exchange coupling [43-45]. In this system, the exchange energy term \(E_{\rm ex}\) can be expressed as \(E_{\rm ex}\)= -\(J_{H}{\bf M}_{1}\)\(\cdot\)\({\bf M}_{2}\) - \({\bf D}\)\(\cdot\)\({\bf M}_{1}\)\(\times\)\({\bf M}_{2}\) [26,46]. The first term represents the conventional symmetric Heisenberg exchange. The second term describes the antisymmetric exchange term, _i.e._, the i-DMI contribution.
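As a quick numerical illustration of this sinusoidal dependence, the minimal sketch below evaluates the out-of-plane component of the antisymmetric-exchange field \(-\mathbf{D}\times\mathbf{M}_{\rm IMA}\) (the explicit form used just below) for an in-plane magnetization rotated by \(\varphi_{H}\); the unit vector magnitudes and the assumption that **D** lies along \(x\) are illustrative placeholders, not fitted values from this work.

```python
import numpy as np

# Illustrative (not fitted) setup: D along +x, M_IMA rotated in-plane by phi_H.
D = np.array([1.0, 0.0, 0.0])                     # i-DMI vector, arbitrary units
phi_H = np.deg2rad(np.arange(0, 360, 45))
M_IMA = np.stack([np.cos(phi_H), np.sin(phi_H), np.zeros_like(phi_H)], axis=1)

# Out-of-plane field on the PMA layer from the antisymmetric term: H = -D x M_IMA
Hz = -np.cross(D, M_IMA)[:, 2]

# Hz is proportional to sin(phi_H) (up to the sign convention chosen for D):
# extrema at phi_H = 90 deg and 270 deg, zero when M_IMA is parallel to D.
for ang, h in zip(np.rad2deg(phi_H), Hz):
    print(f"phi_H = {ang:5.1f} deg -> Hz (arb. units) = {h:+.3f}")
```

With **D** pinned along \(x\) by the mirror symmetry, only the component of \(\mathbf{M}_{\rm IMA}\) transverse to **D** produces an out-of-plane term, which is why the extrema sit at \(\varphi_{H}=90^{\circ}\) and \(270^{\circ}\).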
\(J_{H}\) and \({\bf M}_{1,2}\) represent the conventional Heisenberg exchange constant and the magnetizations of the two magnetic layers, respectively. When a magnetic field is applied, the Zeeman energy \(E_{\rm Zeeman}=\) - \({\bf M}\)\(\cdot\)\({\bf H}_{\rm ext}\) is also taken into consideration. Therefore, in our structure, if \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\) are assigned to be \(\mathbf{M}_{\mathrm{PMA}}\) ([Pt/Co]\({}_{n}\)) and \(\mathbf{M}_{\mathrm{IMA}}\) (Co), the overall field acting upon the PMA layer can be expressed as \(\mathbf{H}_{z}^{\mathrm{eff}}\)\(=\)\(\mathbf{H}_{\mathrm{ext}}\) - \(\mathbf{D}\)\(\times\)\(\mathbf{M}_{\mathrm{IMA}}\) in the absence of \(J_{H}\). \(H_{z}^{\mathrm{shift}}\) then corresponds to the magnitude of - \(\mathbf{D}\)\(\times\)\(\mathbf{M}_{\mathrm{IMA}}\). Next, by performing in-plane field \(H_{\mathrm{IP}}\) scans with its direction fixed at \(\varphi_{H}=90^{\circ},\) we observe that \(H_{z}^{\mathrm{shift}}\) vs. \(H_{\mathrm{IP}}\) (or \(H_{y}\)) can be divided into two regions with different slopes, denoted as the shaded and the white sections in Fig. 2(b). The slope transition points \(H_{\mathrm{IP}}^{\mathrm{trans}}\) are approximately located at \(H_{\mathrm{IP}}\)\(=\)\(75\) Oe for the sample with n = 1 and 100 Oe for n = 2, 3, and 4, indicating that the i-DMI contribution to \(H_{z}^{\mathrm{shift}}\) reaches saturation. These transition points also correspond to \(\mathbf{M}_{\mathrm{IMA}}\) being fully aligned toward \(\varphi_{H}\)\(=\)\(90^{\circ}\) under sufficient \(H_{\mathrm{IP}}\). Beyond \(H_{\mathrm{IP}}^{\mathrm{trans}}\), \(H_{z}^{\mathrm{shift}}\) still increases linearly with increasing \(H_{\mathrm{IP}}\). This additional \(H_{z}^{\mathrm{shift}}\) is attributed to the tilted anisotropy due to the wedged structure [16, 17], which will be further examined in the next section. Note that the control sample shows a minimal \(H_{z}^{\mathrm{shift}}\) in Fig. 2(b) and its absence of a slope change under different \(H_{\mathrm{IP}}\) again demonstrates the lack of both i-DMI and tilted anisotropy without symmetry breaking. By considering the measured \(H_{z}^{\mathrm{shift}}\) to be the superposition of both i-DMI and tilted anisotropy contributions, and the fact that the tilted anisotropy's contribution as a function of \(H_{\mathrm{IP}}\) is linear [16, 47], the maximum value of the effective i-DMI field \(H_{z,\,\mathrm{DMI}}^{\mathrm{sat}}\) can be extracted by taking the \(H_{z}^{\mathrm{shift}}\) at \(H_{\mathrm{IP}}^{\mathrm{trans}}\) (\(H_{z}^{\mathrm{shift,\,sat}}\)) and subtracting the product of the corresponding \(H_{\mathrm{IP}}^{\mathrm{trans}}\) and the \(H_{z}^{\mathrm{shift}}\)/\(H_{y}\) slope of the white regions. It is found that the i-DMI \(H_{z,\,\mathrm{DMI}}^{\mathrm{sat}}\) gets attenuated when increasing the stack number n, as shown in Fig. 2(c). Note that the value of \(H_{z,\,\mathrm{DMI}}^{\mathrm{sat}}\) for sample n = 1 is close to the result in a previous study with a similar Pt interlayer thickness [48]. More details regarding the Pt spacer thickness dependence of i-DMI can be found in Appendix A. As further shown in Fig. 2(d), the \(H_{z}^{\rm shift}/H_{y}\) governed by i-DMI coupling (slopes from the shaded region in Fig. 2(b)) consequently possesses a decaying trend with n, which is due to the limited interaction length from \(\bf M_{\rm IMA}\) with Pt(2.2) as the mediating layer. On the other hand, the values of \(H_{z}^{\rm shift}/H_{y}\) governed by the tilted anisotropy (slopes from the white region in Fig.
2(b)) show a less obvious but opposite trend, increasing from 0.04 to 0.085 for sample n =1 to 4. Compared to a previous work on strain-induced tilted anisotropy, the variation of \(\ H_{z}^{\rm shift}\) to \(\ H_{\rm IP}\) in Fig. 2(b) shows a distinct difference, while the values of tilted anisotropy induced \(\ H_{z}^{\rm shift}/H_{y}\) are of the same scale [49]. **IV. Estimation of tilted anisotropy** Despite a smaller effect when compared to the i-DMI, tilted magnetic anisotropy's contribution to hysteresis loop shift is non-negligible and may potentially play a role in assisting current-induced field-free switching. Thus, to further quantify the tilted anisotropy, \(\ H_{z}^{\rm shift}/H_{\rm IP}\) in large field regime (white region) is measured under different static \(\ H_{\rm IP}\) with varying \(\ \varphi_{H}\), as shown in Fig. 3(a) for the n = 2 sample grown obliquely. Compared to the n = 2 sample without wedge, _i.e._, the one deposited with rotation throughout the entire sputter process, it is clear that there exists a significant difference of \(\ H_{z}^{\rm offset}/H_{\rm IP}\) variation with \(\ \varphi_{H}\) measured at large field, which is summarized in Fig. 3(b). These \(\ H_{z}^{\rm offset}/H_{\rm IP}\) vs. \(\ \varphi_{H}\) data can be further fitted to extract the easy axis tilted angle \(\ \theta_{\rm ani}\) (away from the z-axis) by [16,17], \[H_{z}^{\rm offset}{\rm cos}\theta_{\rm ani}{\rm=}H_{\rm IP}\,{\rm cos}\big{(}\varphi _{\rm ani}\ \ \mbox{- }\varphi\big{)}\,{\rm sin}\theta_{\rm ani}, \tag{1}\] where \(\theta_{\rm ani}\) in our main sample with \({\rm n=2}\) (wedged) is approximately 4.4\({}^{\circ}\) whereas only 0.4\({}^{\circ}\) in the control sample with \({\rm n=2}\) (non-wedged). Furthermore, \(\theta_{\rm ani}\) slightly increases with \({\rm n}\), as shown in Fig. 3(c). This is tentatively attributed to the template effect provided by the Ta buffer layer [36] being strengthened as the number of stacks of [Pt/Co]\({}_{\rm n}\) without rotation increased. Note that \(\varphi_{\rm ani}\) indicates the in-plane angle that \({\bf M}_{\rm PMA}\) tilts toward. As Fig. 3(b) exhibits, the maximum of \(H_{z}^{\rm offset}\)/\(H_{\rm IP}\) for sample \({\rm n=2}\) is located at \(\varphi_{H}=90^{\circ}\), showing that \(\varphi_{\rm ani}\) is close to 90\({}^{\circ}\), corresponding to the wedge direction. Additionally, the magnitudes of the wedge-induced \(\theta_{\rm ani}\) found in this work are comparable to other studies' values obtained from similar wedged structures with tilted anisotropy (3.3\({}^{\circ}\) in [16] and 2.6\({}^{\circ}\) in [47]). Furthermore, with the occurrence of a tilted \({\bf M}_{\rm PMA}\), the Heisenberg exchange contribution to \(H_{z}^{\rm shift}\) may need further scrutiny. Based on typical values of \(M_{\rm s}^{\rm Co}\)\({\rm=1.18\times}10^{6}\) A/m [50], \(M_{\rm s}^{\rm Pt-Co\, multilayers}\)\({\rm=1.8\times}10^{6}\) A/m [51] and the Heisenberg interlayer exchange energy areal density of \(E_{\rm ex}\cong 2\)\({\rm\sim 5\ \mu J/m^{2}}\) reported in similar structures and Pt interlayer thicknesses [52-54], these values are plugged into \(E_{\rm ex}\)\({\rm=}\)\(\mu_{0}M_{\rm s}H_{\rm ex}t_{\rm FM}\) to obtain the net effective field \(H_{\rm ex}\). Even when using the highest value of \(\theta_{\rm ani}\) in our main samples, which is 5.6\({}^{\circ}\) for \({\rm n=4}\), the net effective field \(H_{\rm ex}\) applied along canted \({\bf M}_{\rm PMA}\) would still be less than 4 Oe. 
In sharp contrast, when applying the previously reported i-DMI energy areal density of \(E_{\rm DMI}\cong 24\)\({\rm\sim 44\ \mu J/m^{2}}\) [33,48] into \(E_{\rm DMI}=\mu_{0}M_{\rm s}H_{\rm DMI}t_{\rm FM}\), the calculated \(H_{\rm DMI}\) ranged from 40 to 80 Oe, which is in much better agreement with our observed magnitude of the i-DMI effective fields. Therefore, the anti-symmetric i-DMI mechanism still dominates over the symmetric Heisenberg exchange even in the presence of tilted anisotropy. Further evidence of the minimal contribution from the symmetric Heisenberg exchange is the fact that when n increases from 1 to 4, the measured \(H_{z,\,\rm{DMI}}^{\rm{sat}}\) decreases from 101 to 34 Oe, while the measured \(\theta_{\rm{ani}}\) increases from 2.4 to 5.6\({}^{\circ}\). This significant decrease of \(H_{z,\,\rm{DMI}}^{\rm{sat}}\), at a point where any contribution from the Heisenberg exchange should have increased more than twofold (due to the increase in \(\theta_{\rm{ani}}\)), suggests that the Heisenberg exchange effective field plays a minor role in contributing to \(H_{z,\,\rm{DMI}}^{\rm{sat}}\). **V. Current-induced field-free switching with i-DMI** Next, we examine the feasibility of employing i-DMI for current-induced field-free switching of \({\bf M}_{\rm{PMA}}\), by applying a pulsed current \(I_{\rm{write}}\) with a pulse width of 50 ms along the longitudinal direction of the Hall bar devices and detecting the switching of \({\bf M}_{\rm{PMA}}\) by means of \(R_{H}\). On the other hand, the switching dynamics of \({\bf M}_{\rm{IMA}}\) is simultaneously probed by the UMR, which is recorded through the longitudinal resistance difference \(\Delta R\) as sensed by \(I_{\rm{DC}}\) with opposite polarities [55,56]. From the symmetry perspective, the writing current is applied parallel to the **D**-vector, \(I_{\rm{write}}\parallel{\bf D}\) (\(I_{\rm{write}}\perp{\bf M}_{\rm{IMA}}\)), such that the symmetry breaking effect is at its maximum. This configuration can maximize the \(H_{z,\,\rm{DMI}}^{\rm{sat}}\) that switches \({\bf M}_{\rm{PMA}}\) dictated by \({\bf M}_{\rm{IMA}}\), which is in turn controlled by the applied current \(I_{\rm{write}}\) polarity. As shown in Fig. 4(a), current-induced field-free switching can be achieved for both \({\bf M}_{\rm{PMA}}\) and \({\bf M}_{\rm{IMA}}\). As further summarized in Fig. 4(b), the critical switching currents \(I_{\rm{c}}\) share the same trend for both \({\bf M}_{\rm{IMA}}\) and \({\bf M}_{\rm{PMA}}\), which increase with higher stack order. This suggests that the moments in both layers can be electrically switched simultaneously due to the coupling of i-DMI. However, as summarized in Fig. 4(c), the current-driven switching percentage of \(\mathbf{M}_{\mathrm{PMA}}\) drops significantly from 70% to 2% as the stacking number n is increased, while that of \(\mathbf{M}_{\mathrm{IMA}}\) remains fairly constant at \(\sim\) 100%. The almost constant switching ratio for the in-plane Co layer (\(\mathbf{M}_{\mathrm{IMA}}\)) is ascribed to its type-\(y\) SOT switching nature (\(I_{\mathrm{write}}\,\bot\,\mathbf{M}_{\mathrm{IMA}}\)), which is inherently switchable in a field-free fashion [57-59] and unaffected by increasing the stacking number.
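As a quick check of the energy-density-to-field conversion used in the two estimates of Sec. IV above, the minimal sketch below evaluates \(H=E/(\mu_{0}M_{\rm s}t_{\rm FM})\); the saturation magnetization, Co thickness, and tilt angle plugged in are representative values quoted in the text, while the choice of which layer's \(M_{\rm s}t_{\rm FM}\) enters, and the assumption that only the \(\sin\theta_{\rm ani}\) projection of the Heisenberg field contributes to the loop shift, are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7          # vacuum permeability (T m / A)
A_PER_M_PER_OE = 79.577         # 1 Oe expressed in A/m

def areal_energy_to_field_oe(E_areal, Ms, t_fm):
    """Convert an interlayer energy areal density (J/m^2) into an
    effective field (Oe) via E = mu0 * Ms * H * t_FM."""
    H_si = E_areal / (MU0 * Ms * t_fm)      # A/m
    return H_si / A_PER_M_PER_OE

# Representative parameters from the text (illustrative layer choice: [Pt/Co]_4 stack).
Ms = 1.8e6                      # A/m, Pt/Co multilayer
t_fm = 4 * 0.8e-9               # m, total Co thickness in [Pt(1)/Co(0.8)]_4
theta_ani = np.deg2rad(5.6)     # rad, largest tilt angle quoted in the text

# Symmetric Heisenberg exchange: E_ex ~ 2-5 uJ/m^2; only the projection set by the
# small canting angle is assumed to contribute to the out-of-plane loop shift.
for E_ex in (2e-6, 5e-6):
    H_ex = areal_energy_to_field_oe(E_ex, Ms, t_fm) * np.sin(theta_ani)
    print(f"E_ex = {E_ex*1e6:.0f} uJ/m^2 -> H_ex ~ {H_ex:.1f} Oe")

# Antisymmetric i-DMI: E_DMI ~ 24-44 uJ/m^2 acts directly between the orthogonal layers.
for E_dmi in (24e-6, 44e-6):
    H_dmi = areal_energy_to_field_oe(E_dmi, Ms, t_fm)
    print(f"E_DMI = {E_dmi*1e6:.0f} uJ/m^2 -> H_DMI ~ {H_dmi:.0f} Oe")
```

With these inputs the symmetric-exchange field comes out well below the quoted 4 Oe bound while the i-DMI field lands in the 40-80 Oe range, mirroring the comparison drawn above.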
In contrast, the fraction of the magnetic domains that can be switched in the PMA [Pt/Co]\({}_{\mathrm{n}}\) multilayers (\(\mathbf{M}_{\mathrm{PMA}}\)) becomes smaller as increasing n due to the attenuation of i-DMI strength by the weakened coupling between \(\mathbf{M}_{\mathrm{IMA}}\) and \(\mathbf{M}_{\mathrm{PMA}}\). This trend is also consistent with our observation that the decreased \(H_{z,\,\mathrm{DMI}}^{\mathrm{sat}}\) cannot fully overcome the coercive field \(H_{\mathrm{c}}\) of \(\mathbf{M}_{\mathrm{PMA}}\) with increasing n (Fig. 2(c)). Note that the SOT that drives \(\mathbf{M}_{\mathrm{IMA}}\) switching should be mainly originated from the spin Hall current produced in the Pt(2.2) layer. The larger \(I_{\mathrm{c}}\) required for samples with higher n is mainly due to additional current shunting with greater overall multilayers thickness, and the value of switching current density \(J_{\mathrm{write}}\) is similar among all main samples (\(J_{\mathrm{write}}\) for samples with n = 1, 2, 3, 4 are 2.7\(\times\)10\({}^{11}\), 3.5\(\times\)10\({}^{11}\), 3.5\(\times\)10\({}^{11}\), and 3.6\(\times\)10\({}^{11}\) A/m\({}^{2}\) for PMA switching, and 2.2\(\times\)10\({}^{11}\), 3.0\(\times\)10\({}^{11}\), 3.1\(\times\)10\({}^{11}\), and 3.2\(\times\)10\({}^{11}\) A/m\({}^{2}\) for IMA switching, respectively). The required current density shows no significant variation, suggesting the switching mechanism did not change between different samples (regardless of the \(\mathbf{M}_{\mathrm{PMA}}\)switching percentage). It is also important to note that the tilted anisotropy plus SOT scenario [14,16] is hardly the main mechanism for the field-free switching observed here, since the switching percentage of \(\mathbf{M}_{\mathrm{PMA}}\) is the lowest for the sample with the largest tilted angle \(\theta_{\mathrm{ani}}\) (see Table I). **VI. Damping-like SOT characterizations** To further shed light on the role of SOT and quantify its contribution in the observed field-free switching of \(\,{\bf M}_{\rm PMA}\,\), we perform current-induced loop shift measurement to evaluate the SOT-induced effective field \(\,H_{z}^{\rm eff}\,\) in these heterostructures. This is done by applying \(I_{\rm DC}\) along \(x\) direction of the Hall bar device to exert a sizable damping-like SOT acting on the Neel domain wall (DW) moments in \(\,{\bf M}_{\rm PMA}\), which translates to an effective field along \(z\)-direction if a symmetry-breaking \(x\)-direction in-plane field \(H_{x}\) is also applied and fully aligns the DW moments along \(x\)[2, 60] (overcoming the classical interfacial DMI effective field). A net \(\,H_{z}^{\rm eff}\,\) proportional to \(I_{\rm DC}\) then can be estimated from the shifted out-of-plane hysteresis loops, as shown in Fig. 5(a) (representative loop shifts for \(I_{\rm DC}=\pm 2.5\) mA) and (b) (extracted \(\,H_{z}^{\rm eff}\,\) vs. \(I_{\rm DC}\)) for the n = 1 sample under an in-plane field \(H_{x}=600\) Oe. The \(H_{x}\,(\varphi_{H}=0^{\circ})\) and \(H_{y}\,(\varphi_{H}=90^{\circ})\) dependence of \(\,H_{z}^{\rm eff}/I_{\rm DC}\,\) are further summarized in Fig. 5(c), which follow the features of conventional heavy metal (HM)/ferromagnetic metal (FM) bilayer structures [2, 16]; \(H_{z}^{\rm eff}/I_{\rm DC}\,\) appears to be an odd function with respect to \(H_{x}\), first increases with magnetic field then saturates. 
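For orientation, the current-to-current-density conversion behind the \(J_{\rm write}\) values quoted above can be sketched in a few lines; the channel width and nominal layer thicknesses are taken from the device description, the example write current is a hypothetical number, and a uniform current distribution across the full metallic cross-section is assumed, so this is an order-of-magnitude check rather than the authors' exact shunting-corrected procedure.

```python
# Order-of-magnitude check: I_write -> J_write for the n = 1 stack.
# Assumption: the full nominal metallic cross-section carries the current uniformly.
width = 5e-6                                    # m, Hall bar channel width
t_layers_nm = [0.5, 1.0, 0.8, 2.2, 1.7, 3.0]    # Ta/Pt/Co/Pt/Co/Ta nominal thicknesses (n = 1)
thickness = sum(t_layers_nm) * 1e-9             # m

def write_current_density(I_write):
    """Current density (A/m^2) for a given write current (A)."""
    return I_write / (width * thickness)

I_write = 12e-3                                 # A, hypothetical write-current amplitude
print(f"J_write ~ {write_current_density(I_write):.1e} A/m^2")
# ~2.6e11 A/m^2, i.e. the 10^11 A/m^2 scale quoted for the main samples.
```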
In all samples, \(H_{x}\) of \(\sim\) 600 Oe is enough to saturate the interfacial DMI field to provide a maximized \(\,H_{z}^{\rm eff}/I_{\rm DC}\,\), and there is no observable \(\,H_{z}^{\rm eff}/I_{\rm DC}\,\) when \(H_{y}\) is applied. However, even though there is indeed a finite contribution of \(\,H_{z}^{\rm eff}\,\) acting on \(\,{\bf M}_{\rm PMA}\,\) caused by the spin current, the value is far too small to be a deterministic factor to achieve field-free switching with these SOTs assisted by the tilted anisotropy. As shown in Fig. 5(d), even the highest \(\,H_{z}^{\rm eff}/I_{\rm DC}\,\) is lower than 3 Oe/mA when the interfacial DMI field is fully overcome. By taking the device geometries into account, we obtain \(\,H_{z}^{\rm eff}\)/\(J_{\rm DC}=11\), 5.8, 5.1, and 2.1 Oe/(10\({}^{11}\) A m\({}^{-2}\)) for samples with n = 1, 2, 3, and 4, respectively, where \(J_{\rm DC}\) represents the current density flowing through the Hall bar channels. These values are several times smaller than those reported in other works with Pt-based wedge systems [2, 16]. Any attempts of \(H_{x}\) assisted current-induced switching measurements also result in non-switching, due to the low \(\,H_{z}^{\rm eff}\)/\(J_{\rm DC}\), and the pinned \({\bf M}_{\rm IMA}\). These results confirm our assumption that \({\bf M}_{\rm PMA}\) switching mainly relies on the i-DMI and its coupling to the \({\bf M}_{\rm IMA}\) rather than the tilted anisotropy and the SOT acting upon it. More recently, several studies also reported on the existence of unconventional spin currents with \(z\)-spin polarization (\(\sigma_{z}\)) that can be utilized to achieve field-free switching of \({\bf M}_{\rm PMA}\) in IMA-FM/HM/PMA-FM trilayer structures [19, 61]. One possible scenario is that when the classical \(y\)-spin \(\sigma_{y}\) from the HM is scattered and polarized within the IMA-FM, spin-orbit precession (SOP) induced \(\,\sigma_{z}\) would be generated following the symmetry of \(\,\sigma_{y}\times{\bf m}_{x}\), providing a finite \(\,H_{z}^{\rm eff}\) under the zero-field condition (\(H_{x}=0\) Oe), where \({\bf m}_{x}\) indicates magnetic moments in the IMA-FM. However, the values of \(\,H_{z}^{\rm eff}\)/\(I_{\rm DC}\) at \(\,H_{x}=0\) Oe are minimal for all our samples, as shown in Fig. 5(d). \(\,H_{z}^{\rm eff}\)/\(J_{\rm DC}\) at \(\,H_{x}=0\) Oe is also extremely low for all samples, as listed in Table 1 with other parameters. This may be ascribed to numerous reasons, including (i) the usage of a strong spin-orbit interaction material Pt(2.2) as the spacer layer, which leads to rapid spin dephasing [62, 63]; (ii) insufficient \({\bf m}_{x}\) in the IMA Co layer since the existing i-DMI will favor \({\bf D}\) (stabilized along \(x\)) \(\,\perp\,{\bf M}_{\rm IMA}\) (stabilized along \(y\)); or (iii) simply too low a net spin current (limited \(\,\sigma_{y}\)) for the SOP to manifest. **VII. Conclusions** In summary, we identify two contributions to \(H_{z}^{\rm shift}\) when measuring the out-of-plane hysteresis loops in the presence of a static \(H_{\rm IP}\) in a symmetry breaking multilayer system consisting of both \({\bf M}_{\rm PMA}\) and \({\bf M}_{\rm IMA}\): i-DMI coupling and tilted magnetic anisotropy. The two mechanisms, however, have drastically different dependences on the [Pt/Co]\({}_{\rm n}\) stacking number n.
The i-DMI contribution diminishes while the tilted anisotropy gets more significant with increasing n, which suggests that the observed current-induced field-free switching of \({\bf M}_{\rm PMA}\) is governed by the i-DMI coupling between \({\bf M}_{\rm PMA}\) and \({\bf M}_{\rm IMA}\). We further quantify the current-induced SOT effective field \(H_{z}^{\rm eff}\) under various conditions, which exclude the existence of unconventional spins (SOP induced \(\sigma_{z}\)) and the possibility of field-free switching with conventional SOT assisted by the tilted anisotropy. Our results therefore point out the importance of precise analysis on (i) asymmetric interlayer exchange coupling such as i-DMI (**D** and \(H_{z,\,{\rm DMI}}^{\rm sat}\)), (ii) tilted magnetic anisotropy (\(\theta_{\rm ani}\)), as well as (iii) SOT contributions (\(H_{z}^{\rm eff}\)/\(I_{\rm DC}\)) in understanding the field-free switching in magnetic heterostructures with structural symmetry breaking. Figure 1: (a) Schematic illustration of the [Pt/Co]\({}_{\rm n}\)/Pt/Co multilayer system with broken symmetry, which shows the relative orientations of \(\mathbf{M}_{\rm IMA}\) (in-plane magnetized Co), \(\mathbf{M}_{\rm PMA}\) ([Pt/Co]\({}_{\rm n}\) with PMA), and i-DMI vector \(\mathbf{D}\). The dashed lines represent the obliquely-grown direction of the Ta buffer and the [Pt/Co]\({}_{\rm n}\) multilayers. (b) Experimental setup to quantify the effective field induced by i-DMI coupling with a micron-sized Hall bar device. Representative out-of-plane hysteresis loops of \(\mathbf{M}_{\rm PMA}\) in the sample \(\rm n=1\) with (c) no in-plane field applied (\(H_{\rm IP}\)= 0 Oe) and (d) \(H_{\rm IP}\)= 100 Oe for \(\varphi_{H}\) = 90\({}^{\circ}\) and 270\({}^{\circ}\). \(H_{z}^{\rm shift}\) represents the magnitude by which the hysteresis loop center has shifted away from \(H_{\rm z}=0\). \begin{table} \begin{tabular}{c c c c c c} \hline n & \(H_{\mathrm{C}}\) (Oe) & \(H_{z,\,\mathrm{DMI}}^{\mathrm{sat}}\) (Oe) & \(\theta_{\mathrm{ani}}\) (deg) & zero-field switching percentage (\%) & zero-field \(H_{z}^{\mathrm{eff}}\)/\(J_{\mathrm{DC}}\) (Oe/10\({}^{11}\) A m\({}^{-2}\)) \\ \hline 1 & 88 & 101 & 2.4 \(\pm\) 0.13 & 70 & 0.7 \(\pm\) 1.30 \\ 2 & 115 & 82 & 4.4 \(\pm\) 0.20 & 40 & 4.3 \(\pm\) 0.52 \\ 3 & 143 & 61 & 5.5 \(\pm\) 0.36 & 6 & 2.6 \(\pm\) 0.67 \\ 4 & 151 & 34 & 5.6 \(\pm\) 0.50 & 2 & 0.8 \(\pm\) 1.01 \\ \hline \end{tabular} \end{table} Table 1: Summary of stacking number n dependence of the measured and the estimated quantities for \(\mathbf{M}_{\mathrm{PMA}}\) ([Pt/Co]\({}_{\mathrm{n}}\)). The uncertainties originate from the standard errors in fittings.
Figure 2: For obliquely-grown samples, \(H_{z}^{\rm shift}\) can be divided into two parts, the i-DMI coupling dominating regime (the shaded section with \(H_{\rm IP}\) < \(H_{\rm IP}^{\rm trans}\)) and the tilted anisotropy dominating regime (the white section with \(H_{\rm IP}\) > \(H_{\rm IP}^{\rm trans}\)). \(H_{z}^{\rm shift,\ sat}\) and \(H_{\rm IP}^{\rm trans}\) for the n = 1 sample are indicated by the black double headed arrows. (c) The i-DMI effective field \(H_{z,\rm{DMI}}^{\rm sat}\) and (d) the extracted values of \(H_{z}^{\rm shift}/H_{y}\) contributed by i-DMI coupling (\(H_{\rm DMI}\)) and tilted anisotropy (\(H_{\rm Tilt\ Ani}\)) as functions of the stacking repetition number n. Figure 3: (a) \(H_{z}^{\text{shift}}\) variation under different \(\varphi_{H}\) and \(H_{\text{IP}}\) for the sample with \(\text{n}=2\). (b) The ratios \(H_{\text{Tilt\,Ani}}^{\text{eff}}/H_{\text{IP}}\) extracted from \(\frac{H_{z}^{\text{shift}}}{H_{\text{IP}}}\) in the white regions (tilted anisotropy dominated regime) under different \(\varphi_{H}\) for the \(\text{n}=2\) samples grown obliquely (wedged, blue squares) and uniformly (non-wedged, red circles), from which \(\theta_{\text{ani}}\) can be extracted using Eq. (1). (c) \(\theta_{\text{ani}}\) for all obliquely-grown samples. Figure 4: Field-free switching of \(\mathbf{M}_{\mathrm{PMA}}\) and \(\mathbf{M}_{\mathrm{IMA}}\) coupled via i-DMI. (a) Current-induced magnetization switching loops for Hall bar samples with stacking number \(\mathrm{n}=1\) to \(4\) (while having a low switching percentage, the current switching loop of \(\mathbf{M}_{\mathrm{PMA}}\) for \(\mathrm{n}=4\) could be measured repeatedly). The solid data points represent \(\mathbf{M}_{\mathrm{PMA}}\) measured through \(R_{H}\) and the open data points represent \(\mathbf{M}_{\mathrm{IMA}}\) sensed by the UMR (\(\Delta\,R\)). Stacking number dependence of (b) critical switching current \(I_{\mathrm{c}}\) and (c) switching percentage for both \(\mathbf{M}_{\mathrm{PMA}}\) and \(\mathbf{M}_{\mathrm{IMA}}\) of the four samples. Figure 5: SOT characterization results measured by current-induced hysteresis loop shift measurements. (a) Out-of-plane hysteresis loops of \(\mathbf{M}_{\mathrm{PMA}}\) sensed by \(R_{H}\) for a Hall bar sample with n = 1 under \(I_{\mathrm{DC}}\) = \(\pm\) 2.5 mA and \(H_{x}\) = 600 Oe. (b) The SOT-induced \(H_{z}^{\mathrm{eff}}\) versus \(I_{\mathrm{DC}}\) for sample n = 1 under \(H_{x}\)= \(\pm\) 600 Oe. (c) \(H_{z}^{\mathrm{eff}}\)/\(I_{\mathrm{DC}}\) versus \(H_{\mathrm{IP}}\) with \(\varphi_{H}\) = 0\({}^{\circ}\) and 90\({}^{\circ}\) (\(H_{x}\) and \(H_{y}\)) for sample n = 1. (d) \(H_{z}^{\mathrm{eff}}\)/\(I_{\mathrm{DC}}\) vs. the stacking repetition number n under \(H_{\mathrm{IP}}\) = 0 and 1500 Oe with \(\varphi_{H}\) = 0\({}^{\circ}\).
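As an illustration of the Eq. (1) fit referenced in the Fig. 3 caption, the following minimal sketch recovers \(\theta_{\rm ani}\) and \(\varphi_{\rm ani}\) from synthetic \(H_{z}^{\rm offset}/H_{\rm IP}\) data; the angles, noise level, and starting guesses are placeholders, not the measured ratios.

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_ratio(phi_H_deg, theta_ani_deg, phi_ani_deg):
    """Eq. (1) rearranged: H_z^offset / H_IP = tan(theta_ani) * cos(phi_ani - phi_H)."""
    theta = np.deg2rad(theta_ani_deg)
    return np.tan(theta) * np.cos(np.deg2rad(phi_ani_deg - phi_H_deg))

rng = np.random.default_rng(1)
phi_H = np.arange(0, 360, 30)                       # deg, field angles (illustrative)
true_theta, true_phi_ani = 4.4, 90.0                # deg, placeholder "ground truth"
ratio = offset_ratio(phi_H, true_theta, true_phi_ani) + 0.005 * rng.standard_normal(phi_H.size)

popt, pcov = curve_fit(offset_ratio, phi_H, ratio, p0=(2.0, 45.0))
theta_fit, phi_ani_fit = popt
theta_err = np.sqrt(pcov[0, 0])
print(f"theta_ani = {theta_fit:.2f} +/- {theta_err:.2f} deg, phi_ani = {phi_ani_fit:.1f} deg")
```

The standard error returned by the fit covariance is the kind of uncertainty quoted for \(\theta_{\rm ani}\) in Table 1.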
## Appendix A Relations between the Pt spacer thickness and i-DMI effective field In order to find an optimal Pt spacer layer thickness to serve as the basis for analyses and to investigate the i-DMI spacer thickness dependence, we have separately fabricated a series of samples which all have a single Pt(1)/Co(0.8) stack as the PMA layer, and the Pt spacer thickness stacked on top ranged from 1.8 nm to 2.5 nm (similar to the n = 1 sample in the main text). The i-DMI effective fields of the samples with different Pt spacer thicknesses are quantified using the identical field-sweep protocol described in the main text (as seen in Fig. 1(b)). Fig. A1(a) shows the \(H_{z}^{\text{shift}}\) as a function of \(\varphi_{H}\), and Fig. A1(b) compiles both the **D** vector's angle, \(\varphi_{D}\), and \(H_{\text{z, DMI}}^{\text{sat}}\) as functions of Pt spacer thickness. The **D** vector's direction shows great agreement with the representative data of Fig. 2(a) with \(\varphi_{D}\) universally close to 0\({}^{\circ}\). For Pt thickness \(\geq\) 2.0 nm, \(H_{\text{z, DMI}}^{\text{sat}}\) shows a quasi-monotonic decay with increasing Pt thickness, which can be fitted by (Pt thickness)\({}^{-1}\), in agreement with the nature of i-DMI being an interfacial effect. However, \(H_{\text{z, DMI}}^{\text{sat}}\) decreases drastically when the Pt thickness is \(<\) 2.0 nm. Previous observations by Avci _et al._ [48] have shown a very similar trend with regard to the spacer thickness, and it was argued that the spacer thickness dependence deviates from a damped oscillation due to the total DMI being averaged out by all three-site interactions between PMA and IMA magnetizations. This deterioration at low Pt thicknesses could also be related to the inferior interfacial quality, such as the Pt layer becoming discontinuous islands rather than a continuous film, thus weakening the Pt/Co interface's spin orbit coupling and subsequently the i-DMI strength [64]. Aside from a sizable \(H_{\text{z, DMI}}^{\text{sat}}\), robust PMA characteristics are also required for current-induced hysteresis loop shift and current-induced magnetization switching measurements. \(H_{c}\) and the ratio of out-of-plane remnant/saturated magnetization (M\({}_{\text{R}}\)/M\({}_{\text{S}}\)) as functions of Pt thickness are shown in Fig. A1(c), with rapid deterioration of PMA evident when the Pt thickness is less than 2.0 nm, which may also be related to the decreased interfacial spin orbit coupling [65]. Judging from the results in Fig. A1(b) and (c), we concluded that a Pt thickness of 2.2 nm strikes a good balance between a sizable \(H_{\text{z, DMI}}^{\text{sat}}\) and robust PMA characteristics. Figure A1: (a) \(H_{z}^{\text{shift}}\) at different Pt spacer thicknesses with varying \(\varphi_{H}\), solid lines are sine fits to the data. (b) Compilations of both \(\varphi_{D}\) and \(H_{\text{z, DMI}}^{\text{sat}}\) as functions of Pt thicknesses, extracted from sine fits in (a). Red solid line denotes a (Pt thickness)\({}^{-1}\) fitting to \(H_{\text{z, DMI}}^{\text{sat}}\), between Pt thickness = 2.0 nm and 2.5 nm. (c) Characterization of perpendicular direction (out-of-plane) coercivity and out-of-plane remnant/saturated magnetization as functions of Pt spacer thickness. ## Appendix B i-DMI induced AMR loop shift of the IMA layer From the i-DMI Hamiltonian described in the main text, it is suggested that a reciprocal i-DMI effective field originating from \(\mathbf{M}_{\rm PMA}\) is also exerted on \(\mathbf{M}_{\rm IMA}\) along the \(\pm\)\(y\) direction.
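To make the direction of this reciprocal field explicit, here is a minimal cross-product check (with **D** along \(+x\) and unit magnitudes, both illustrative assumptions):

```python
import numpy as np

D = np.array([1.0, 0.0, 0.0])        # i-DMI vector along +x (assumed, arbitrary units)

# Reciprocal field on the in-plane layer: H_DMI = D x M_PMA
for label, M_PMA in (("+z", np.array([0.0, 0.0, 1.0])),
                     ("-z", np.array([0.0, 0.0, -1.0]))):
    H_dmi = np.cross(D, M_PMA)
    print(f"M_PMA along {label}: H_DMI = {H_dmi}  (points along {'-y' if H_dmi[1] < 0 else '+y'})")
```

Reversing \(\mathbf{M}_{\rm PMA}\) between \(+z\) and \(-z\) flips the field between \(-y\) and \(+y\), which is what the shifted AMR loops described next probe.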
To prove the existence of such an effective field, we obtained the \(\mathbf{M}_{\rm IMA}\) response under the influence of i-DMI by measuring the anisotropic magnetoresistance (AMR) response of the IMA Co layer. Since the i-DMI effective field exerted on \(\mathbf{M}_{\rm IMA}\) takes the form of \(\mathbf{H}_{\rm DMI}=\mathbf{\rm D}\times\mathbf{M}_{\rm PMA}\), the AMR loops should show measurable shifts in the \(\pm\)\(y\) direction (with \(\mathbf{\rm D}\) lying approximately toward 0\({}^{\circ}\) and \(\mathbf{M}_{\rm PMA}\) fixed toward \(\pm\)\(z\)). Fig. B1(a) and (b) show the AMR shifts of the \(\rm n=1\) and \(\rm n=4\) samples, with the measured i-DMI effective field exerted on \(\mathbf{M}_{\rm IMA}\) found to be 120 Oe and 65 Oe, respectively. The decreasing trend agrees with the gradually decreasing \(H_{\rm z,\,DMI}^{\rm sat}\) when n is increased from 1 to 4, as reported in the main text for \(\mathbf{M}_{\rm PMA}\). Note that due to the sizable \(H_{\rm IP}\) used to properly align \(\mathbf{M}_{\rm IMA}\), the i-DMI effective field acting on \(\mathbf{M}_{\rm IMA}\) does not modify the results in the main text. Figure B1: Representative shifted AMR loops due to the i-DMI effective field acting on the \(\mathbf{M}_{\rm IMA}\) (in-plane Co layer) for the (a) \(\rm n=1\) and (b) \(\rm n=4\) samples. The \(H_{\rm DMI}\) acting on either sample is obtained by fixing \(\mathbf{\rm M}_{\rm PMA}\) toward \(\pm\)\(z\) when measuring the AMR loops.
2303.01902
Superconducting Diode Effect Sign Change in Epitaxial Al-InAs Josephson Junctions
There has recently been a surge of interest in studying the superconducting diode effect (SDE) partly due to the possibility of uncovering the intrinsic properties of a material system. A change of sign of the SDE at finite magnetic field has previously been attributed to different mechanisms. Here, we observe the SDE in epitaxial Al-InAs Josephson junctions with strong Rashba spin-orbit coupling (SOC). We show that this effect strongly depends on the orientation of the in-plane magnetic field. In the presence of a strong magnetic field, we observe a change of sign in the SDE. Simulation and measurement of supercurrent suggest that depending on the superconducting widths, $W_\text{S}$, this sign change may not necessarily be related to 0--$\pi$ or topological transitions. We find that the strongest sign change in junctions with narrow $W_\text{S}$ is consistent with SOC-induced asymmetry of the critical current under magnetic-field inversion, while in wider $W_\text{S}$, the sign reversal could be related to 0--$\pi$ transitions and topological superconductivity.
Neda Lotfizadeh, William F. Schiela, Barış Pekerten, Peng Yu, Bassel Heiba Elfeky, William Strickland, Alex Matos-Abiague, Javad Shabani
2023-03-03T12:58:17Z
http://arxiv.org/abs/2303.01902v3
# Superconducting Diode Effect Sign Change in Epitaxial Al-InAs Josephson Junctions ###### Abstract **There has recently been a surge of interest in studying the superconducting diode effect (SDE). The SDE could be observed in systems where the time reversal and inversion symmetries are broken. Here, we observe the SDE in epitaxial Al-InAs Josephson junctions (JJs) with strong Rashba spin-orbit coupling (SOC), and show that this effect strongly depends on the orientation of the in-plane magnetic field. In the presence of strong magnetic field, we observe a change of sign in the SDE. Simulation and measurement of supercurrent suggests that depending on the superconducting widths, \(W_{S}\), this sign change may not necessarily be related to \(0-\pi\) or topological transitions. We find the strongest sign change in junctions with narrow \(W_{S}\) is consistent with SOC-induced asymmetry of the critical current under magnetic-field inversion, while in wider \(W_{S}\), the sign reversal could be related to \(0-\pi\) transitions and topological superconductivity.** Nonreciprocity in non-centrosymmetric quantum systems has been well studied in semiconductors as they are essential for the rectification function in electrical diodes and solar cells. A recent breakthrough is the discovery of nonreciprocity in superconductors, which implies a progress towards designing superconducting diodes and has attracted vigorous interests due to its possible application in modern electronic circuits, sensors, and detectors [1; 2; 3; 4; 5; 6; 7; 8]. Nonreciprocal critical currents in superconductors occur when the magnitude of the critical supercurrent, \(I_{c}\), depends on the direction in which the current is swept. Theoretically, the so-called diode effect can occur when both inversion and time reversal symmetries are broken, where the latter can be achieved by magnetic proximity effect, in magnetic Josephson junctions, or by applying an external magnetic field. This effect has been attributed to the presence of finite-momentum Cooper pairs and the change in the nature of superconductivity [7; 9; 10; 11; 12]. Recent studies have suggested the existence of the SDE in JJs with large Rashba SOC [1; 2; 5; 6; 13; 14; 15]. The magnitude of the supercurrent in JJs with SOC depends on the direction of the magnetic field, as the Rashba and Dresselhaus effects can have different contributions [16; 17]. Therefore, investigating the SDE through a JJ can provide information about the SOC in its semiconductor. Planar JJs fabricated on epitaxial Al-InAs heterostructures are great candidates to study SDE due to their strong SOC [2; 3; 18]. Such devices have also shown signatures of topological phase transition when their time reversal symmetry is broken by an in-plane magnetic field [19; 20; 21]. Recently, Costa _et al._[13] have reported a sign reversal of the AC SDE in multi-channel JJs based on Al-InAs with strong SOC subjected to a magnetic field, and related it to a \(0-\pi\)-like transition induced by the Zeeman interaction in the device. Conversely, Banerjee _et al._[21] have proposed a SDE originating from finite-momentum Cooper pairing solely due to orbital effects, without invoking SOC or Zeeman interaction. In this work, we study epitaxial Al-InAs JJs with various superconductive contact widths, \(W_{S}\). 
By applying a magnetic field perpendicular to the current and parallel to the junction, we observe nonreciprocal critical currents due to the finite-momentum Cooper pairing enabled by the coexistence of strong Rashba SOC and the Zeeman interaction. We observe a SOC-induced shift, \(B_{*}\), of the magnetic field yielding the maximum of the critical current amplitude and use it to estimate the Rashba SOC strength in the JJ. In the absence of the magnetic field, time-reversal symmetry is restored and the SDE vanishes. However, the SDE can also vanish at certain finite magnetic fields and changes sign below the superconductor critical field, \(B_{c}\). We consider JJs with various superconducting widths, \(W_{S}\), and observe zeros of the SDE, across which the current difference \(\Delta I_{c}=I_{c}^{+}-|I_{c}^{-}|\) characterizing the SDE exhibits sign reversals at finite values of the magnetic field. We attribute the sign reversals to i) \(0-\pi\)-like jumps of the ground-state superconducting phase difference for wide \(W_{S}\) and ii) SOC-induced asymmetry of the critical current under magnetic-field inversion with respect to the field-shift, \(B_{*}\), for narrow \(W_{S}\). Our junctions are based on epitaxial superconducting Al thin films grown in-situ on InAs heterostructures by molecular beam epitaxy on an InP substrate followed by a graded buffer layer [18; 22; 23]. Typically, the critical field of such Al thin films is greater than 1 T. Fig. 1(a) shows a general schematic of our planar JJs. We study junctions with varying superconducting widths from \(W_{S}=0.15\,\mathrm{\SIUnitSymbolMicro m}\) to \(W_{S}=1\,\mathrm{\SIUnitSymbolMicro m}\). All the junctions are \(W=4\,\mathrm{\SIUnitSymbolMicro m}\) wide and are fabricated using Transene-based selective wet etching of Al. Fig. 1(b) shows a false colored scanning electron microscopy (SEM) image of a typical \(L=150\,\mathrm{nm}\) long junction with superconducting width of \(W_{S}=1\,\mathrm{\SIUnitSymbolMicro m}\). The Al-induced gap in our junctions is about \(\Delta=220\,\mu\)eV, estimated from the critical temperature \(T_{C}\). The semiconductor-superconductor transparency of our junctions is reported in our previous works [24, 25, 26]; the junctions can host modes with near unity transparency. We present the data from junctions with \(W_{S}=0.6\,\mathrm{\SIUnitSymbolMicro m}\) (JJ1) and \(W_{S}=0.15\,\mathrm{\SIUnitSymbolMicro m}\) (JJ2) in the main text and provide the additional measurements on three more junctions in the Supporting Information (SI). All the measurements in this study are performed at T \(\sim 30\,\mathrm{mK}\) in a dilution refrigerator equipped with a three-axis vector magnet. As shown in Fig. 1(b), the \(z\)-axis of the magnet is perpendicular to the sample plane, while the \(x\) and \(y\)-axes are in-plane components aligned parallel to the current and the junction, respectively. Fig. 1(c) presents the differential resistance as a function of the bias current and applied out-of-plane magnetic field for JJ1 with \(W_{S}=0.6\,\mathrm{\SIUnitSymbolMicro m}\) when the in-plane field is set to zero. The observed Fraunhofer pattern shows a hysteresis due to heating effects when bias is swept through zero [24, 27]. The critical current of the hot electrons branch, where the bias goes from high bias to zero, is clearly smaller than the critical current of the cold electrons branch going from zero to high bias.
This is due to the difference between the effective electronic temperature of the hot and cold electrons branches before the transition to or out of the superconducting state. Such a hysteretic behavior leads to different values of critical current on each side, as can be observed in Fig. 1(d) for JJ1, and has to be avoided for accurate SDE measurements. For the rest of this study, we only derive the values of the critical current from the hot electrons branch, going from high bias to zero bias as shown in Fig. 1(e). These two values are expected to be equal for both directions in reciprocal measurements of a conventional device without the presence of an in-plane magnetic field. By carefully aligning the magnet directions to the Josephson junction and eliminating the unwanted out-of-plane component of the magnetic field (\(B_{z}\)), we measure the critical current in the presence of an in-plane magnetic field. Fig. 2(a) and (b) show the measured magnitude of the critical current \(|I_{c}|\) for JJ1 and JJ2 when \(B_{z}=0\) T and the in-plane magnetic field with strength \(B_{x}\) is parallel to the current. Figure 1: [Color online] (a) A schematic of a junction of length \(L\), width \(W\) and superconducting width of \(W_{S}\) fabricated on the Al-InAs heterostructure. The superconducting contacts are made of Al and the quantum well (QW) consists of a layer of InAs grown between two layers of In\({}_{0.81}\)Ga\({}_{0.19}\)As. (b) False colored SEM image of a typical junction showing Al (blue) and QW (green) regions. The dashed line between the superconducting contacts is the \(W=4\,\mathrm{\SIUnitSymbolMicro m}\) wide and \(L=150\,\mathrm{nm}\) long etched gap. The magnetic field can be applied in three directions independently as shown on the SEM image. (c) Differential resistance as a function of the bias current and out-of-plane magnetic field of JJ1 with \(W_{S}=0.6\,\mathrm{\SIUnitSymbolMicro m}\) at zero in-plane magnetic field. A hysteresis due to the thermal effects can be seen. White dashed line indicates the position of the maximum of the critical current. (d) A line cut of (c) showing hysteresis in voltage versus current when the bias is swept from negative to positive. The values of supercurrent on each side are different due to the thermal effects. (e) Voltage versus current when the bias is swept from negative to zero (blue) and positive to zero (red). The values of supercurrent on two sides are expected to be equal in a conventional JJ. Figure 2: [Color online] Absolute value of the critical currents as a function of the in-plane magnetic field for samples JJ1 (left column) and JJ2 (right column). Top and bottom rows correspond to an in-plane magnetic field parallel and perpendicular to the current, respectively. The blue circles represent the magnitude of supercurrent when the bias is swept from negative to zero while red cross marks are for bias from positive to zero. The critical current amplitudes when the magnetic field is parallel to the current (\(\mathbf{B}\parallel\mathbf{x}\)) are nearly equal in both directions, indicating a vanishing SDE [(a) and (b)]. When the magnetic field is perpendicular to the current (\(\mathbf{B}\parallel\mathbf{y}\)), the amplitudes of the forward and reverse critical currents are different, signaling the presence of the SDE [(c) and (d)]. Note that nonreciprocity can be observed in both devices.
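To illustrate how the forward and reverse critical currents can be pulled out of such bias sweeps, here is a minimal, self-contained sketch that thresholds a synthetic V(I) trace; in the actual protocol each branch comes from a separate high-bias-to-zero sweep to avoid the heating hysteresis discussed above, and the toy junction model, noise level, and threshold below are placeholders rather than the analysis used for the reported data.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_iv(I, Ic_pos, Ic_neg, Rn=10.0, v_noise=1e-7):
    """Toy RSJ-like V(I): zero voltage inside (Ic_neg, Ic_pos), resistive outside."""
    V = np.zeros_like(I)
    above = I > Ic_pos
    below = I < Ic_neg
    V[above] = Rn * np.sqrt(I[above]**2 - Ic_pos**2)
    V[below] = -Rn * np.sqrt(I[below]**2 - Ic_neg**2)
    return V + v_noise * rng.standard_normal(I.size)

def critical_currents(I, V, v_threshold=1e-6):
    """Return (Ic_plus, Ic_minus) as the edges of the widest |V| < threshold window."""
    sc = np.abs(V) < v_threshold
    return I[sc].max(), I[sc].min()

I = np.linspace(-5e-6, 5e-6, 2001)                 # A, swept bias (hypothetical range)
V = synthetic_iv(I, Ic_pos=2.1e-6, Ic_neg=-1.9e-6)
Ic_p, Ic_m = critical_currents(I, V)
print(f"Ic+ = {Ic_p*1e6:.2f} uA, |Ic-| = {abs(Ic_m)*1e6:.2f} uA, "
      f"dIc = {(Ic_p - abs(Ic_m))*1e6:.2f} uA")
```

The reported values are taken as the maxima of the Fraunhofer patterns at each in-plane field, as described below; the sketch only illustrates the thresholding step.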
Blue circles and red cross marks correspond to measurement of the magnitude of the critical current when the bias is swept from negative high bias to zero and from positive high bias to zero, respectively. We find that the magnitude of the critical current in both directions is the same and there is no sign of nonreciprocity when the applied in-plane magnetic field is parallel to the current. The absence of SDE when the field is parallel to the current indicates that the dominant SOC in the junctions is of Rashba type, which is in agreement with our previous works [22; 28]. When the in-plane magnetic field is applied perpendicular to the current, in the \(y\)-direction, we find a difference between the forward and reverse critical currents. Fig. 2(c) and (d) show the dependence of the absolute value of critical current \(|I_{c}|\) on \(B_{y}\) for JJ1 and JJ2. We observe a clear nonreciprocal behavior, where the critical current is larger for positive than for negative bias when \(B_{y}>0\). This behaviour is reversed when the in-plane field direction is flipped to \(B_{y}<0\), in agreement with the theoretically expected symmetry relation \(I_{c}^{+}(B_{y})=|I_{c}^{-}(-B_{y})|\). Details of the experimental measurements and analyses are given in SI. We extract \(I_{c}\) at each in-plane magnetic field from the maximum of the Fraunhofer pattern at that field. The same measurements were done on three additional devices with \(W_{S}=0.4\), \(0.8\) and \(1\,\mu\)m and showed the same results, as presented in Fig. S5 and S6 of the SI. Those devices exhibit the same behavior as JJ1 and JJ2. In all cases we observe a shift, \(B_{*}\) in the magnetic fields at which the critical currents reach their maximum values when \(B_{y}\perp I\). The shift is positive (\(B_{*}>0\)) for the critical current corresponding to positive bias and negative (\(B_{*}<0\)) for the case of negative bias. This behavior is captured by our numerical tight-binding simulations (see details in SI). The result of the tight-binding simulation in Fig. 3(a) for a junction with \(W_{S}=0.15\,\mu\)m clearly shows the superconducting diode effect in the splitting of \(I_{c}^{\pm}\), as well as the symmetry with respect to the sign of \(B_{y}\). Moreover, \(B_{*}\) indicates the presence of SOC in the junction and obeys the symmetry relation, \(B_{*}(\alpha)=-B_{*}(-\alpha)\), where \(\alpha\) denotes the strength of the Rashba SOC. The \(\alpha\)-dependence of \(B_{*}\), obtained from the numerical simulations, is shown in Fig. 3(b). The black, dashed line is just a linear fit to guide the eye. Using the field-shift value extracted from the experimental data from our devices, the Rashba SOC strength is in range of 8-12 meV nm in our junctions. These values are in overall agreement with values of \(\alpha\) in InAs extracted through weak antilocalization measurements [22; 28]. Complementary to the numerical simulations, we provide approximate analytical expressions for the normalized critical currents at low field, \[\frac{|I_{c}^{\pm}|}{I_{0}}=1-b\left[1\pm c\;\mathrm{sgn}(B_{y}\mp B_{*}) \right](B_{y}\mp B_{*})^{2}, \tag{1}\] where \(I_{0}\) is the maximum absolute value of the critical current, \(b=(g^{*}\mu_{B}/4E_{T})^{2}\), \(c=k_{so}/k_{F}\), and \(B_{*}\) is the magnitude of the field at which \(I_{c}\) is maximum. 
Here \(g^{*}\) is the effective g-factor, \(\mu_{B}\) the Bohr magneton, \(k_{F}\) the Fermi wavevector, \(E_{T}=\hbar v_{F}/(2L)\) the Thouless energy, \(v_{F}\) the Fermi velocity, and \(k_{so}=\alpha m^{*}/\hbar^{2}\), with \(m^{*}\) representing the effective mass. Equation (1) was obtained in the limits \(L\ll\xi_{0}\), where \(\xi_{0}\) is the superconducting coherence length, and \(W_{S}\rightarrow\infty\). Therefore it is not in quantitative agreement with the finite \(W_{S}\) of the experimental devices. However, Eq. (1) can provide a qualitative description of the main trends exhibited by the critical currents. In fact, Eq. (1) reproduces well the functional behavior of the experimental data at low field. Fig. S7 shows the experimental data of all the junctions (red and blue marks) fitted to Eq. (1) (dashed lines) using \(b\), \(c\) and \(B_{*}\) as fitting parameters. Our experimental data together with numerical simulations suggest that the observed SDE originates from the finite-momentum Cooper pairing induced by the shift of the Fermi contours when the Zeeman interaction and the Rashba SOC coexist, as shown in Fig. 3(c)-(e). In the regime \(E_{Z}\ll\alpha k_{F}\), where \(E_{Z}\), \(\alpha\), and \(k_{F}\) denote the Zeeman energy, the Rashba SOC strength and the Fermi wave vector, respectively, the Fermi contours in the N region can be approximated as, \[k_{\lambda}=-\lambda k_{so}+\sqrt{k_{F}^{2}+k_{so}^{2}+\lambda\kappa^{2}\sin(\varphi-\theta)}, \tag{2}\] where \(\lambda=\pm 1\), \(\kappa=\sqrt{2m^{*}E_{Z}/\hbar^{2}}\), and \(\theta\) and \(\varphi\) determine the directions of the wave vector and magnetic field with respect to the \(x\)-axis, respectively. Figure 3: [Color online] (a) Tight-binding simulation of the critical currents at low magnetic field for a JJ with \(W_{S}=0.15\,\mu\)m. The interplay between Rashba SOC and Zeeman interaction shifts the maximum of the critical current to occur at \(B_{y}=B_{*}\neq 0\). This is in agreement with the experimental data shown in Fig. 2(c) and (d). (b) Tight-binding simulation of the shifting field, \(B_{*}\), as a function of the Rashba parameter. Using the experimentally extracted value for JJ1 and JJ2, \(B_{*}\approx 15\) mT, we can estimate the Rashba parameter to be around 10 meV nm in these devices. (c - e) Effects of magnetic field and Rashba SOC on the Fermi contours. Arrows with the same color (blue or red) indicate the spin of electrons in a Cooper pair. (c) Cooper pairs with total finite momentum (\(q\neq 0\)). In the absence of Rashba SOC, the Cooper pair wave function is invariant under the inversion of the magnetic field direction (i.e., \(|\psi(B_{y})\rangle=|\psi(-B_{y})\rangle\)) and the superconducting diode effect is suppressed. (d) In the presence of Rashba SOC, the total momentum of the Cooper pairs is zero when \(B_{y}=0\). (e) In the presence of both Rashba SOC and a magnetic field component perpendicular to the current, the magnetic-field inversion symmetry of the wave function breaks down and the finite-momentum Cooper pairs yield a non-reciprocal response.
The \(x\)-component of the total momentum of the pairs is, \[q\approx\frac{\kappa^{2}\sin\varphi}{\sqrt{k_{F}^{2}+k_{so}^{2}}} \tag{3}\] and the Cooper pair wave function across the junction can be approximated as, \[|\psi\rangle=|\uparrow\downarrow\rangle\ e^{iqx}+|\downarrow\uparrow\rangle\ e^{-iqx} \tag{4}\] and can be rewritten in terms of singlet, \(|S\rangle=|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle\) and triplet, \(|T\rangle=|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle\) components [29], \[|\psi\rangle=\cos(qx)|S\rangle+i\sin(qx)|T\rangle. \tag{5}\] For \(E_{Z}\ll\alpha k_{F}\), an inversion of the magnetic field orientation reverses the direction of the Fermi contours shift without affecting the spin orientation. Therefore, the coexistence of the singlet and triplet components in the presence of SOC breaks the inversion symmetry of the wave function with respect to the magnetic field direction, resulting in a non-reciprocal response with distinct forward and reverse critical currents. However, the SDE vanishes when the magnetic field is oriented along the \(x\)-axis (see upper row of Fig. 2) for in this case \(\varphi=0\) and \(q=0\) in Eq. (3). To further study the SDE in our junctions, we investigate the nonreciprocity of the critical currents at higher in-plane magnetic fields perpendicular to the current (\(B_{y}\)) in the devices JJ1 and JJ2. Fig. 4(a) and (b) show the absolute value of the critical currents for each junction as a function of \(B_{y}\). A dip and peak in \(|I_{c}|\) of JJ1 is observed around \(B_{y}\sim 0.6\,\mathrm{T}\). Previous studies have suggested such a behavior can be related to the closing and reopening of the superconducting gap [30; 20] and a topological phase transition. Our numerical simulations exhibit a phase transition at magnetic field near \(0.6\,\mathrm{T}\) for \(W_{S}=0.6\mu\mathrm{m}\) as shown in Fig. S9. In contrast, JJ2 data does not show a suppression of supercurrent in Fig. 4(b). Numerical simulations also do not show a phase transition for the ground state of JJ2 with \(W_{S}=0.15\mu\mathrm{m}\) below \(1\) T as shown in Fig. S10.. Fig. 4(c) plots the difference between the absolute values of the critical currents for positive and negative biases \(\Delta I_{c}=I_{c}^{+}-|I_{c}^{-}|\), as a function of \(B_{y}\). The results evidence the anti-symmetric character of \(\Delta I_{c}\), which for both junctions changes its sign when the magnetic field \(B_{y}\) is inverted. However, \(\Delta I_{c}\) also exhibit zeros at certain values of \(B_{y}\), across which sign reversals not related to magnetic field inversion are observed. This is particularly apparent for the device JJ2 (blue symbols) at fields \(B_{y}\approx\pm 0.35\) T, as shown in Fig. 4(c). From a comparison between the experimental results and the numerical simulations, we identify two possible mechanisms responsible for the zeros of \(\Delta I_{c}\) and their associated SDE sign reversals. According to Eq. (1), the SOC induces an asymmetry in the critical currents under the magnetic field inversion with respect to \(B_{*}\), with \(|I_{c}^{\pm}(B_{*}+\delta B)|\neq|I_{c}^{\pm}(B_{*}-\delta B)|\). This asymmetry is apparent in Fig. 2(c)-(d) and Fig. 4(a)-(b). The coexistence of a finite magnetic shift, \(B_{*}\), and a strong SOC-induced critical current asymmetry can cause \(|I_{c}^{+}|\) and \(|I_{c}^{-}|\) to cross at a finite magnetic field and produce a sign reversal in the SDE without involving \(0-\pi\)-like transitions. 
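The following minimal sketch makes this first mechanism concrete by evaluating Eq. (1) directly; \(b\), \(c\), and \(B_{*}\) below are illustrative placeholders (with \(c\) exaggerated so the crossing falls inside the plotted window), not the fitted parameters of Fig. S7.

```python
import numpy as np

def ic_normalized(By, sign, b, c, Bstar):
    """Eq. (1): |Ic^+/-| / I0 for sign = +1 (forward) or -1 (reverse)."""
    dB = By - sign * Bstar
    return 1.0 - b * (1.0 + sign * c * np.sign(dB)) * dB**2

# Illustrative low-field parameters (c exaggerated for visibility, not fitted values).
b, c, Bstar = 2.0, 0.45, 0.015        # b in T^-2, c dimensionless, B* in tesla

By = np.linspace(-0.4, 0.4, 8001)
Ic_plus = ic_normalized(By, +1, b, c, Bstar)
Ic_minus = ic_normalized(By, -1, b, c, Bstar)
dIc = Ic_plus - Ic_minus              # normalized Delta Ic = (Ic+ - |Ic-|) / I0

# Symmetry relation expected from the text: Ic+(By) == |Ic-(-By)|
assert np.allclose(Ic_plus, ic_normalized(-By, -1, b, c, Bstar))

# Finite-field zero crossings of Delta Ic (sign reversals not caused by inverting By).
crossings = By[:-1][np.diff(np.sign(dIc)) != 0]
print("Delta Ic changes sign near By =", np.round(crossings[np.abs(crossings) > 1e-3], 3), "T")
```

Even in this toy evaluation \(\Delta I_{c}\) vanishes at \(B_{y}=0\) and again at a finite field of a few times \(B_{*}\), i.e., Eq. (1) alone can generate a sign reversal of the SDE without invoking any \(0-\pi\) physics.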
This situation is apparent in JJ2 from Fig. 4(b) and (c), where a critical current crossing and the corresponding sign reversal of \(\Delta I_{c}\) at \(B_{y}\approx 0.35\) T are observed, respectively. The numerical simulations are in good agreement with the experimental data of JJ2, predicting a critical current crossing at \(B_{y}=0.4\) T, which is unrelated to the \(0-\pi\) transition at \(B_{y}\approx 1\) T (see Fig. S10 in SI).

Figure 4: [Color online] Absolute value of supercurrent as a function of in-plane magnetic field perpendicular to the current in (a) JJ1 and (b) JJ2 at high magnetic fields. (c) Difference \(\Delta I_{c}=I_{c}^{+}-|I_{c}^{-}|\) between the absolute value of the critical currents measured under positive and negative biases as a function of \(B_{y}\). Red squares and blue triangles correspond to JJ1 and JJ2, respectively. The insets are zoom-ins of the data in the range of \(0.4\) T to \(0.8\) T.

As discussed above, the SDE originates from finite-momentum Cooper pairing qualitatively described by a wave function lacking inversion symmetry with respect to \(B_{y}\) when both Rashba SOC and Zeeman interaction are present. However, it follows from Eq. (5) that the inversion symmetry with respect to \(B_{y}\) is reestablished when either the singlet or triplet component vanishes at the S/N interfaces located at \(x=0\) and \(x=L\), i.e., when \(|q|L=n\pi/2\), where \(n\) is an integer. Therefore, junctions with \(L\ll\xi_{0}\ll W_{S}\) exhibit zeros of \(\Delta I_{c}\) when, \[B_{y}\approx n\frac{\pi}{g^{*}\mu_{B}}E_{T}\left(1+\frac{k_{so}^{2}}{2k_{F}^{2 }}\right)\,r_{l}. \tag{6}\] The re-scaling factor \(r_{l}=L/(2W_{S}+L)\) has been introduced to account for the fact that the Zeeman field is likely present over the whole system and not only in the semiconductor region. The zeros (and their associated SDE sign reversals) corresponding to odd integers in Eq. (6), say \(n=(2m+1)\) (with \(m\) an integer), can be associated with \(0-\pi\)-like transitions the junction would experience close to equilibrium. Indeed, in the absence of currents, the superconducting phase difference self-tunes to a value \(\phi_{GS}\) (referred to as the ground-state phase difference) that minimizes the free energy of the system. For \(\cos(qL)>0\) the singlet component of the wave function at the two superconducting leads has the same sign, indicating that \(\phi_{GS}=0\). However, when \(\cos(qL)=0\) [i.e., \(qL=(2m+1)\pi/2\)], the ground-state phase jumps from \(0\) to \(\pi\), and the singlet components at the two superconducting leads acquire opposite signs for \(\cos(qL)<0\). Therefore, SDE sign reversals corresponding to odd values of \(n\) in Eq. (6) are associated with \(0-\pi\)-like (or \(\pi-0\)-like) transitions, while additional sign reversals are expected to occur between \(0-\pi\) and \(\pi-0\)-like transitions, when \(n\) is even. The \(0-\pi\)-like ground-state phase jump has been identified as a possible signature of topological phase transitions in planar JJs [20; 30]. Hence, the nodes of \(\Delta I_{c}\) corresponding to odd \(n\) may indirectly signal a transition into the topological superconducting state. However, such a signature is not conclusive, especially in JJs with narrow superconducting leads, where ground-state phase jumps are not necessarily associated with topological phase transitions [31; 32].
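As a rough numerical illustration of Eq. (6), the minimal sketch below evaluates the fields at which \(\Delta I_{c}\) is expected to vanish. All parameter values (\(g^{*}\), \(v_{F}\), \(L\), \(W_{S}\) and \(k_{so}/k_{F}\)) are assumptions chosen only to exercise the formula; they are not the fitted parameters of JJ1 or JJ2.

```python
import numpy as np

# Assumed illustrative parameters (not the fitted values of JJ1/JJ2)
hbar = 1.055e-34        # J s
mu_B = 9.274e-24        # Bohr magneton, J/T
g_eff = 12.0            # effective g-factor (assumed)
v_F = 5.0e5             # Fermi velocity, m/s (assumed)
L = 100e-9              # junction length, m (assumed)
W_S = 600e-9            # superconducting lead width, m (assumed)
kso_over_kF = 0.05      # k_so / k_F ratio (assumed)

E_T = hbar * v_F / (2 * L)      # Thouless energy E_T = hbar * v_F / (2L)
r_l = L / (2 * W_S + L)         # re-scaling factor r_l of Eq. (6)

# Fields at which Delta I_c vanishes according to Eq. (6), for n = 1..4
for n in range(1, 5):
    B_y = n * np.pi / (g_eff * mu_B) * E_T * (1 + kso_over_kF**2 / 2) * r_l
    print(f"n = {n}: B_y ~ {B_y:.2f} T")
```

With these assumed numbers the first zero (\(n=1\)) falls in the sub-tesla range, i.e., the same order of magnitude as the experimentally observed sign reversals, but the exact values depend entirely on the chosen parameters.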
The numerical simulations for devices JJ1 and JJ2 reveal \(0-\pi\)-like jumps of the ground-state phase at \(B_{y}\approx\pm 0.6\) T and \(B_{y}\approx\pm 1\) T, respectively [see Figs. S9 and S10(b) in SI], suggesting that if \(\Delta I_{c}\) changes sign at higher fields [see insets in Fig. 4(c)], these sign reversals could be associated with \(0-\pi\)-like transitions with \(n=1\). However, the measured current difference, \(\Delta I_{c}\), is too small to conclusively establish the existence of these sign reversals in the range of \(0.6\) T to \(1\) T. In summary, we have studied the superconducting diode effect in epitaxial InAs/Al Josephson junctions with different superconducting widths and showed that the SDE depends on the orientation of the applied in-plane magnetic field in the system. By measuring the supercurrent of the junction, we observe the SDE only when the in-plane field is perpendicular to the current. We observe a shift in the magnetic field that yields the maximum critical current and obtain an analytical expression describing the critical current behavior at low magnetic field. We propose a method for estimating the Rashba parameter from the measurement of the magnetic field shift of the SDE and numerical simulations. The results are in good agreement with values previously reported for our system. We also measure the SDE at high magnetic fields and observe a sign change in the \(\Delta I_{c}\) of the \(W_{S}=0.15\,\mu\)m junction at \(B_{y}\approx\pm 0.35\) T. Using our tight-binding simulations, we conclude that this sign change is not necessarily an indicator of \(0-\pi\) or topological transitions in the system. This work is supported in part by the DARPA Topological Excitations in Electronics (TEE) program under grant No. DP18AP900007, and the U.S. Office of Naval Research (ONR) through Grants No. N000142112453 and MURI No. N000142212764.
2303.08098
Single Event Effects Assessment of UltraScale+ MPSoC Systems under Atmospheric Radiation
The AMD UltraScale+ XCZU9EG device is a Multi-Processor System-on-Chip (MPSoC) with embedded Programmable Logic (PL) that excels in many Edge (e.g., automotive or avionics) and Cloud (e.g., data centres) terrestrial applications. However, it incorporates a large amount of SRAM cells, making the device vulnerable to Neutron-induced Single Event Upsets (NSEUs) or otherwise soft errors. Semiconductor vendors incorporate soft error mitigation mechanisms to recover memory upsets (i.e., faults) before they propagate to the application output and become an error. But how effective are the MPSoC's mitigation schemes? Can they effectively recover upsets in high altitude or large scale applications under different workloads? This article answers the above research questions through a solid study that entails accelerated neutron radiation testing and dependability analysis. We test the device on a broad range of workloads, like multi-threaded software used for pose estimation and weather prediction or a software/hardware (SW/HW) co-design image classification application running on the AMD Deep Learning Processing Unit (DPU). Assuming a one-node MPSoC system in New York City (NYC) at 40k feet, all tested software applications achieve a Mean Time To Failure (MTTF) greater than 148 months, which shows that upsets are effectively recovered in the processing system of the MPSoC. However, the SW/HW co-design (i.e., DPU) in the same one-node system at 40k feet has an MTTF = 4 months due to the high failure rate of its PL accelerator, which emphasises that some MPSoC workloads may require additional NSEU mitigation schemes. Nevertheless, we show that the MTTF of the DPU can increase to 87 months without any overhead if one disregards the failure rate of tolerable errors since they do not affect the correctness of the classification output.
Dimitris Agiakatsikas, Nikos Foutris, Aitzan Sari, Vasileios Vlagkoulis, Ioanna Souvatzoglou, Mihalis Psarakis, Ruiqi Ye, John Goodacre, Mikel Lujan, Maria Kastrioto, Carlo Cazzaniga, Chris Frost
2023-02-21T11:56:03Z
http://arxiv.org/abs/2303.08098v1
# Single Event Effects Assessment of UltraScale+ MPSoC Terrestrial applications ###### Abstract The AMD UltraScale+ XCZU9EG device is a Multi-Processor System-on-Chip (MPSoC) with embedded Programmable Logic (PL) that excels in many Edge (e.g., automotive or avionics) and Cloud (e.g., data centres) terrestrial applications. However, it incorporates a large amount of SRAM cells, making the device vulnerable to Neutron-induced Single Event Uspets (NSEUs) or otherwise soft errors. Semiconductor vendors incorporate soft error mitigation mechanisms to recover memory upsets (i.e., faults) before they propagate to the application output and become an error. But how effective are the MPSoC's mitigation schemes? Can they effectively recover upsets in high altitude or large scale applications under different workloads? This article answers the above research questions through a solid study that entails accelerated neutron radiation testing and dependability analysis. We test the device on a broad range of workloads, like multi-threaded software used for pose estimation and weather prediction or a software/hardware (SW/HW) co-design image classification application running on the AMD Deep Learning Processing Unit (DPU). Assuming a one-node MPSoC system in New York City (NYC) at 40k feet, all tested software applications achieve a Mean Time To Failure (MTTF) greater than 148 months, which shows that upsets are effectively recovered in the processing system of the MPSoC. However, the SW/HW co-design (i.e., DPU) in the same one-node system at 40k feet has an MTTF = 4 months due to the high failure rate of its PL accelerator, which emphasises that some MPSoC workloads may require additional NSEU mitigation schemes. Nevertheless, we show that the MTTF of the DPU can increase to 87 months without any overhead if one disregards the failure rate of tolerable errors since they do not affect the correctness of the classification output. Neutron radiation testing, Single Event Effects, MPSoC terrestrial applications ## I Introduction Multi-Processor System-on-Chip (MPSoC) devices with embedded Field Programmable Gate Array (FPGA) logic are used in many applications such as avionics, automotive, medical, telecommunication, and data centres due to their software flexibility and computational efficiency [1]. Although semiconductor vendors provide a rich set of MPSoC models, each having different computing capabilities, their fundamental architecture consists of two subsystems integrated into a single chip; the Processing System (PS) and the Programmable Logic (PL), both tightly connected through on-chip high-speed interfaces. The PS subsystem commonly integrates one or more multiprocessors and all kinds of System on Chip (SoC) peripherals like DDR and DMA memory controllers and high-speed (e.g., SATA, PCIe) and general (e.g., Ethernet, CAN, SPI) connectivity interfaces. The PL part is an FPGA, providing the means to implement application-tailored hardware accelerators to offload PS workloads and improve the performance-to-watt ratio metrics of the system. Modern MPSoCs store their configuration in Static Random Access Memory (SRAM) cells, which allows them to be reprogrammed practically unlimited times, implementing new or modifying existing hardware accelerators. This is especially useful in data centres, where the processing system of the MPSoC can be reprogrammed multiple times depending on the needs of an application without wearing out the device. 
However, MPSoCs pose a unique challenge compared to multiprocessor SoCs that do not integrate an FPGA; they are more vulnerable to radiation effects. Unfortunately, reprogrammability comes at the expense of a large number of SRAM cells to store the state and configuration of the device, making it vulnerable to Single Event Effects (SEEs) [2, 3]. Specifically, highly-energised particles like protons originating from deep space (i.e., cosmic rays) and our Sun (i.e., solar rays) collide with nitrogen and oxygen atoms of our Earth's upper atmosphere, producing secondary particles like neutrons and muons [4, 5]. In turn, neutrons with an energy \(\geq\) 10 MeV interact with the atoms of the device's semiconductor material, causing SEEs, especially Neutron-induced Single Event Upsets (NSEUs), otherwise known as soft errors. NSEUs can upset (i.e., corrupt) the on-chip SRAM memories of the PS, like the multiprocessor's register file and caches [6], as well as the application (e.g., flip-flop state) and configuration (e.g., programmable routing resources) memory of the PL [2]. NSEUs are not permanent, but their effects can threaten system dependability if not well understood and handled in an MPSoC. The failure modes caused by an NSEU range from _unresponsive_ errors, for example, an operating system (OS) or program process crash, to _Silent Data Corruption (SDC)_ errors [7]. SDCs, i.e., erroneous program outputs that go undetected, can have catastrophic consequences in an application; for instance, they can put an airliner into an uncontrolled steep dive [8]. To fight against the effects of soft errors in MPSoCs, semiconductor vendors incorporate various NSEU mitigation mechanisms in their devices. This raises the following questions: Can the MPSoC's embedded mechanisms effectively mitigate soft errors under all environmental conditions and workloads? For example, what is the Mean Time To Failure (MTTF) of an MPSoC application when it operates at high altitude, where the radiation flux can be 500x greater than at sea level [8], or when a system uses multiple MPSoCs in large-scale infrastructures like data centres? Are there any types of MPSoC applications that can achieve high MTTF despite an increased rate of memory upsets? This article aims to answer the above research questions with a solid methodology that entails accelerated neutron radiation testing and dependability analysis. Accelerated radiation testing is the standard and accurate way to trigger neutron-induced SEEs in Integrated Circuits (ICs) to measure their cross-section, i.e., their vulnerability to radiation-induced events. In this work, we exposed a popular MPSoC, the AMD UltraScale+ XCZU9EG, to an accelerated radiation source closely resembling Earth's neutron spectrum for high energies (e.g., \(\geq 10\) MeV) to characterise its sensitivity to SEEs. The measured data were then projected and scaled to the expected neutron flux of a target environment to estimate dependability metrics like the MTTF of the IC under different workloads and configurations. The radiation experiments were performed at ChipIr [9], an ISIS neutron and muon facility instrument at the Rutherford Appleton Laboratory, UK. Compared to previous works that have performed accelerated radiation testing on the XCZU9EG [10, 11, 12, 13], we make the following contributions: * The MPSoC is tested on a broader range of workloads that exercise the device more exhaustively to reveal more accurate FIT rates than those reported in the literature.
We evaluate the cross sections of single-threaded software-only (SW-only) benchmarks that run bare-metal and of complex SW-only Linux-based multi-threaded applications used for weather prediction and pose estimation. Finally, we irradiated a software-hardware (SW/HW) co-design application, specifically the AMD Deep-learning Processing Unit (DPU) running image classification. * The measured cross-sections of each application are examined under the lens of MTTF and average upset rate, assuming a one-node MPSoC system operating at sea level (e.g., automotive) or at 40k feet (airliner avionics), as well as a 1000-node MPSoC system (e.g., data centre). This helps us understand how well the embedded soft error mitigation mechanisms of the XCZU9EG cope with radiation effects in various terrestrial environments, workloads, and device deployments. * We evaluate the MTTF of the MPSoC for workloads that are inherently resilient to errors. * A fine-grain cross-section characterisation of the PS's Cortex-A53 processor caches and PL memories is provided. For example, we report cross-sections of L1 data and L1 instruction caches, while previous works provide only their average cross-section. Our results show that the MPSoC will experience, on average, one upset in its PS or PL memories every 24k and 904 months when operating as a one-node MPSoC system in New York City (NYC) at sea level. However, the average upset rates of the PL and PS memories increase to 1.81 and 48 months per upset, respectively, when the same system operates at 40k feet altitude, and double in the 1000-node MPSoC system at sea level. Notably, most of the PS upsets are successfully recovered by the soft error mitigation mechanisms of the MPSoC, ensuring a reliable execution of the SW-only workloads without many SDCs or processor crashes. For instance, all tested SW-only applications achieve MTTF \(\geq\) 148 months, assuming the one-node MPSoC system at 40k feet. However, the SW/HW co-design in the same system has MTTF = 4 months due to the high FIT rate of its PL DPU accelerator. This points out that some SW/HW MPSoC applications operating at high altitudes or on a large scale may need additional soft error mitigation techniques (e.g., hardware redundancy) to improve reliability. Nevertheless, we show that the MTTF of the DPU application can be improved by 22\(\times\) if one omits the FIT rate of tolerable output errors since these do not play any role in the correctness of the final classification result. The rest of the paper is organised as follows. Section II provides background on the effects of neutron radiation in ICs and reviews related work on previous accelerated radiation tests of the AMD UltraScale+ MPSoC. Section III outlines the experimental methodology, radiation test facility, and target boards we used during the experiments. Sections IV and V detail the experimental setup, methodology and results of the MPSoC designs and applications we evaluated under accelerated neutron radiation testing. Section VI assesses the reliability of the applications in various environmental conditions and device deployments. Section VII presents concluding remarks. ## II Background and Related work In this section, we provide the necessary background to understand how atmospheric neutrons can reduce the reliability of MPSoC terrestrial applications. We also report results from previous atmospheric-like neutron radiation experiments on AMD 16nm FinFET MPSoCs.
### _AMD 16nm FinFET XCZU9EG MPSoC_ The AMD 16nm FinFET XCZU9EG MPSoC is a computing platform that incorporates highly-reconfigurable processing elements to excel in many Edge and Cloud applications. As mentioned, the device integrates the following: 1) a Processing System (PS) that incorporates a quad-core Arm Cortex-A53 Application Processing Unit (APU) running at up to 1.5 GHz, 2) a dual-core Arm Cortex-R5F real-time processor, 3) an Arm Mali-400 MP2 graphics processing unit and 4) UltraScale+ Programmable Logic (PL). The PS is the heart of the MPSoC, including on-chip memory, external memory interfaces, and a rich set of peripheral connectivity interfaces. The XCZU9EG features NSEU mitigation schemes in 1) the PS, e.g., parity check and Single Error Correction Double Error Detection (SECDED) in the APU caches and the on-chip memory (OCM), and 2) the PL configuration and application memories via SECDED mechanisms and layout interleaving schemes to mitigate the effects of multi-bit upsets (MBUs). ### _Cross-section and failure rate of digital integrated circuits_ Many Integrated Circuits (ICs) operating in large-scale or high-reliability systems are tested with accelerated radiation experiments to characterise their static and dynamic cross-sections under various types of highly-energised particles, like alpha particles or neutrons. The static cross-section quantifies the probability of a Single Event Effect (SEE) occurring when highly-energised particles like neutrons collide with the nuclei of the semiconductor material. Mathematically stated: \[\text{Cross-section}=\frac{\text{Number of Events}}{\text{Particle Fluence}}=\frac{\#\text{events}}{\Phi}, \tag{1}\] where fluence (represented by the upper-case symbol \(\Phi\)) defines the number of particles incident on a surface in a given period divided by the area of the surface. The larger the static cross-section, the more likely a particle will react with the semiconductor material of the device and the more vulnerable it will be to radiation-induced events like memory upsets. Once one characterises the static cross-section of a target device, say the NSEU cross-section, it is easy to calculate the expected Soft Error Rate (SER) of a device for a given particle flux. For example, the average neutron particle flux in NYC at sea level is approximately 13 neutrons per cm\({}^{2}\) per hour [4], which yields the following Failures In Time (FIT) rate: \[\text{FIT}=\text{Static cross-section}\times\frac{13\text{ neutrons}}{\text{cm}^{2}\times\text{hour}}\times 10^{9}\text{ hours}, \tag{2}\] that is, the average number of failures (e.g., number of memory upsets) that occur within one billion hours of operation [4]. However, not all radiation effects cause an observable error or a system crash in an MPSoC application [14]. For example, a configuration upset in an unused Look Up Table (LUT) of the PL will probably not affect the operation of a hardware accelerator [15]. A memory upset in a register of the APU that is not read but re-written by a new value during the execution of an application will likely not introduce an error [7]. In a nutshell, not all radiation-induced events (e.g., memory upsets) lead to an application error (e.g., SDC). The dynamic cross-section captures the likelihood of application errors (i.e., only faults that resulted in an output error) for a given particle fluence. It can be calculated with (1) by substituting the number of events with the number of application errors.
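The following minimal sketch strings Eqs. (1) and (2) together, including the MTTF conversion and the per-system scaling discussed in the following subsection. The event count and fluence used here are placeholders, not measurements from this work.

```python
# Cross-section, FIT and MTTF bookkeeping following Eqs. (1) and (2).
NYC_FLUX = 13.0           # neutrons / (cm^2 * hour) at NYC sea level [4]

def cross_section(num_events: float, fluence: float) -> float:
    """Eq. (1): events per unit fluence [cm^2]."""
    return num_events / fluence

def fit_rate(sigma: float, flux: float = NYC_FLUX) -> float:
    """Eq. (2): failures per 1e9 device-hours for a given flux [1/(cm^2*h)]."""
    return sigma * flux * 1e9

def mttf_hours(fit: float) -> float:
    """MTTF [hours] = 1e9 / FIT."""
    return 1e9 / fit

sigma = cross_section(num_events=100, fluence=1.0e11)   # placeholder: 100 upsets over 1e11 n/cm^2
fit_one_device = fit_rate(sigma)                          # one device, NYC sea level
fit_40k_feet = fit_rate(sigma, flux=NYC_FLUX * 500)       # ~500x higher flux at 40k feet [8]
fit_1000_nodes = 1000 * fit_one_device                    # FIT_overall = X * N for an N-node system

print(mttf_hours(fit_one_device), mttf_hours(fit_40k_feet), mttf_hours(fit_1000_nodes))
```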
Practitioners who want to assess reliability in terms of Mean Time To Upset (MTTU) or Mean Time To Failure (MTTF), in other words, the average rate at which memory upsets or application errors occur, can apply the following simple conversion: MTTU or MTTF [hours] = \(10^{9}\) / FIT. ### _Neutron-induced failures in MPSoC-based terrestrial applications_ Fortunately, most MPSoC terrestrial applications would not experience failures due to atmospheric neutron radiation. The sensitivity per device to NSEUs is extremely low [2]. However, the radiation effects increase dramatically when MPSoCs are used in large-scale applications (e.g., data centres) or operated at high altitude (e.g., airliner avionics). Specifically, the rate of NSEUs increases for the following reasons. _The number of utilised devices in the application increases:_ Deploying large-scale data centre applications on hundreds of thousands of MPSoCs collectively increases the total susceptibility to radiation-induced errors over all utilised devices in the system. In other words, if the FIT rate of one IC is \(X\), the overall FIT rate of a system incorporating \(N\) such ICs will be FIT\({}_{\text{overall}}\) = \(X\times N\). In [2], the authors estimated that the MTTF due to neutron-induced errors on a hypothetical one-hundred-thousand-node FPGA system in Denver, Colorado, would be 0.5 to 11 days depending on the workload. Indeed, projections from technology evolution roadmaps indicate that the MTTF of data centre computing systems may reach a few minutes [16]. Given that the demand for FPGAs in cloud and data centre facilities will increase in the upcoming decade, the likelihood of NSEU-related failures may become a significant problem [17]. _The device operates at high altitudes:_ For example, an avionics system on a flight path above 60 deg latitude at 40k feet altitude would experience an approximately 500 times larger neutron flux than if the same system were operating in NYC at sea level [8]. As we show in section VI, the average upset rate (i.e., MTTU) of PL memories in an XCZU9EG MPSoC in NYC at sea level is 75 years when using the static cross-sections measured in this work. However, using the same device at 60 deg latitude and 40k feet altitude will increase the upset rate of the memories to one upset per 1.8 months. As mentioned, not all upsets will lead to an error since practical designs commonly do not utilise 100% of their resources, and some upsets are logically masked during circuit operation [7, 14, 15]. Nevertheless, given the tens of thousands of flights per day, the possibility of an SRAM cell upset impacting the safety of a flight is high if the necessary soft error mitigation schemes on the MPSoC design are not in place. ### _Characterisation of the AMD XCZU9EG MPSoC under accelerated atmospheric-like radiation testing_ Previous works have tested the AMD XCZU9EG MPSoC with highly-energised (\(\geq\)10 MeV) neutron and 64 MeV mono-energetic proton accelerated radiation experiments. A 64 MeV mono-energetic proton source approximates the atmospheric neutron spectrum well and has a lower beamtime cost than a neutron beam [18]. However, highly-energised neutrons model the atmospheric radiation environment more precisely and are generally preferred for characterising the cross-section of ICs. AMD characterised the XCZU9EG MPSoC under neutrons at the Los Alamos Neutron Science Center (LANSCE) weapons neutron research facility and under mono-energetic protons at the Crocker Nuclear Laboratory [18].
The PS and PL components of the XCZU9EG were exercised with the Xilinx proprietary System Validation Tool (SVT) [18], which executed hundreds of tests per second, resulting in high test coverage. The authors concluded that the CRAM and BRAM static cross-section per bit of the XCZU9EG was reduced by 20x and 16x, respectively, compared to the AMD Kintex-7 FPGA that uses 28nm TSMC's HKMG process technology. In terms of MBUs, 99.99% of the events were correctable due to the interleaving layout of the MPSoC. The PS was very reliable, with an overall 1 FIT calculated by projecting the measured cross-sections during the radiation tests to the neutron flux of NYC at sea level. Interestingly, no unrecoverable event in the PS's SRAM structures was reported. All accelerated radiation tests conducted by AMD are officially reported in their UG116 device reliability user guide [11]. Christian Johanson et al. performed neutron radiation experiments on the XCZU9EG MPSoC at ChipIR [12]. The authors instantiated the AMD Soft Error Mitigation (SEM) IP [19] to collect and post-analyse reports regarding upsets in the device's configuration memory. The BRAMs were initialised with predefined patterns and compared with a golden reference to detect application memory upsets. The most comprehensive accelerated neutron radiation testing results for the XCZU9EG have been reported in [20] and [13] by the _Configurable Computing Laboratory_ of Brigham Young University (BYU). Specifically, Jordan D. Anderson et al. conducted neutron radiation experiments at LANSCE facility to characterise the NSEU cross-sections of 1) PL memories (i.e., CRAM and BRAM), 2) baremetal single-threaded and Linux-based multi-threaded benchmarks running on the APU (each core run a Dhrystone benchmark - see Lnx/Dhr in Table I), and 3) APU memories (i.e., OCM and caches). Notably, the authors did not identify any SDC or processor hang errors during the tests of the APU benchmarks but stated that more beamtime (i.e., fluence) might have been required to obtain statistically significant results [13]. David S. Lee et al. from the same group characterised the single-event latch-up (SEL) [4] cross-section of the XCZU9EG MPSoC under neutrons at LANSCE. The authors tested a technique to detect and recover SELs by monitoring the PMBUS-interfaced power regulators of the ZCU102 board that hosted the device. SELs were observed on the device's VCCAUX and the core supply VCCINT power rails, which were successfully detected and recovered by power cycling the device [20]. Table I summarises the PS and PL cross-sections of the XCZU9EG MPSoC collected by accelerated atmospheric-like radiation tests. Please note that although the authors in [13] did not observe any SDC or crash during the software tests, they calculated the cross sections by assuming a single error. This is why the dynamic cross-sections for AES, MxM, and Lnx/Dhr in Table I are not zero even though no errors were observed. Also, note that [18] does not provide a detailed characterisation of the PS, e.g., SDC or cache cross sections, as is done in [13] and this work. 
In addition to the detailed NSEU characterisation of the embedded memories of the PS and PL, this paper also studies the behaviour of complex SW-only and SW/HW applications in the presence of NSEUs to analyse: 1) the reliability of UltraScale+ MPSoC-based systems at the application level in terrestrial environments, 2) the effectiveness of the soft error mitigation approaches embedded in the UltraScale+ devices, 3) the reliability of emerging error-resilient applications, e.g., deep neural network (DNN) inference or pose estimation. ## III Experiments Overview ### _Experimental Methodology Overview_ It is challenging to perform accelerated radiation testing on a complex computing platform like the XCZU9EG MPSoC as it contains multiple components, each affecting the application differently. To overcome the mentioned challenge, we adopted a bottom-up experimental methodology. Initially, we tested the PL and PS parts of the device separately and then gradually moved to experiments that tested the PS and PL parts in cooperation. Specifically, we first conducted some basic tests to measure the baseline NSEU and Single Event Functional Interrupt (SEFI) [4] cross-sections of all PL memories and to evaluate the SDC and crash (i.e., processor hang) cross-sections of SW-only single-threaded baremetal benchmarks. After the basic tests, we moved on to assess higher-complexity applications. In detail, we evaluated the SDC and crash cross-sections of several multi-threaded SW-only High-Performance Computing (HPC) applications and one popular software/hardware (SW/HW) co-design for DNN acceleration. In summary, we performed accelerated neutron radiation testing on the following applications. * Basic tests: * A HW-only PL synthetic benchmark that utilises 100% of the device's PL resources [21]. * Several SW-only single-threaded baremetal benchmarks, each one having a different computational and memory footprint. * Complex tests: * Two complex SW-only multi-threaded applications running under Linux OS. Specifically: * LFRic, which is a compute-intensive kernel for weather and climate prediction [22]. * Semi-direct Monocular Visual Odometry (SVO), which is used in automotive and robotic systems for pose estimation [23]. * One SW/HW multi-threaded co-design application running under Linux OS. Specifically, the AMD Vitis DPU [24], which is a popular Convolutional Neural Network (CNN) accelerator.

TABLE I: Summary of accelerated atmospheric-like radiation experiments for the AMD XCZU9EG MPSoC

| Ref. | Source | Fluence | CRAM cross-section [cm²/bit] | BRAM cross-section [cm²/bit] | PS cross-section [cm²/device] |
|---|---|---|---|---|---|
| [10] | p (64 MeV) | 1.00E+11 | 3.30E-16 | 1.10E-15 | 6.60E-11 |
| [10] | n (≥10 MeV) | 1.00E+11 | 3.40E-16 | 1.10E-15 | 5.40E-11 |
| [11] | n (≥10 MeV) | - | 2.67E-16 | 8.82E-16 | - |
| [12] | n (≥10 MeV) | 1.00E+10 | 1.10E-16 | 4.10E-16 | - |
| [13] | n (≥10 MeV) | 3.00E+11* | 2.52E-16 | 3.02E-15 | See * and * |

### _Radiation test facility_ We performed the radiation tests at ChipIr at the Rutherford Appleton Laboratory in Oxfordshire, UK. ChipIr is designed to deliver a neutron spectrum as similar as possible to the atmospheric one to test radiation effects on electronic components and devices [9, 25].
The ISIS accelerator provides a proton beam of 800 MeV at 40 \(\mu\)A at a frequency of 10 Hz, impinging on the tungsten target of its target station 2, where ChipIr is located. The spallation neutrons produced illuminate a secondary scatterer, which optimises the atmospheric-like neutron spectrum arriving at ChipIr with an acceleration factor of up to \(10^{9}\) for ground-level applications. With a frequency of 10 Hz, the beam pulses consist of two 70 ns wide bunches separated by 360 ns. The beam fluence at the position of the target device was continuously monitored by a silicon diode, while the average flux of neutrons above 10 MeV during the experimental campaign was 5.6E+6 neutrons/cm\({}^{2}\)/second. The beam size was set through the two sets of the ChipIr jaws to 7 cm x 7 cm. Irradiation was performed at room temperature. Fig. 1 depicts the target boards we irradiated at ChipIr. The cross-section calculations in this work assume a Poisson distribution of the NSEUs, a confidence level of 95%, and 10% uncertainty on the measured fluence. ### _Target boards_ We conducted the radiation experiments on two AMD ZCU102 evaluation boards (revision 1.1), each hosting the XCZU9EG chip. One board was modified to disconnect a few onboard switching voltage regulators and power the board with an external multichannel Power Supply Unit (PSU). We modified the board to protect it from Single Event Latch-ups (SELs) that cause radiation-induced high-current events. The second board was used _out-of-the-box_ for the complex tests. In other words, it was not modified. _Modified ZCU102 board:_ Previous neutron radiation experiments on a ZCU102 board (revision - engineering sample 1) showed that some onboard voltage regulators are vulnerable to high-current events [26]. To protect the board from these anticipated events, we adopted the solution of David S. Lee et al. [26]. Specifically, we 1) removed all onboard voltage regulators for the 3.3V (VCC3v3, UTIL_3V3), 0.85V (VCCBRAM, VCCINT, VCCPSINTFP, VCCPSINTLP), 1.2V (DDR4_DIMM_VDDQ) and 1.8V (VCCAUX, VCCOPS) power rails and 2) provided voltage to the mentioned power rails via a multichannel PSU. A Python script running on a PC (see Control-PC in Fig. 2) monitored the current drawn from each PSU channel to power cycle (i.e., turn off and on) the board during high-current events. Fig. 1(a) shows the ZCU102 board with its voltage rails (0.85V, 1V2, 1V8 and 3V3) powered by an external PSU. _Out-of-the-box ZCU102 board:_ During the preparation of the tests, before the radiation experiments, we observed that the modified board often crashed during the boot time of the Linux OS (i.e., for testing the LFRic, SVO and AMD DPU applications). The crashes were caused by voltage droops due to an instantaneous (fast) increase of the current at the 0.85V and 1.2V power rails when the Linux kernel was performing the initialisation of the PS DDR memory. Our external PSU setup could not sustain a stable 0.85V and 1.2V power supply during these current spikes. To overcome the mentioned problem, we ran the Linux-based applications (i.e., complex tests) on the _out-of-the-box_ board. We used the PMBUS Maxim Integrated PowerTool as suggested by [26] to detect SELs. Please note that, depending on the target IC, a SEL can cause a rapid increase in the current of a power rail that is difficult to detect in time in order to power off the device before it is damaged. However, as shown in [26], the rate at which current increases in the XCZU9EG power rails during an SEL is slow.
This gives plenty of time (commonly a few minutes) to detect and recover a high-current event by power cycling the target board. Although detecting and recovering a high-current event is faster with an external PSU, the experience we gained from these experiments indicates that the PMBUS Maxim Integrated PowerTool is a sufficient solution to protect the board. Fig. 1(b) shows the unmodified ZCU102 board we used for the complex tests. ## IV Basic Tests This section presents the experimental methodology and results of all basic tests. The objectives of these tests are the following: 1) characterise the NSEU and SEFI static cross-sections of all PL memories using synthetic HW benchmarks and 2) evaluate the dynamic SDC and crash cross-sections of several SW-only single-threaded baremetal applications running on the APU. Fig. 1: Neutron beam experiment at the ChipIr facility of RAL, UK. Fig. 1(a) shows the modified ZCU102 board with its voltage rails (0.85V, 1V2, 1V8 and 3V3) powered by an external multichannel power supply unit. Fig. 1(b) illustrates the _out-of-the-box_ ZCU102 board, which uses its onboard voltage regulators. ### Experimental setup and overview for all basic tests: Fig. 2 presents the setup for the basic tests, which are conducted on the modified ZCU102 board (see section III-C). Specifically, a computer, namely the Control-PC, is located in the control room and orchestrates the tests by performing the following tasks: * Configures, controls and monitors the execution of benchmarks on the target board. * Resets the board during benchmark timeouts (i.e., radiation-induced events that make the device unresponsive) by electrically shorting the board's SRTS_B and POR_B reset buttons via a USB-controlled relay. * Monitors an Ethernet-interfaced multichannel PSU to power cycle the board during, if any, high-current events. Note that all USB connections are transferred from the beam room to the control room via an Ethernet-based USB extender. ### _HW-only PL synthetic benchmark tests_ #### Benchmark details We performed the PL tests on a highly utilised and densely routed design, which instantiates all slice, Block-RAM (BRAM), and Digital Signal Processor (DSP) primitives of the XCZU9EG device. The design has the following characteristics: * All PL slices are combined into multiple long register chain structures. In detail, the LUTs of SLICEL and SLICEM tiles are configured as route-through and 32-bit Shift Register LUT (SRL), respectively. The LUT outputs of all PL slices are connected with their corresponding slice Flip-Flops (FFs) to form long register chains. Each SRL in the device is initialised with predefined bit patterns. * All BRAMs are cascaded through their dedicated data bus horizontally (i.e., raw) or vertically (i.e., column) and initialised with address-related bit patterns. * Clock and clock-enable signals of all BRAM are set to '0' (i.e., disabled) to reduce the likelihood of BRAM upsets caused by Single Event Transients (SETs) on the clock tree and BRAM data bus signals of the device. We aim to reduce transient upsets since we focus on characterising the NSEU and SEFI cross-section of the device. * All DSP primitives are connected in cascade mode and configured to implement Multiply and Accumulate (MAC) operations. Detailed information for the tested synthetic benchmark can be found in our previous work [21], where we used the same benchmark to characterise the PL memories of an AMD Zynq-7000 device under heavy ions. 
#### Testing procedure The Control-PC downloads via JTAG the bitstream of the PL synthetic benchmark into the XCZU9EG device. In turn, it performs readback capture via JTAG [27] 50 consecutive times, each time logging the state of all CRAM and Application RAM (ARAM) (e.g., FFs and BRAM contents) bits of the device in a readback file. This test procedure cycle (i.e., one device configuration and 50 readbacks) is continuously performed until the end of the test. In case of an unrecoverable error, the Control-PC performs the following tasks: 1) power cycles the ZCU102 board via the Ethernet-controlled PSU, 2) reconfigures the device and 3) continues readback capture from where it left off before the radiation-induced event occurred. All events that make the XCZU9EG device unresponsive are classified as unrecoverable. For example, a radiation-induced upset in the JTAG circuitry of the target device may result in a connection loss and make the device unresponsive to all JTAG queries made by the Control-PC.

Fig. 2: Experimental setup to collect results for the basic tests, i.e., the NSEU and SEFI static cross-sections of all PL memories, and the SDC dynamic cross-sections of several single-threaded baremetal benchmarks running on the APU.

We should make the following notes for the testing procedure of the PL synthetic benchmark: * Accumulated upsets are cleared in the device on average every 1400 seconds, i.e., by downloading the bitstream into the device after 50 continuous readbacks, which last _50 readbacks \(\times\) 28 seconds per readback = 1400 seconds_. * All JTAG transactions with the target device are performed by our open-source FREtZ tool [28, 29]. FREtZ provides a rich set of high-level Python APIs and application examples to read back, verify and manipulate the bitstream and the device state of all AMD 7-series and UltraScale/UltraScale+ MPSoC/FPGAs. Specifically, FREtZ increases the productivity of performing fault-injection and radiation experiments by hiding the low-level Vivado TCL/JTAG commands that are executed behind the scenes to access the PS and PL memories of the target device. * The results of the basic tests are obtained by post-analysis of the collected data (i.e., readback files). Each readback file consists of 1) configuration bits that specify the functionality of the design and device, 2) flip-flop and slice LUTRAM contents, and 3) BRAM contents. Configuration bits are static bits because they do not change during circuit operation, while the flip-flop, LUTRAM and BRAM contents are dynamic bits, i.e., they change during circuit operation, assuming a clock provision. The AMD Vivado design suite produces a mask file during bitstream generation that FREtZ applies on each readback file to distinguish the static from the dynamic bits when analysing our experimental data and results. _Results - NSEU cross-section of the PL memories:_ Table II shows the neutron static cross-section and the number of SEFI occurrences of the target device. Each PL memory type (CRAM, BRAM and SRL) was exposed to radiation for approximately six hours with 5.6E+6 neutrons/cm\({}^{2}\)/second flux, thus accumulating 1.2E+11 neutrons/cm\({}^{2}\) fluence on average (see 2\({}^{\text{nd}}\) column of the table). The 1.2E+11 fluence is equivalent to exposing the device to the radiation environment of NYC at sea level for more than 1.3 million hours.
In detail, the 3\({}^{\text{rd}}\) column of the table shows the number of upsets for each memory type, while 4\({}^{\text{th}}\) and 5\({}^{\text{th}}\) columns illustrate the cross-section per device and bit, respectively. The CRAM static cross-section that we measured (1.84E-16 cm\({}^{2}\)/bit) is in the range 1.10 E-16 cm\({}^{2}\)/bit - 3.40 E-16 cm\({}^{2}\)/bit as reported in previous studies and summarised in Table I. The cross-section of BRAM and SRL per cm\({}^{2}\) per bit is one order of magnitude higher than CRAM, which matches with the findings of AMD [10] and BYU [13]. The last column of Table II shows the number of SEFIs per memory type, which is analysed in the following paragraphs. _Results - SBU, MBU and MCU events in the PL memories:_ We adopted the statistical analysis approach of [30] to distinguish NSEUs that caused Single-Bit Upsets (SBUs), Multi-Bit Upsets (MBUs) and Multi-Cell Upsets (MCUs). JEDEC refers to MBUs as multiple upsets occurring in one configuration frame and MCUs expanding in one or more (usually neighbouring) configuration frames [4]. In general, recovering MBUs with classic Error Correction Code (ECC) based CRAM scrubbing [31] is challenging because each configuration frame of the XCZU9EG embeds ECC information that can only support the correction of an SBU. However, ECC scrubbing can successfully correct MCUs (i.e., multiple SBUs in different configuration frames). Table III presents the percentage of NSEUs that caused an SBU or an MCU, as well as their shapes (i.e., upset patterns). The x-axis of the shapes represents consecutive frames (i.e., frames with consecutive logical addresses), while the y-axis represents consecutive bits in a frame. Our results show that approximately 96% of NSEUs resulted in SBUs and the remaining 4% in MCUs. The MCUs appear in five shapes as shown in Table III and extend from 2 to 8 frames, while the bit multiplicity reaches up to 3 bits. Finally, we did not observe any MBU, which can be justified by the memory interleaving features of UltraScale/+ MPSoC devices. This is to say, memory cells belonging to the same logically addressed frame are physically separated, thus mitigating MBUs commonly caused in neighbouring physical cells. The NSEU shape results suggest that SECDED scrubbing is an adequate CRAM error recovery mechanism for XCZU9EG MPSoCs used in terrestrial applications since no MBUs were observed during our accelerated radiation tests. _Results - SEFIs in the PL memories:_ As shown in Table II we observed two SEFIs during the basic PL tests; _BRAM SEFI:_ The SEFI exhibited as a multi-bit upset affecting almost all the words of a BRAM. Specifically, all the even-numbered addresses (i.e., 0, 2,..., 1022) of a 36Kb BRAM (i.e., 1024 \(\times\) (32 data bits + 4 parity bits)) were written with the predefined value of the \(1022^{nd}\) word due to the SEFI, while all the odd-numbered addresses (i.e., 1, 3,..., 1023) were written with the value of the \(1023^{rd}\) word. This BRAM SEFI resulted in 10.5 kb (instead of 36 kb) upsets since many memory addresses were written with their initial value, i.e., the upsets were logically masked. We excluded the upsets caused by the SEFI when calculating the NSEU cross-section of the BRAMs in Table II. _SRL SEFI:_ We found that a SET on the clock signal in one CLB slice of an SRL caused the SEFI. 
Specifically, all the 256 SRL bits located in the eight LUTMs of the same slice (each SLICEM consists of eight 32-bit SRLs, and each SRL occupies a 64-bit LUTM in a master/slave arrangement) were corrupted by the SET on their clock signal. Similarly to the BRAM SEFI, the upsets caused by the SRL SEFI are removed from the NSEU cross-section calculations in Table II. _Results - High-current events in the MPSoC:_ During the PL tests we observed two high-current events; one occurred at the 1.8V power rail of the MPSoC and one at the 3.3V. The high-current events were successfully recovered by power cycling the device. We did not detect any high-current event during the SW-only single-threaded baremetal benchmark basic tests or any of the complex tests. The results of SEFIs and high-current events show that the probability of such phenomena is extremely low; the device may experience, on average, a BRAM SEFI, an SRL SEFI or two high-current events after 1.3 million hours, assuming operation in NYC at sea level, i.e., the equivalent time of natural neutron exposure in NYC needed to achieve the fluence of the accelerated radiation tests.

TABLE II: NSEU cross-section of the PL memories

| Type | Fluence [n/cm²] | NSEU upsets | Cross-section per device [cm²] | Cross-section per bit [cm²/bit] | SEFIs |
|---|---|---|---|---|---|
| CRAM | 1.20E+11 | 2,417 | 2.01E-08 | 1.84E-16 | 0 |
| BRAM | 1.20E+11 | 10,118 | 8.42E-08 | 1.21E-15 | 1 |
| SRL | 1.20E+11 | 1,462 | 1.22E-08 | 1.32E-15 | 1 |

TABLE III: NSEU shapes in the CRAM (the MCU shape diagrams of the original table are graphical and are not reproduced here)

| SBUs [%] | MCU shape 1 [%] | MCU shape 2 [%] | MCU shape 3 [%] | MCU shape 4 [%] | MCU shape 5 [%] |
|---|---|---|---|---|---|
| 93.80 | 4.07 | 0.84 | 0.57 | 0.35 | 0.09 |

### _SW-only single-threaded baremetal benchmarks basic tests_ _Benchmark details:_ We executed the following six embedded microprocessor benchmark kernels used in many real-world applications: CRC32, FFT, Qsort, BasicMath, SHA, and MatrixMul. All benchmarks were sourced from the MiBench suite [32], except MatrixMul, which was developed in-house. The MiBench programs were adapted to run on the ARM CPU as baremetal single-threaded applications. We selected or modified the benchmarks' input data sets to compose programs with different memory footprints, i.e., different data memory segment lengths. In this way, we were able to evaluate the impact per cache level on the SDC and crash rates under different cache utilisation conditions. The memory footprints of the benchmarks are shown in Table IV. The data segment includes global and static variables, while Read Only (RO) data includes constant data. One note should be made for the data segment usage of the SHA and MatrixMul benchmarks; the SHA and MatrixMul benchmarks have been developed as functions and do not use global and static variables as the other benchmarks do. Therefore, all computations for SHA and MatrixMul are performed in local variables. The data segments (stored temporarily in the stack) of the SHA and MatrixMul benchmarks are less than 32 KB and are not reported in Table IV.
In summary, the benchmarks have the following characteristics: * The data segments of the FFT, BasicMath, SHA and MatrixMul fit into the L1 data cache (32 KB) of the APU core. Thus cache conflict misses are unlikely to happen. * The data segment of Qsort does not fit into the L1 data cache (32 KB), but it does fit into the L2 cache (1 MB); this means that during the execution of QSort, several conflict cache misses and thus cache replacements may occur in the L1 cache but not in the L2 cache. * The data segment of CRC32 does not fit into the L2 cache; this means that during the execution of CRC32, several replacements in L2 may occur. _Testing procedure:_ The Control-PC shown in Fig. 2 communicates with the PS through the PL JTAG interface. The PS stores the benchmark output results in the PS DDR memory, and the Control-PC collects the results through the JTAG interface. In more detail, a JTAG-to-AXI bridge is instantiated into the PL to access the DDR memory through a high-performance AXI port. The Control-PC uses the same JTAG-to-AXI bridge interface to configure the PS and initiate the execution of the benchmarks. To guard these auxiliary components (e.g., JTAG-to-AXI bridge) against radiation-induced errors during the tests: 1) we instantiated the AMD SEM IP core [19] to correct CRAM upsets, and 2) triplicated all components (including the SEM IP) in the PL with Synopsis Symplify Premier [33]. _Results - SDC and crash cross-sections of the SW-only single-threaded baremetal benchmark basic tests:_ Table V shows the estimated SDC cross-sections of the single-threaded baremetal benchmarks. Each benchmark ran more than 67k times, resulting in 3 hours of irradiation time per benchmark. The total beam time and fluence for all benchmarks were 18 hours and 6.12E+10 \(n/cm^{2}\), respectively. Please note that we discarded the overhead time required to configure and initialise the MPSoC and collect the results from the DDR memory. As expected, all benchmarks with a small memory footprint have either zero (see FFT, BasicMath, MatrixMul) or very low (see SHA) dynamic cross-sections. In contrast, the benchmarks with a large memory footprint (see QSort, CRC32) have the highest cross-section. We observe that Qsort is more vulnerable to SDCs than CRC32 despite its lower data segment size. This can be explained by the higher residence time of its data in the L2 cache. The data segment of Qsort fits in the 1 MB L2 cache of the APU and thus is not updated frequently from the off-chip DDR memory during execution, as done in the case of the CRC32 benchmark. In contrast to the results of [13], we report on average one order of magnitude higher dynamic cross-section for the single-threaded baremetal benchmarks; we tested the MPSoC on a broader range of benchmarks than [13], which exercised the APU caches in a more exhaustive way, thus revealing more errors. As mentioned, the authors in [13] did not observe any SDC or crash but assumed one single error when calculating the dynamic cross-sections of single-threaded baremetal benchmarks running on the APU. However, we did not observe any processor crash, i.e. our findings in regards to the crash dynamic cross-section of the APU are the same as in [13]. ## V Complex Tests This section presents the experimental methodology and results of the complex tests. These tests include two SW-only multi-threaded applications and one HW-SW co-design executing a CNN model, all running on top of the Linux OS. 
_Experimental setup:_ The setup of the complex tests is the same as for the basic tests (see Fig. 2). However, the target board is not modified but instead powered by its onboard voltage regulators. In other words, we used the _out-of-the-box_ board (see Sec. III-C) for the complex tests. _Testing procedure:_ The Control-PC runs in-house developed software, namely the Experiment Control Software (ECS), to orchestrate the test procedure of the target benchmarks through TCP/IP Ethernet. The ECS software coordinates the tests of the applications via a shared Network File System (NFS) folder as follows: 1) the ECS initially resets the board and waits for it to boot, 2) after a successful OS boot, a bash script running on the MPSoC, namely the run.sh, executes the following sub-tasks: 3a) connects to the shared NFS folder located on the Control-PC, 3b) updates a sync.log file in the NFS folder to notify the ECS of a successful OS boot, 3c) executes an initial run of the target benchmark to warm up the CPU caches, 3d) notifies the ECS software via the sync.log file that it is ready to start running the benchmark, 3e) enters an infinite loop where it continuously runs the benchmark and stores the results in the NFS folder to be checked by the ECS. The execution and result checking (i.e., by the ECS) of each benchmark is synchronised with the ECS via a shared mutex.log file stored in the NFS folder. The ECS resets the board when it detects: 1) a boot timeout, 2) a critical error (classifying an error as critical depends on the benchmark characteristics, as shown in the next section), or 3) a result query timeout. It is worth noting that for each benchmark execution, the run.sh script saves the Linux dmesg.log of the target board for post-analysis to identify system-level errors, such as L1 and L2 cache errors (see section V-B).

TABLE IV: CPU benchmarks – Memory footprints

| Benchmark | Code Segment | RO Data | Data Segment |
|---|---|---|---|
| FFT | 2.81 KB | 0.20 KB | 2.09 KB |
| SHA | 2.14 KB | 2.32 KB | 0.00 KB |
| BasicMath | 2.74 KB | 0.10 KB | 6.09 KB |
| MatrixMul | 0.77 KB | 23.74 KB | 0.00 KB |
| Qsort | 0.25 KB | 512.00 KB | 156.25 KB |
| CRC32 | 0.57 KB | 0.00 KB | 2675.56 KB |

TABLE V: CPU benchmarks – SDC cross-sections

| Benchmark | Execution time (s) | Fluence (n/cm²) | Total runs | SDCs | SDC cross-section |
|---|---|---|---|---|---|
| FFT | 1,227.95 | 6.96E+09 | 67,509 | 0 | - |
| SHA | 1,239.14 | 7.02E+09 | 67,787 | 2 | 2.85E-10 |
| BasicMath | 1,266.74 | 7.18E+09 | 67,940 | 0 | - |
| MatrixMul | 1,556.26 | 8.82E+09 | 67,406 | 0 | - |
| Qsort | 1,237.92 | 7.01E+09 | 67,487 | 38 | 5.42E-09 |
| CRC32 | 4,269.89 | 2.42E+10 | 67,572 | 18 | 7.44E-10 |
| Total | 10,797.90 | 6.12E+10 | 407,701 | 58 | 9.48E-10 |

### _SW-only multi-threaded applications running under Linux OS_ _Benchmark details:_ We tested two SW-only multi-threaded applications, namely the LFRic [22] and the SVO [23], both running on top of the 4.19 Linux kernel, which was configured and compiled with PetaLinux 2019.2. The LFRic is a weather and climate model and one of the H2020 EuroEXA project ([http://euroexa.eu](http://euroexa.eu)) target applications being developed by the UK's Met Office and its partners [22].
Much of the LFRic model's runtime consists of compute-intensive operations suitable for acceleration using FPGAs. The LFRic weather and climate model is based on the GungHo dynamical core with its PSyclone software technology [34]. In our experiments, we exploited an essential computation kernel among the entire LFRic code, the matrix-vector product, to assess the overall dependability (i.e., dynamic cross-section) of the MPSoC. Specifically, this kernel supports 40-bit double-precision floating-point matrix-vector multiplications with an \(8\times 6\) matrix and contributes significantly to the execution time of the Helmholtz solver that is used to compute atmospheric pressure [22]. The SVO (Semi-direct Monocular Visual Odometry) processes raw data captured from visual sensors (e.g., camera) and conducts a probabilistic state estimation [23]. In particular, in the probabilistic state estimation, the algorithm calculates the camera's pose (i.e., motion estimation) and maps it to the surrounding, unknown environment (i.e. mapping). Both operations, the motion estimation and mapping are executed in parallel. SVO is used in many applications such as robotics and automotive applications to implement algorithms involving tasks like ego-motion or pose estimation of objects [23]. _Results - Error cross-sections of the SW-only multi-threaded applications:_ Table VI summarises the experimental results of the SW-only multi-threaded Linux-based benchmarks, which were collected during an 11-hour beam session. We categorise radiation-induced errors as crashes and SDCs. Crashes are further classified into _soft-persistent_ and _recoverable_ errors. Soft-persistent errors require several resets or a device power cycle to bring the MPSoC to a functional state. Recoverable errors require only one device reset to regain functionality. Similarly, SDC errors are classified into critical and tolerable as done in [35]. Critical errors lead to a result out of application specifications. Tolerable errors do not affect the final application result. Opposite to [13], which did not identify any SDC or processor hang (i.e., crash) when the APU was running multithreaded Linux-based benchmarks, our results showed that the MPSoC can experience radiation-induced errors. In detail, 5.11% and 7.46% of the total runs resulted in a crash for LFRic and SVO, respectively. From the total crashes of LFRic, 23% were soft-persistent, and 77% were recoverable. For SVO, 29% were soft-persistent and the remaining recoverable. Regarding SDC errors, 0.39% and 2.86% of the total LFRic and SVO runs resulted in SDCs, respectively. However, our findings show that all SDCs of the SVO were tolerable and did not affect the correctness of the final application result. This can be justified by the inherent error resilience nature of computer vision algorithms like SVO, which commonly tolerate most SDCs. In other words, most SDCs cause a small deviation from the ground truth and, therefore, can be ignored. Fig. 3 shows the absolute trajectory error of an SVO run under a tolerable SDC error. Although the result (i.e., estimated trajectory) deviated from the ground truth, it did not impact the in-field operation of SVO. On the contrary, all SDCs for the LFRic application affected its final result and therefore were classified as critical. Commonly, the algorithmic nature of LFRic cannot tolerate any SDC. 
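To illustrate how such a tolerable/critical split can be automated in post-analysis, the minimal sketch below classifies one run by its deviation from a golden reference. The 5% relative-error threshold and the array-based result format are assumptions made for this illustration; they are not the actual criteria implemented in the ECS.

```python
import numpy as np

def classify_run(result, golden, rel_tol=0.05):
    """Classify one benchmark run against a golden reference.

    rel_tol is an assumed illustrative threshold: deviations below it are
    treated as tolerable SDCs, larger ones as critical SDCs.
    """
    result = np.asarray(result, dtype=float)
    golden = np.asarray(golden, dtype=float)
    if np.array_equal(result, golden):
        return "correct"
    rel_err = np.max(np.abs(result - golden) / (np.abs(golden) + 1e-12))
    return "tolerable SDC" if rel_err < rel_tol else "critical SDC"

# Example: a small deviation from a ground-truth trajectory counts as tolerable
print(classify_run([1.00, 2.01, 2.99], [1.0, 2.0, 3.0]))
```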
### _SW/HW multi-threaded co-design application running under Linux OS_ This section includes results for the SW/HW co-design DPU from our previous study [36]. We extend the study by providing the dynamic cross-section of crashes (i.e., hangs) as well as the MTTF (see section VI) of the DPU application for different environments and device deployments. _Benchmark details:_ We tested the MPSoC when running the resnet50 image classification CNN model on the SW/HW Vitis AI DPU co-design. AMD has introduced a rich ecosystem of tools and IP accelerator cores to ease the development of AI applications. In more detail, AMD provides the Vitis AI development environment that encompasses 1) AI frameworks (e.g., Tensorflow), 2) pre-optimised AI models, 3) quantisation and model compression tools, and 4) the DPU with all necessary Linux drivers to seamlessly deploy a CNN application on AMD Zynq-7000 SoC and Zynq UltraScale+ MPSoC devices [37]. The DPU accelerator is implemented with PL and is tightly interconnected via AXI interfaces to the PS, as shown in Fig. 4. The DPU executes special instructions that are generated by the Vitis AI compiler. A typical Vitis AI development flow involves 1) the optimisation and compilation of a CNN model to DPU instructions and 2) the compilation of software running on the APU. The APU pre- and post-processes DNN data, controls the DPU, and orchestrates the movement of instructions and data between the DPU, the APU and the off-chip DDR memory. The DPU consists of an instruction scheduler and up to three on-chip BRAM buffers and computing engines. The instruction scheduler fetches and decodes DPU instructions from off-chip memory and controls the on-chip memories and computing engines. The DPU is available in eight architecture configurations, i.e., B512, B800, B1024, B1152, B1600, B2304, B3136, and B4096. Each configuration utilises a different number of computing engines and on-chip memories to target different-size devices and support various DPU functionalities, e.g., ReLU, RELU6, or Leaky-ReLU. We implemented the Vivado DPU targeted reference design (TRD) [24] provided by Vitis AI v1.3.1 with Vivado 2020.2 for our target board (i.e., ZCU102). The DPU was synthesised with default settings, i.e., B4096 convolution architecture with RAM_USAGE_LOW, CHANNEL_AUGMENTATION_ENABLE, DWCV_ENABLE, POOL_AVG_ENABLE, RELU_LEAKYRELU_RELU6, and Softmax. The parallelism of the DPU can be defined in three dimensions, input channel parallelism (ICP), output channel parallelism (OCP), and pixel parallelism (PP). The B4096 architecture has ICP and OCP equal to 16, PP equal to 8, and can achieve up to 4096 operations per clock cycle (a quick check of this figure is sketched after this paragraph). The RAM_USAGE_LOW configuration utilises 257 BRAM36 primitives for buffering weights, bias and intermediate features. Channel augmentation improves the DPU utilisation when the number of input channels is much lower than the available channel parallelism.
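As noted above, the 4096 operations per clock cycle of the B4096 configuration follow directly from the three parallelism dimensions, assuming each multiply-accumulate counts as two operations (a common convention, assumed here rather than taken from [37]):

```python
# Peak operations per clock cycle of the DPU B4096 configuration.
ICP = 16          # input channel parallelism
OCP = 16          # output channel parallelism
PP = 8            # pixel parallelism
OPS_PER_MAC = 2   # assumption: one multiply plus one accumulate per MAC

peak_ops_per_cycle = ICP * OCP * PP * OPS_PER_MAC
assert peak_ops_per_cycle == 4096
```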
DepthwiseConv (DWCV), AveragePool, LEAKYRELU and RELU6 are standard CNN operations that are described in [37]. The design was implemented with Vivado's Performance_ExplorePostRoutePhysOpt run strategy because Vivado's default run strategy resulted in timing violations for the default operating frequencies of the implemented TRD. Table VII shows the resource utilisation and operating frequency of the DPU TRD. Vivado reported that 41.45% (i.e., 59,281,993 bits) of the device's configuration bits were essential. Please recall that _essential bits_ are configuration bits that, when corrupted, can potentially cause functional errors in the application. Two important notes can be made for Table VII. First, all resources in the DPU operate at 325 MHz except for the DSPs, which run at 2 \(\times\) 325 MHz = 650 MHz. This is because the DPU design applies a double data rate technique on DSP resources. Since DSPs can operate at a much higher frequency than other PL resources, one can perform \(N\) times more computation by running the DSPs with \(N\) times the frequency of the surrounding logic while multiplexing and demultiplexing their input and output data, respectively. \begin{table} \begin{tabular}{l c c c c} \hline Resource & Used & Available & Utilisation & Frequency \\ \hline LUT & 108,208 & 274,080 & 39.48 \% & 325 MHz \\ LUTRAM & 11,960 & 144,000 & 8.31 \% & 325 MHz \\ FF & 203,901 & 548,160 & 37.20 \% & 325 MHz \\ BRAM & 522 & 912 & 57.24 \% & 325 MHz \\ DSP & 1,395 & 2,520 & 55.36 \% & 650 MHz \\ IO & 7 & 328 & 2.13 \% & 325 MHz \\ BUFG & 6 & 404 & 1.49 \% & 325 MHz \\ MMCM & 1 & 4 & 25.00 \% & 325 MHz \\ PLL & 1 & 8 & 12.50 \% & 325 MHz \\ APU & 1 & 1 & 100.00 \% & 1200 MHz \\ DDR ctrl. & 1 & 1 & 100.00 \% & 533 MHz \\ \hline \end{tabular} \end{table} TABLE VII: Resource utilisation and operating frequency of the DPU SW/HW co-design application Fig. 4: Deep-learning acceleration with the AMD Deep Processing Unit (DPU) on Zynq®-7000 SoC and Zynq® UltraScale+™ MPSoC devices. Fig. 3: 2D representation of the absolute trajectory error of an SVO run. Second, the design utilises 319 more LUT, 55 more LUTRAM, 405 more FF, 4 more BRAM and 1 more DSP primitives than the baseline TRD design. This is because we included the AMD SEM IP in the design to perform fault injection and validate our experimental setup before the radiation experiments. However, we turned scrubbing off (configured the SEM IP to IDLE mode) during beamtime to allow the DPU to accumulate at least one CRAM upset per image classification. Otherwise, the DPU would have performed almost all classifications without a CRAM upset. The SEM IP, operating at 200 MHz, would have recovered CRAM upsets much faster (1,700 upsets per minute) than they occurred (8 upsets per minute, estimated for the 5.6E+6 neutrons/cm\({}^{2}\)/second neutron flux at the ChipIR facilities). Instead of scrubbing the device, all CRAM upsets were recovered by a device reset when the DPU reported a tolerable or non-tolerable error or a crash (i.e., timeout). We used Petalinux 2020.2 to generate a Linux OS image for the ZCU102 by using the default Board Support Package (BSP) provided by the DPU-TRD, except 1) the nfs_utils package, which was additionally enabled to mount an NFS folder on Linux, and 2) the u-boot bootloader configuration, which mounted an EXT4 file system on an SD card instead of an INITRD RAM disk on the DDR memory. The CNN application that ran on the DPU was the 8-bit quantised, non-pruned resnet50.xmodel, provided by the Vitis AI TRD.
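The 8 upsets per minute quoted above can be approximated as flux × per-bit cross-section × number of CRAM bits. The sketch below uses the CRAM cross-section reported later in this article and a total CRAM size back-calculated from the 41.45% essential-bit figure; both the back-calculation and the rounding are our assumptions, so this is an order-of-magnitude check only.

```python
# Rough estimate of the CRAM upset rate during the ChipIR beam exposure.
flux = 5.6e6                  # neutrons/cm^2/s at ChipIR (from the text)
sigma_cram = 1.84e-16         # cm^2/bit, measured CRAM static cross-section
essential_bits = 59_281_993   # 41.45 % of the device's configuration bits
total_cram_bits = essential_bits / 0.4145  # assumption: back-calculated total

upsets_per_second = flux * sigma_cram * total_cram_bits
print(f"{upsets_per_second * 60:.1f} CRAM upsets per minute")  # ~8-9 per minute
```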
_Results - Neutron error (SDC and crash) cross-sections of AMD Vitis DPU running image classification:_ Table VIII shows the dynamic cross-section of the DPU running the resnet50 image classification CNN for a total fluence of 5.5x10\({}^{10}\) neutrons/cm\({}^{2}\) during a 3-hour radiation test session. \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline & \multicolumn{2}{c|}{Classification runs} & Cross-section & \multicolumn{2}{c|}{Conf. Level 95\%} \\ & \# & \% & (cm\({}^{2}\)) & Lower & Upper \\ \hline \hline Correct runs & 2964 & 49.52\% & & & \\ Crashes & 89 & 1.49\% & 1.60E-09 & 1.26E-09 & 2.02E-09 \\ Critical (C) & 46 & 0.77\% & 8.29E-10 & 6.07E-10 & 1.11E-09 \\ Tolerable (T) & 2886 & 48.22\% & 5.20E-08 & 5.01E-08 & 5.39E-08 \\ C+T errors & 2932 & 49.99\% & 5.28E-08 & 5.09E-08 & 5.48E-08 \\ \hline \end{tabular} \end{table} TABLE VIII: Neutron SDC cross-section of AMD Vitis DPU running image classification The DPU accelerator performed 5985 classification runs in total, from which 50% of the runs resulted in an SDC, 1.5% in a crash, and 49.5% were correct. Only 1.57% of the total SDCs resulted in image misclassification or, in other words, were critical. The experimental results show reliable operation of the DPU even though it did not incorporate any soft error masking scheme in its PL logic like triple modular redundancy (TMR) [38] or ECC in its utilised BRAMs [39]. However, the dynamic cross-section of the DPU is not only affected by soft errors in its PL part but also by errors in the APU. As mentioned, the DPU is an SW/HW co-design, which means that both the APU and PL logic should cooperate in a reliable manner to successfully classify an image when running the resnet50 model. In the following, we measure the effectiveness of all soft-error mitigation schemes embedded in the APU to cope with upsets in the L1 and L2 caches of the processor. _Results - MPSoC APU L1 and L2 cache cross-section when running image classification with the AMD Vitis DPU:_ We post-processed the Linux dmesg.log files captured during the AMD DPU tests to analyse the NSEUs observed in the MPSoC APU caches. We report the cross-sections of the Level-1 Data (L1-D) and Instruction (L1-I) caches, Translation Lookaside Buffer (TLB), Snoop Control Unit (SCU), and Level-2 cache. Moreover, the upsets in the data and tag arrays in both the L1 and L2 caches have been separately identified. In detail, Table IX shows the dynamic cross-sections of the 32 KB L1-D cache, the 32 KB L1-I cache, and the TLB - a two-level TLB with 512 entries that handles all translation table operations of the APU. Table X presents the cross-sections of the 1 MB Level-2 cache (L2) and the SCU. The SCU has duplicate copies of the L1 data-cache tags. It connects the APU cores with the device's accelerator coherency port (ACP) to enable hardware accelerators in the PL to issue coherent accesses to the L1 memory space. The cross-sections of the tag arrays have been calculated based on the tag sizes of the caches, e.g., a 16-bit tag in the 16-way set associative, 64-byte line, 1 MB L2 cache. As mentioned, the cross sections have been calculated for a total fluence of 5.55x10\({}^{10}\) neutrons/cm\({}^{2}\). The results show that the cross-sections of the tag arrays are slightly lower than those of the data arrays. The average cross-section calculations for all caches (i.e., L1 and L2) in the MPSoC are close to those reported by Jordan D. Anderson et al. in [13]. \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline & Size (bit) & Upsets & Cross-sec. (cm\({}^{2}\)/bit) & \multicolumn{2}{c|}{Conf. Level 95\%} \\ & & & & Lower & Upper \\ \hline \hline L1-D Data & 262,144 & 32 & 2.20E-15 & 1.50E-15 & 3.11E-15 \\ L1-D Tag & 155,648 & 3 & 3.47E-16 & 7.16E-17 & 1.02E-15 \\ L1-D Total & 417,792 & 35 & 1.51E-15 & 1.05E-15 & 2.10E-15 \\ L1-I Data & 262,144 & 25 & 1.72E-15 & 1.11E-15 & 2.54E-15 \\ L1-I Tag & 147,456 & 4 & 4.89E-16 & 1.33E-16 & 1.25E-15 \\ L1-I Total & 409,600 & 29 & 1.28E-15 & 8.54E-16 & 1.83E-15 \\ L1 TLB & 16,384 & 9 & 9.90E-15 & 4.53E-15 & 1.88E-14 \\ \hline \end{tabular} \end{table} TABLE IX: L1 Cache Cross-Section \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline & Size (bit) & Upsets & Cross-sec. (cm\({}^{2}\)/bit) & \multicolumn{2}{c|}{Conf. Level 95\%} \\ & & & & Lower & Upper \\ \hline \hline L2 Data & 8,388,608 & 293 & 6.29E-16 & 5.59E-16 & 7.06E-16 \\ L2 Tag & 4,194,304 & 20 & 8.59E-17 & 5.25E-17 & 1.33E-16 \\ L2 Total & 12,582,912 & 313 & 4.48E-16 & 4.00E-16 & 5.01E-16 \\ SCU & 155,648 & 4 & 4.63E-16 & 1.26E-16 & 1.19E-15 \\ \hline \end{tabular} \end{table} TABLE X: L2 Cache Cross-Section Fig. 5 presents the number of detected upsets per cache per APU core. The upsets in the L1 caches are balanced between the four cores, while in the L2 cache, more upsets were observed in the 3\({}^{rd}\) APU core of the MPSoC. We assume that the Linux OS utilised Core-3 more, and thus more cache upsets were detected for Core-3 in the L2 cache. The private L1-I caches are protected against NSEUs with parity checking (i.e., only error detection is supported), while the private L1-D caches and the shared L2 cache feature SECDED via ECC. However, we observed crashes and SDCs during image classifications with the DPU (and also in the SW-only basic and complex tests) despite the soft error mitigation mechanisms incorporated in the APU caches. We reason that the application errors occurred due to uncorrectable errors in the APU caches (e.g., double-bit errors within a memory word slice of the L1 or L2 caches protected by the same parity bits) or due to upsets in the configuration bits of the PL in case of the DPU. For example, SBUs in L1-D and L2 caches are successfully detected and corrected through SECDED mechanisms, while SBUs in L1-I caches are detected through parity checking and repaired by invalidating and reloading the cache. Similarly, double-bit upsets in L2 are detected by the SECDED scheme and corrected with cache invalidation to force a cache update from a lower memory hierarchy, e.g., DDR. However, if a double-bit error affects a "dirty" line of a write-back L1-D or L2 cache, its data is lost, resulting in data corruption. In case of double-bit upsets in the parity-protected L1-I caches, these cannot be detected. ## VI Assessing the reliability of the MPSoC In sections IV & V, we calculated the static and dynamic cross sections of the XCZU9EG in various scenarios under neutron accelerated radiation testing, e.g., when executing a simple SW-only baremetal single-threaded benchmark or a complex Linux-based SW/HW co-design application for image classification.
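Before projecting these measurements onto other environments, note that the per-bit values in Tables IX and X follow from the raw counts as upsets / (fluence × bits), and the 95% bounds are consistent with an exact Poisson confidence interval on the upset count. The sketch below reproduces the L2 data-array row under that assumption (the exact interval method actually used is not stated explicitly, so this is illustrative):

```python
from scipy.stats import chi2

FLUENCE = 5.55e10  # neutrons/cm^2, total fluence of the DPU test session

def per_bit_cross_section(upsets, bits, fluence=FLUENCE):
    """Per-bit cross-section and exact 95% Poisson (chi-square) bounds."""
    denom = fluence * bits
    lower = chi2.ppf(0.025, 2 * upsets) / 2 if upsets > 0 else 0.0
    upper = chi2.ppf(0.975, 2 * (upsets + 1)) / 2
    return upsets / denom, lower / denom, upper / denom

# Example: L2 data array (Table X): 293 upsets in 8,388,608 bits.
xs, lo, hi = per_bit_cross_section(293, 8_388_608)
print(f"{xs:.2e} cm^2/bit (95% CI {lo:.2e} - {hi:.2e})")  # ~6.29E-16 (5.6E-16 - 7.1E-16)
```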
In this section, we project the measured cross-sections of the XCZU9EG at different terrestrial radiation environments and device deployments and examine the reliability of the MPSoC-based computing system under the lens of the MTTU and MTTF dependability metrics as described in section II-B. Fig. 6 (a) shows the MTTU of the MPSoC's PL memories assuming 1) a computing system that uses one MPSoC and operates at NYC sea level (e.g., an automotive application), 2) the same system operating at 40k feet altitude (e.g., avionics), and 3) a system that uses 1k MPSoC devices and operates at the NYC sea level (e.g., a 1000 MPSoC node data centre). On average, the system consisting of one MPSoC and operating at sea level will experience a neutron-induced upset in the CRAM, BRAM or SRL memories of the device every 904 months (i.e., 75 years). However, the MTTU (i.e., upset rate) of the PL memories of the same system operating at 40k feet altitude drops to 1.81 months (i.e., a 500x reduction). On the other hand, a system consisting of 1k MPSoC computing nodes will collectively encounter one upset in PL memories every 0.9 months on average. The MTTU results show that fault-tolerance techniques such as configuration memory scrubbing and ECC in BRAMs should be considered in MPSoC systems that operate at high altitudes or on a large scale (i.e., data centres) to avoid the accumulation of upsets in their PL memories. Fig. 6 (b) illustrates the MTTU of the L1-D, L1-I and L2 caches of the MPSoC's APU when running the SW/HW DPU co-design. In other words, the cache upset rates of the APU were calculated by using the dynamic cross-section of caches in the DPU application. As expected, the MTTU of the APU caches is 26.5x higher than that of the PL memories due to their much smaller size. We calculated that the MTTU of caches in the one- and 1k-node(s) system could drop to 48 and 24 months, respectively, which points out that the parity and SECDED mechanisms of the APU are a necessary feature in the MPSoC, especially when used in large scale systems. Fig. 5: Detected cache upsets per APU Core. Fig. 6: (a) MTTU in PL memories measured for the simplex tests, (b) MTTU of the APU L1 data (L1-D), L1 instruction (L1-I) and L2 caches when running the DPU SW/HW co-design. The MTTU metrics have been calculated for a system with one MPSoC operating in NYC at sea level or 40k altitude and a system using 1000 MPSoCs in NYC at sea level. The effectiveness of these embedded soft-error mitigation mechanisms was evaluated in the previous sections, where we measured the dynamic cross-section of various MPSoC applications, i.e., reported the rate at which memory upsets could not be recovered, thus resulting in an SDC or processor crash. Our analysis shows that the MPSoC has a low upset rate in PL memories, and an even lower one in APU caches, when operating in a single node computing system in NYC at sea level, and that the rate increases in systems operating at high altitudes or on a large scale. In the following, we present the MTTF of MPSoC applications operating in a relatively high neutron flux to understand how an increased upset rate can affect reliability at the application level. In detail, Fig. 7 presents the MTTF of the MPSoC when running the SW-only multi-threaded applications (i.e., LFRic and SVO) and the SW/HW DPU co-design. The MTTF of all applications is calculated assuming operation in NYC at 40k feet altitude. However, the MTTF figures for operation at the sea level or for the 1000-node MPSoC system can be calculated by dividing and multiplying the MTTF figures of Fig.
7 by 500, respectively. As mentioned in section V, errors of the complex tests have been categorised into critical SDCs (C), tolerable SDCs (T), and processor hangs (H), otherwise referred to as crashes. An application failure occurs during an SDC or a processor hang event. In this case, the overall FIT rate of the system is \[\text{FIT}_{\text{all}}=\text{FIT}_{\text{critical}}+\text{FIT}_{\text{tolerable}}+\text{FIT}_{\text{hang}} \tag{3}\] However, in error-resilient applications, we can omit the \(\text{FIT}_{\text{tolerable}}\) from our calculations since tolerable SDCs do not affect output correctness. Thus, the overall FIT can be calculated as follows: \[\text{FIT}_{\text{C+H}}=\text{FIT}_{\text{critical}}+\text{FIT}_{\text{hang}} \tag{4}\] In Fig. 7 the MTTF of \(\text{FIT}_{\text{all}}\) is referred to as All and that of \(\text{FIT}_{\text{C+H}}\) as C+H. Regarding the MTTF results, we see that the failure rate of the SW-only LFRic and SVO applications is, on average, one order of magnitude lower than the rate of upsets in the APU L2 caches. This shows that the embedded SECDED mechanisms in the APU are effective even for a high upset rate in caches. Although the upset rate in the caches has been calculated for the DPU SW/HW co-design, we believe similar figures would hold for the LFRic and SVO applications. All complex tests share the same operating system and use the same software to send and receive data from the control PC. Therefore, we expect that the caches would be exercised similarly in all benchmarks and thus have the same dynamic cross-section. However, the MTTF\({}_{\text{All}}\) of SVO is 79% lower than that of LFRic, because SVO is more vulnerable to cache upsets due to its larger memory footprint. On the other hand, as mentioned in section V-A, all SDCs in LFRic are critical, while in SVO they are tolerable. Thus, the reliability degradation of SVO w.r.t. LFRic can be limited to 70% if we omit the FIT rate of tolerable SDCs from SVO, i.e., if we consider the MTTF\({}_{\text{C+H}}\) of the applications. Comparing the SW/HW co-design (i.e., DPU) with the SW-only applications (i.e., BareC, LFRic, and SVO), we observe that the DPU has, on average, a 90\(\times\) lower MTTF\({}_{\text{All}}\). This can be explained by the high FIT rate (low MTTF) of the PL accelerator, which deteriorates the total MTTF of the SW/HW co-design application. In contrast, BareC, LFRic, and SVO do not integrate any PL accelerator and therefore have an overall higher MTTF than the DPU. However, the MTTF\({}_{\text{All}}\) of the DPU is very low due to the increased rate of tolerable SDCs. Omitting the FIT rate of tolerable SDCs yields an MTTF\({}_{\text{C+H}}\) = 87 months, which is 4x lower than the MTTF\({}_{\text{C+H}}\) of the SW-only applications. The MTTF results of the DPU show that deploying SW/HW co-design applications at high altitudes or on a large scale requires some form of soft error mitigation like configuration memory scrubbing or even hardware redundancy in high-reliability systems. ## VII Conclusions This article evaluated the neutron Single Event Effect (SEE) sensitivity of the AMD UltraScale+ XCZU9EG MPSoC through accelerated neutron radiation testing and dependability analysis.
The cross sections of the device's Programmable Logic (PL) and Processing System (PS) memories were characterised under the following workloads: 1) a synthetic design that utilised all PL resources, 2) several single-threaded baremetal SW-only benchmarks, 3) two SW-only multi-threaded Linux-based applications for weather prediction and pose estimation, and 4) a SW/HW DPU co-design running the resnet50 image classification model. The device's neutron CRAM static cross-section was measured to be 1.84E-16 cm\({}^{2}\)/bit, which is in the range of previous studies (1.10E-16 cm\({}^{2}\)/bit \(-\) 3.40E-16 cm\({}^{2}\)/bit). The cross-sections of BRAM and SRL memories were one order of magnitude higher than that of CRAM. No NSEU in the CRAM resulted in a Multi-Cell Upset (i.e., two or more upsets in one configuration frame), indicating that SECDED scrubbing is adequate to recover PL upsets in XCZU9EG devices when used in terrestrial applications. We observed only one BRAM SEFI, one SRL SEFI and two SELs during the accelerated radiation tests, which exposed the MPSoC to more than 1.3 million hours of equivalent natural neutron fluence at NYC sea level. We conclude that the probability of SEFIs and SELs in MPSoC terrestrial applications is extremely low. To put the cross-section measurements into context, we conducted a dependability analysis assuming a one-node MPSoC system operating at NYC sea level (e.g., automotive) or 40k altitude (e.g., avionics) and a 1000-node MPSoC system at NYC sea level. All SW-only benchmarks achieved an MTTF higher than 148 months in the one-node system at 40k altitude, which points out that the PS can operate reliably despite a relatively high rate of cache upsets (MTTU = 48 months). Thus, we conclude that the embedded SECDED mechanisms of the PS can effectively recover NSEUs even in high altitude or large-scale MPSoC systems. Fig. 7: MTTF of 1) the SW-only multi-threaded applications (LFRic, SVO), and 2) the SW/HW multi-threaded co-design application (DPU). The MTTF metrics have been calculated for one MPSoC-based computing system operating in NYC at 40k feet. However, the DPU application was more prone to neutron-induced errors than the SW-only workloads. The MTTF of the DPU was estimated to be 4 months, assuming it runs on the same one-node system at 40k feet altitude. Thus, we conclude that SW/HW applications require extra soft error mitigation, e.g., hardware redundancy, to improve reliability in particular environments and device deployments. Finally, we showed that error-resilient applications like the DPU image classification can ignore tolerable errors to improve MTTF since these do not affect the final system result.
2304.04605
Prediction of Planet Yields by the PRime-focus Infrared Microlensing Experiment Microlensing Survey
The PRime-focus Infrared Microlensing Experiment (PRIME) will be the first to conduct a dedicated near infrared (NIR) microlensing survey by using a 1.8m telescope with a wide field of view of 1.45 ${\rm deg^{2}}$ at the South African Astronomical Observatory (SAAO). The major goals of the PRIME microlensing survey are to measure the microlensing event rate in the inner Galactic bulge to help design the observing strategy for the exoplanet microlensing survey by the _Nancy Grace Roman Space Telescope_ and to make a first statistical measurement of exoplanet demographics in the central bulge fields where optical observations are very difficult owing to the high extinction in these fields. Here we conduct a simulation of the PRIME microlensing survey to estimate its planet yields and determine the optimal survey strategy, using a Galactic model optimized for the inner Galactic bulge. In order to maximize the number of planet detections and the range of planet mass, we compare the planet yields among four observation strategies. Assuming the Cassan et al. (2012) mass function as modified by Penny et al. (2019), we predict that PRIME will detect planetary signals for $42-52$ planets ($1-2$ planets with $M_p \leq 1 M_\oplus$, $22-25$ planets with mass $1 M_\oplus < M_p \leq 100 M_\oplus$, $19-25$ planets with $100 M_\oplus < M_p \leq 10000 M_\oplus$) per year, depending on the chosen observation strategy.
Iona Kondo, Takahiro Sumi, Naoki Koshimoto, Nicholas J. Rattenbury, Daisuke Suzuki, David P. Bennett
2023-04-10T14:21:48Z
http://arxiv.org/abs/2304.04605v1
# Prediction of Planet Yields by the PRime-focus Infrared Microlensing Experiment Microlensing Survey ###### Abstract The PRime-focus Infrared Microlensing Experiment (PRIME) will be the first to conduct a dedicated near infrared (NIR) microlensing survey by using a 1.8m telescope with a wide field of view of 1.45 deg\({}^{2}\) at the South African Astronomical Observatory (SAAO). The major goals of the PRIME microlensing survey are to measure the microlensing event rate in the inner Galactic bulge to help design the observing strategy for the exoplanet microlensing survey by the _Nancy Grace Roman Space Telescope_ and to make a first statistical measurement of exoplanet demographics in the central bulge fields where optical observations are very difficult owing to the high extinction in these fields. Here we conduct a simulation of the PRIME microlensing survey to estimate its planet yields and determine the optimal survey strategy, using a Galactic model optimized for the inner Galactic bulge. In order to maximize the number of planet detections and the range of planet mass, we compare the planet yields among four observation strategies. Assuming the Cassan et al. (2012) mass function as modified by Penny et al. (2019), we predict that PRIME will detect planetary signals for \(42-52\) planets (\(1-2\) planets with \(M_{p}\leq 1M_{\oplus}\), \(22-25\) planets with mass \(1M_{\oplus}<M_{p}\leq 100M_{\oplus}\), \(19-25\) planets \(100M_{\oplus}<M_{p}\leq 10000M_{\oplus}\)), per year depending on the chosen observation strategy. Gravitational microlensing (672) -- Gravitational microlensing exoplanet detection (2147) -- Galactic bulge(2041) -- Galactic center (565) -- Galaxy structure (622) -- Near infrared astronomy(1093) ## 1 Introduction The number of the detection of exoplanets has exceeded 5,000. Most of these have been discovered via transit and radial velocity methods and have orbital radii and masses different from those of the solar system planets. The microlensing method, in contrast, is complementary to the other methods because it is sensitive to Earth-mass planets (Bennett & Rhie, 1996) beyond the snow-line (Gould & Loeb, 1992), as well as to free floating planets that are not orbiting a host star (Sumi et al., 2011; Mroz et al., 2017; Gould et al., 2022). The snow-line represents the boundary in the protoplanetary disk where H\({}_{2}\)O becomes ice, outside of which planet formation is predicted to be most active according to the core accretion model (Lissauer & Stewart, 1993; Pollack et al., 1996). Currently, there are three optical microlensing survey projects; the Microlensing Observations in Astrophysics (MOA; Bond et al., 2001; Sumi et al., 2003), the Optical Gravitational Lensing Experiment (OGLE; Udalski et al., 2015) and the Korea Microlensing Telescope Network (KMTNet; Kim et al., 2016). Thanks to these survey observations and other follow-up observations, the total number of planets detected via microlensing is 141 as of 2022 November 21. Statistical analyses using microlensing planets provide important findings such as cold planet frequency (Suzuki et al., 2016) and constraints on the dependence of cold planet frequency on the Galactic location (Koshimoto et al., 2021). Suzuki et al. (2016) measured the mass-ratio function of planets beyond the snow-line using 29 planets discovered by the MOA and other optical microlensing surveys. They found a break, and likely peak in the mass-ratio function near a Neptune mass for the first time. 
However, there is still a large degree of uncertainty in the location of the break (or peak) in the planet mass-ratio distribution owing to the lack of low-mass planets in their analysis. Recently Zang et al. (2022) have suggested a possibility that low-mass planets are more abundant than previous results. Their analysis used 13 planets including small mass-ratio planets detected by KMTNet, but did not correct for detection efficiencies. Koshimoto et al. (2021) used the statistical samples in Suzuki et al. (2016) and showed that there is no strong dependence of the cold planet frequency on the Galactocentric distance. The inner bulge (\(|b|\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}2^{\circ}\)) regions including the Galactic center have remained hidden for the current microlensing survey owing to high extinction. However, these regions are interesting because this is where we expect to find microlensing events in large quantities because of the high stellar density (Gould, 1995). In the near infrared (NIR), light can penetrate through the dust in this region. Comparing the measurements of the planet frequency using an NIR microlensing survey with that determined by the present optical survey, the dependency of planet occurrence on the Galactic structure can be measured, which provides key insights into planetary formation and its history in the Galaxy. So far, hundreds of microlensing events were discovered in the inner bulge region by the two NIR surveys, VISTA Variables in the Via Lactea Survey (VVV; Minniti et al., 2010) and the United Kingdom Infrared Telescope (UKIRT) Microlensing Survey (Shvartzvald et al., 2017, 2018). The VVV survey conducted an NIR survey toward the inner Galactic bulge including the Galactic central region and adjacent region of the Galactic plane by using the Visible and Infrared Survey Telescope for Astronomy (VISTA), a 4 m telescope with the 1.6 deg\({}^{2}\) field of view (FOV) VISTA InfraRed Camera (VIRCAM; Emerson and Sutherland, 2010) at ESO's Cerro Paranal Observatory in Chile. Although there are multiple epochs in \(K_{S}\)-band, the survey is not designed for microlensing and the observation cadence was irregular (1/day at best), which is generally inadequate to detect microlensing light curves with features due to planets. However, their survey is sufficient to reveal the number of microlensing events as a function of Galactic longitude and Galactic latitude. They found the Galactic longitude distribution (\(-10.0^{\circ}<l<10.44^{\circ}\)) by using 630 microlensing events discovered during \(2010-2015\)(Navarro et al., 2018) and the Galactic latitude distribution (\(-3.7^{\circ}<b<3.9^{\circ}\)) using 360 microlensing events (Navarro et al., 2020). From 2015 to 2018, the UKIRT Microlensing Survey (Shvartzvald et al., 2017) conducted a microlensing exoplanet survey toward the inner Galactic bulge by using the UKIRT 3.8 m telescope on Mauna Kea, Hawaii with a 0.8 deg\({}^{2}\) FOV infrared camera, Wide Field Camera (WFCAM). The UKIRT microlensing survey observed in \(H\)- and \(K\)-band filters. UKIRT-2017-BLG-001Lb (Shvartzvald et al., 2018) is the first planet that was found near the Galactic center at \((l,b)=(-0.12^{\circ},-0.33^{\circ})\) with a high extinction of \(A_{K}=1.68\). The discovery of UKIRT-2017-BLG-001Lb demonstrated that an NIR survey enables the detection of planets close to the Galactic center with high extinction. 
Although the above observations have been made, there are still no measurements of microlensing event rates and planet frequency in the inner Galactic bulge. The _Nancy Grace Roman Space Telescope_ is NASA's next flagship mission (Spergel et al., 2015), which is planned to launch in late 2026. It will be placed in a halo orbit around the second Sun-Earth Lagrange Point (L2). The main uses of \(Roman\) are to study dark energy and to conduct a statistical census of exoplanets by conducting a microlensing survey. \(Roman\) comprises a 2.4 m telescope with a 0.281 deg\({}^{2}\) wide FOV camera. The \(Roman\) Galactic Exoplanet Survey (Bennett and Rhie, 2002; Bennett et al., 2010) will comprise 15 minute cadence observations over a few square degrees toward the inner Galactic bulge with a wide W149 filter (\(1-2\)\(\mu\)m). Thanks to the high photometric accuracy and continuous observations during \(\sim 72\) days in each of six seasons over five years, \(Roman\) will detect \(\sim 1400\) cold exoplanets with masses greater than that of Mars (\(\sim 0.1M_{\oplus}\)) including 300 planets with mass of less than \(3M_{\oplus}\)(Penny et al., 2019). In addition, Johnson et al. (2020) shows that \(Roman\) would detect \(\sim 250\) free floating planets. Prior to the microlensing survey by \(Roman\), the PRIME-focus Infrared Microlensing Experiment (PRIME) will start its NIR microlensing survey toward the inner Galactic bulge in 2023. PRIME will conduct a high-cadence wide FOV survey by using a 1.8m telescope (f/2.29) with 1.45 deg\({}^{2}\) (0.5"/pix) FOV at Sutherland Observatory operated by the South African Astronomical Observatory (SAAO). Half of the observation time will be used for the microlensing planet survey towards the inner Galactic bulge. The other half will be used for other sciences, such as the transit surveys for M-dwarfs and the transient search for counterparts of high-z gamma-ray bursts and gravitational-wave events. Here we present results of our simulations that compare four observation strategies for the PRIME microlensing survey and predict the planet yields. In Section 2, we introduce the PRIME microlensing survey. Then we explain the methodology of our simulations in order to calculate the detection efficiency of microlensing events and planets in Section 3. Next, we calculate star counts, microlensing event rate, detection efficiencies, and detection number of microlensing events and planets for each line of sight over the inner Galactic bulge in Section 4. We present microlensing and planet yields depending on four observation strategies in Section 5. Finally, we discuss our results and summarize our conclusions in Section 6 and Section 7. ## 2 PRIME-Focus Infrared Microlensing Experiment (PRIME) ### The PRIME Microlensing Survey PRIME will be the first dedicated NIR microlensing experiment for the inner Galactic bulge. PRIME will use a NIR camera called PRIME-Cam, consisting of four Teledyne HgCdTe 4K x 4K photodiode array (H4RG-10) detectors with 10-micron pixels. The primary passband for the microlensing survey is \(H\)-band and \(Z\)-, \(Y\)-, \(J\)-band filters are also used for color measurements. The current plan, which is assumed in our simulations, is that each observation epoch will be composed of twelve 9-second co-added dithered exposures and take 160 sec including overheads (readout time per exposure, 3 sec, slew time for dithering, 1 sec, and slew time for the next field, 4 sec) per exposure. 
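As a quick check of the epoch length quoted above, the 160 sec per pointing follows from combining the twelve dithered exposures with the stated overheads; how the per-exposure and per-field overheads combine below is our reading of the text rather than an official timing budget.

```python
# One PRIME observation epoch: twelve 9 s dithered exposures plus overheads.
n_exposures = 12
t_exposure = 9    # s, single exposure
t_readout = 3     # s, readout per exposure
t_dither = 1      # s, slew per dither
t_next_field = 4  # s, slew to the next field (once per epoch)

epoch_seconds = n_exposures * (t_exposure + t_readout + t_dither) + t_next_field
assert epoch_seconds == 160
```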
Parameters for the PRIME telescope and PRIME-Cam are summarized in Table 1. We note that some parameters in Table 1 are current assumptions and are subject to change. \begin{table} \begin{tabular}{l r} \hline \hline Mirror diameter (m) & 1.8 \\ Field of View (\(\rm deg^{2}\)) & 1.45 \\ Detectors & 4 \(\times\) H4RG-10 \\ Pixel Scale (\(\arcsec\)/pixel) & 0.5 \\ Plate Scale (\(\mu\)m/pixel) & 10 \\ Primary bandpass (\(\mu\)m) & 1.64\(\pm\)0.30 (\(H\)-band) \\ \hline Exposure time (s) & 9 \\ Readout number & 3 \\ Stack number & 12 \\ Readout noise (counts/pixel) & 12.12 \\ Dark current (counts/pixel/s) & 0.130 \\ \hline QE & 0.88 \\ Throughput, \(\eta\) & 0.78 \\ Thermal background (counts/pixel/s) & 500 \\ Sky background (counts/pixel/s) & 3400 \\ Limiting magnitude (mag) & 18.5 (\(H\)-band) \\ Saturation limit (mag) & 11.0 (\(H\)-band) \\ \hline \end{tabular} \end{table} Table 1: Adopted parameters of the PRIME microlensing survey ### The Goal of the PRIME Microlensing Survey The main goals of the PRIME microlensing survey are to measure the microlensing event rate in the inner Galactic bulge to help design the observing strategy for \(Roman\)'s exoplanet microlensing survey and to make a first statistical measurement of exoplanet demographics in the central bulge fields where optical observations are very difficult owing to the high extinction in these fields. By comparing this with the planet frequency measured by optical observations, PRIME will reveal the Galactic distribution of planet frequency. PRIME also helps to provide insight into the performance of the H4RG-10 detectors that \(Roman\) will use. Moreover, after the \(Roman\) telescope begins to observe, simultaneous observations by PRIME and \(Roman\) will enable us to measure the microlensing parallax, which gives us the mass and distance of lens systems. In particular, observations where the baseline between the Earth and L2 is \(\sim 0.01\) au are sensitive to parallax measurements from the timing of a caustic crossing (Wyrzykowski et al., 2020), which is just as sharp a feature as planetary signals, and to parallax measurements down to the free-floating planet regime (Bachelet et al., 2022). ## 3 Simulations Although an expected microlensing event rate of each field in the inner bulge can be calculated by a model of our Galaxy, we need a survey simulation to obtain detection efficiencies of (i) microlensing events and (ii) planetary events to calculate how many microlensing events and planets are expected to be found by PRIME. In this section, we present the procedure of a Monte Carlo simulation for one year of PRIME observations toward the inner Galactic bulge with 16, 32, 48, and 96 minute cadence observations to estimate the detection efficiencies as a function of field coordinate and observation cadence. ### Simulation overview Figure 1 shows a schematic view of our simulation. For each Galactic coordinate and for each observation cadence, a Monte Carlo simulation is performed to calculate the detectability of one hundred thousand microlensing events. A brief explanation of each procedure is presented in the following. First, we randomly select source and lens objects from each star catalog at specific Galactic coordinates, \((l,b)\), generated from a stellar population synthesis model of our Galaxy. We then assign parameters for single-lens microlensing and binary-lens microlensing with planetary mass-ratios. Synthetic light curves are generated.
Each light curve is then modified according to the observation cadence, the parameters of PRIME-Cam and telescope, and observation conditions at Sutherland. Finally, based on the detection criteria, we will examine whether the microlensing events and the planetary signatures can be detected. ### Simulation of planetary microlensing events In this section, we describe how to simulate planetary microlensing light-curves. First, we generate a microlensing event by randomly drawing lens and source stars from catalogs of lens and source stars created by the Galactic model and adding a planet to the lens. Then we compute the parameters of single-lens and binary-lens models which are associated with the physical parameters assigned to the combination of the source and lens. Then, we calculate the magnification of that event as a function of time. #### 3.2.1 Galactic model and Catalogs of source and lens Koshimoto et al. (in prep). developed a stellar population synthesis tool, genstars2, which uses a modified version of the Galactic model by Koshimoto et al. (2021). The modified model is applicable for the inner bulge region because it has a nuclear stellar disk (NSD) structure based on the NSD model by Sormani et al. (2022). The NSD is not included in other population synthesis tools such as the Besancon model (Robin et al., 2003, 2012) or Galaxia (Sharma et al., 2011). Thus, genstars is currently only the public population synthesis tool suitable for our simulation toward the inner bulge region. Footnote 2: The software is available via Zenodo (Koshimoto, 2022) or [https://github.com/nkoshimoto/genstars](https://github.com/nkoshimoto/genstars). Note that we use a slightly different version of genstars from the public version, where the center of our Galaxy is at \((l,b)=(0,0)\) rather than at Sgr A* at \((l,b)=(-0.056^{\circ},-0.046^{\circ})\)(Reid & Brunthaler, 2004). The central shift slightly affects our simulation results in the inner NSD region or central \(\sim 0.5\) deg\({}^{2}\). However, the influence is negligible compared to other issues such as the underestimation of extinction in the Galactic central region which is shown in Koshimoto et al. (in prep). This version of their Galactic model will hereafter be referred to as KGM. In order to simulate the combination of a source and a lens for microlensing events, we use two star catalogs. The first list, the list of sources, is selected by specifying a range of magnitudes, \(10.5<H_{S}<22\) in the Vega magnitude system within 16 kpc from the Sun. The source list includes stars fainter than PRIME's limiting magnitude, \(H_{\rm lim}\sim 18.5\), because they can become bright enough to be detected if sufficiently magnified. The second list, the list of lenses, is selected without magnitude limitations (\(-\infty<H_{L}<\infty\)), i.e., including dark objects such as brown dwarfs, white dwarfs, neutron stars, and black holes. Each list contains the following physical parameters of sources or lenses: the magnitude, mass, radius, distance, and proper motions. #### 3.2.2 Microlensing parameters A microlensing event occurs when a foreground lens star passes close to the line of sight between an observer and a background source star. The gravity of the lens star bends the light from the source star and magnifies its brightness. 
The angular Einstein ring radius is given by, \[\theta_{\rm E}=\sqrt{\kappa M_{L}\pi_{\rm rel}}, \tag{1}\] where \(M_{L}\) is the mass of the lens object, and \(\kappa=4G(c^{2}\ {\rm au})^{-1}=8.14\ {\rm mas}M_{\odot}^{-1}\). When the distances from the observer to the lens and source are represented by \(D_{L}\) and \(D_{S}\), respectively, the lens-source relative parallax is \(\pi_{\rm rel}=1\ {\rm au}(D_{L}^{-1}-D_{S}^{-1})\). The magnification of the single-lens light-curve model depends on three parameters: the time of lens-source closest approach \(t_{0}\), the impact parameter in units of the Einstein radius \(u_{0}\), and the Einstein radius crossing time \(t_{\rm E}\). We also include the finite source effects and introduce one parameter: the ratio of the angular source size to the angular Einstein radius, \(\rho\). We assume uniform distributions of \(t_{0}\) and \(u_{0}\): \[0\leq t_{0}\leq T_{\rm obs}, \tag{2}\] \[0\leq u_{0}\leq u_{0,{\rm max}}, \tag{3}\] where we adopt the survey duration \(T_{\rm obs}=365.25\) days. We also adopt the maximum value of the impact parameter \(u_{0,\rm max}=1.0\). Events with \(u_{0}>1.0\) do not significantly affect the final result because their detection efficiency is lower owing to the low magnification. \(t_{\rm E}\) and \(\rho\) are derived from the physical parameters assigned to the combination of the source and lens, \[t_{\rm E}=\frac{\theta_{\rm E}}{\mu_{\rm rel}} \tag{4}\] \[\rho=\frac{\theta_{*}}{\theta_{\rm E}}, \tag{5}\] where \(\mu_{\rm rel}\) is the lens-source relative proper motion drawn from the velocity distribution in the Galactic model. The angular radius of the source star is \(\theta_{*}=R_{*}/D_{S}\), where \(R_{*}\) is the radius of the source star estimated from the source magnitude from genstars. Note that the microlensing event rate is not equal among all the source-lens pairs picked up from the catalogs because it is \(\propto\mu_{\rm rel}\theta_{\rm E}\). We will later add this weight when considering the statistics of simulated events. The magnification of the binary-lens model requires three additional parameters: the planet-host mass ratio, \(q\); the planet-host separation in units of the Einstein radius, \(s\); and the angle between the trajectory of the source and the planet-host axis, \(\alpha\). The mass ratio and the planet-host separation are given by \[q=\frac{M_{p}}{M_{h}} \tag{6}\] \[s=\frac{a_{\perp}}{D_{L}\theta_{\rm E}}, \tag{7}\] where \(M_{p}\) and \(M_{h}\) are the mass of the planet and the host star, respectively. Assuming a circular orbit, the projected orbital separation is \(a_{\perp}=a\sqrt{1-\cos^{2}\zeta}\), where \(a\) is the semi-major axis and \(\zeta\) is the angle between the plane of the sky and the binary axis at a given time. We use a uniform distribution of \(\cos\zeta\) assuming a circular planetary orbit that is inclined randomly to the line of sight. We use 21 fixed values of planetary mass distributed logarithmically in the range \(0.1<M_{p}<10^{4}M_{\oplus}\) (0.10, 0.18, 0.32,..., 10000 \(M_{\oplus}\)) and 15 fixed values of the semi-major axis in the range \(0.3<a<30\) au (0.3, 0.42, 0.58,..., 30 au). We also assume a uniform distribution of \(0^{\circ}<\alpha<360^{\circ}\). Figure 1: Schematic view of our simulation to estimate the detection efficiency of both microlensing events and planets at specific \((l,b)\). For each Galactic coordinate and for each observation cadence, a Monte Carlo simulation is performed to calculate the detectability of one hundred thousand microlensing events.
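For concreteness, the sketch below shows how the microlensing parameters of equations (1) and (4)-(7) follow from the physical quantities drawn from the source and lens catalogs; the unit conventions and constants are standard values, and this is an illustration rather than the survey simulation code.

```python
import numpy as np

KAPPA = 8.14         # mas / M_sun, kappa = 4G / (c^2 au)
RSUN_AU = 4.6505e-3  # solar radius in au

def microlensing_parameters(M_L, D_L, D_S, mu_rel, R_star, M_p, M_h, a_perp):
    """M_L, M_h, M_p in M_sun; D_L, D_S in kpc; mu_rel in mas/yr; R_star in R_sun; a_perp in au."""
    pi_rel = 1.0 / D_L - 1.0 / D_S            # mas (au/kpc convention)
    theta_E = np.sqrt(KAPPA * M_L * pi_rel)   # mas, eq. (1)
    t_E = theta_E / mu_rel * 365.25           # days, eq. (4)
    theta_star = R_star * RSUN_AU / D_S       # mas, angular source radius
    rho = theta_star / theta_E                # eq. (5)
    q = M_p / M_h                             # eq. (6)
    s = a_perp / (D_L * theta_E)              # eq. (7); D_L * theta_E is in au
    return theta_E, t_E, rho, q, s
```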
#### 3.2.3 Magnification calculation We calculate the magnification of the single-lens model as a function of time, using either the Yoo et al. (2004) or the Lee et al. (2009) method depending on the value of \(\rho\) for the calculation of the finite source with limb darkening as implemented in MulensModel(Poleski and Yee, 2019). In order to calculate the magnification of the binary-lens model, we use the advanced contour integration method as implemented in VBBinaryLensing(Bozza, 2010; Bozza et al., 2018). In our simulations, we do not consider higher-order effects such as parallax, xallarap, or lens orbital motion. We note that the magnification of the binary-lens model are calculated to generate synthetic data points in Section 3.3 and to examine the validity of planetary signatures in Section 3.4.1. The magnification of the single-lens model are calculated to investigate the detectability of microlensing events and planetary signatures by the \(\chi^{2}\) value of the single-lens model in Section 3.4.1. ### Generate synthetic data points After generating the microlensing models, the next step is to model how the microlensing events are observed by PRIME. We generate the synthetic data points with 16, 32, 48, and 96 minute cadences. #### 3.3.1 Exposure list First of all, we make an exposure list of observational parameters such as seeing and airmass for each exposure time (\(\sim 160\) sec). In order to reproduce actual observations, we consider the visibility of the Galactic center, weather, and the days of full moon at Sutherland. The observation toward the inner Galactic bulge is assumed to be conducted when the Sun's altitude is more than 12 degrees below the horizon and when the Galactic center's altitude is more than 20 degrees. Then, we remove the days of the bad weather and three days across the full moon from the set of observable times, based on observation statistics3 and online data4 over \(2016-2018\). The simulated observable time accounts for \(\sim 55-60\) % of the whole night time of the bulge season. Footnote 3: [https://kmtnet.kasi.re.kr/kmtnet-eng/observing-statistics-of-three-sites/](https://kmtnet.kasi.re.kr/kmtnet-eng/observing-statistics-of-three-sites/) Footnote 4: [https://kmtnet.kasi.re.kr/ulens/](https://kmtnet.kasi.re.kr/ulens/) After making the exposure list of the epochs when the Galactic center is visible, we assign the value of airmass and seeing to each exposure time. We calculate airmass from the altitude of the Galactic Center, \(\rm{airmass}=sec(z)\), where \(z\) is the zenith angle. We draw the seeing values from the log normal distribution presented in Kato et al. (2007). That work provides an observational seeing distribution under certain airmass conditions obtained observations of the Large Magellanic Cloud from Sutherland with the InfraRed Survey Facility (IRSF). We also consider the airmass dependence of the seeing, airmass\({}^{0.6}\), given by Woolf (1982). #### 3.3.2 Flux determination Now we have the exposure list, where the observational parameters such as exposure epoch, seeing, and airmass are assigned. Then we calculate the flux for each observable data point of a microlensing event. The PRIME photometry will be reduced by using an implementation of the MOA Difference Imaging Analysis (DIA) pipeline (Bond et al., 2001). 
Since the microlensing survey is conducted toward the inner Galactic bulge, where the surface density of stars is expected to be high, aperture photometry and point-spread function fitting photometry are known to be less effective in these crowded fields. With the magnification of the source flux as a function of time, \(A(t,\mathbf{x})\), which is defined the microlensing parameters, \(\mathbf{x}=(u_{0},t_{0},t_{\rm E},\rho,q,s,\alpha)\) described in Section 3.2.2, the total flux of the magnified source, \(F(t)\), is given by \[F(t)=A(t,\mathbf{x})F_{s}+F_{b}, \tag{8}\] where \(F_{s}\) is the baseline flux of the source star, and \(F_{b}\) is the blend flux which can, in principle include the lens flux. When we simulate data points for each microlensing event, data points are generated during \(T_{\rm min}<t<T_{\rm max}\), where \(T_{\rm min}=t_{0}-5t_{\rm E}\) and \(T_{\rm max}=t_{0}+5t_{\rm E}\). If \(T_{\rm min}<0\), we use \(T_{\rm min}=0\) and if \(T_{\rm max}>365.25\), we use \(T_{\rm max}=365.25\). We calculate the source flux, \(F_{s}\), by combining the \(H\)-band magnitude of the source star, \(H_{S}\) generated from genstars with the throughput, \(\eta\) in Table 1. To estimate the blending flux \(F_{b}\), we calculate the lens flux from the \(H\)-band magnitude of the lens star, \(H_{L}\), and the total flux of stars brighter than the limiting magnitude within the PSF, \(F_{\rm bright}\). We derive \(F_{\rm bright}\) by using the \(H\)-band images taken by the VVV survey fourth data release (DR4) (Minniti et al., 2010). We evaluate \(F_{\rm bright}\) by subtracting the smooth background flux from the total flux in the region within the typical \(H\)-band seeing disc at Sutherland (\(\sim 1.4^{\prime\prime}\)). Then, the blending flux, \(F_{b}\), can be obtained by adding the lens flux and \(F_{\rm bright}\) contaminated in the event. We evaluate the flux uncertainty \(F_{\rm err}\) by quasi-smooth backgrounds such as sky backgrounds and faint unresolved stars, and instrumental backgrounds such as thermal background and dark current. These sources of error and their magnitudes are summarized in Table 1. In ground-based observations, the brightness of the sky background is higher in the NIR wavelength than in the optical wavelength. In particular, intensities of the OH emission lines significantly dominate the sky background in the \(H\)-band. OH lines are known to fluctuate not only within the FOV but also throughout the night. In our simulation, we simulate those variations by randomly taking the sky brightness from a uniform range of \(13.0-14.2\) mag/arcsec\({}^{2}\) for each observation epoch because there is no measurement of the specific distribution of \(H\)-band sky brightness and its dependence on the observation conditions at Sutherland. We also consider that variations in the sky background due to changes in the moonlight are almost negligible because we exclude observations across a three day interval across the time of full moon. This is a conservative assumption because the moon's contribution to the sky background is minimal in the \(H\)-band when the separation angle between the target and the moon is more than 10 degrees (Pedani, 2014). Although there may be systematic errors due to insufficient sky subtractions, DIA will deal with slight variations in sky background in actual observation. Thus, we do not take them into account in our simulations assuming that the sky background is successfully subtracted. 
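To make equation (8) concrete, the sketch below shows how a single simulated data point could be generated for a single-lens event. It uses the point-source Paczyński magnification instead of the finite-source calculation described in Section 3.2.3 and keeps all fluxes in detector counts, so it illustrates the noise model only and is not the code used in our simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pspl_magnification(t, t0, u0, tE):
    """Point-source point-lens (Paczynski) magnification."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def simulate_data_point(t, t0, u0, tE, F_s, F_b, F_background, read_noise):
    """One observed flux following F(t) = A(t) F_s + F_b, eq. (8), in counts."""
    F_true = pspl_magnification(t, t0, u0, tE) * F_s + F_b
    # Poisson noise from source, blend and background counts; Gaussian readout noise.
    counts = rng.poisson(F_true + F_background) - F_background
    counts = counts + rng.normal(0.0, read_noise)
    err = np.sqrt(F_true + F_background + read_noise**2)
    return counts, err
```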
The average flux of quasi-smooth background produced by faint unresolved stars, \(F_{\rm faint}\) is estimated by the smooth background light in the region within the resolution of the simulation, \(0.25\arcdeg\times 0.25\arcdeg\) using the \(H\)-band images in VVV DR4. We consider both Poisson noise from the total flux, \(F(t)\), quasi-smooth backgrounds and instrumental backgrounds, and Gaussian noise from the readout noise. It is known that the true photometric errors are underestimated owing to the crowded stellar fields, nearby bright stars, scintillation, and flat-fielding, etc. In order to include a fractional systematic uncertainty, we also add 0.3 % of the magnitude in quadrature to each error. The resultant photometric precision for each observation epoch as a function of \(H\)-band magnitude is shown in Figure 2, assuming no blending and typical seeing. Each observation epoch will be composed of twelve 9-second co-added dithered exposures and take 160 sec including overhead. As the gray area shows, photometric precision varies by up to 20% with respect to the black line, depending on the value of sky brightness. The typical photometric accuracies are \(\sigma_{K_{S}}=0.01\) mag and \(\sigma_{J,H}=0.03\) mag for the VVV survey (Navarro et al., 2020), and \(\sigma_{Y,J,H}<0.02\) mag for the UKIRT Microlensing Survey (Lawrence et al., 2007; Lucas et al., 2008). The photometric precision of the PRIME microlensing survey is \(\sigma_{H}<0.03\) mag for bright sources with \(H<16.5\) mag. Moreover, the limiting magnitude of PRIME is \(H_{\rm lim}\sim 18.5\) mag, which is brighter than limiting magnitudes5 of those surveys. This is reasonable considering PRIME's smaller aperture than these two telescopes. Compared with those NIR surveys, PRIME has a comparable performance to those other NIR surveys, but will conduct the microlensing survey with much higher observation cadences, which is essential for the detection of planetary signals due to low-mass planets. Footnote 5: The limiting magnitudes are \(H_{\rm lim}\sim 19.5\) mag for the VVV survey (Zhang & Kainulainen, 2019) and \(H_{\rm lim}\sim 19.0\) mag for the UKIRT Microlensing Survey (Lawrence et al., 2007). ### Detection Criteria Figure 2: Photometric precision of PRIME as a function of \(H\)-band magnitude when each observation epoch will be composed of twelve 9-second co-added dithered exposures and take 160 sec including overhead, assuming no blending and typical seeing. The black line shows the photometric precision assuming the sky background is \(13.6\) mag/arcsec\({}^{2}\). The gray region shows the photometric precision assuming the sky background is \(13.0-14.2\) mag/arcsec\({}^{2}\). The red and blue dotted lines indicate the saturation limit in a single read and the faint magnitude limit for a \(5\sigma\). #### 3.4.1 Microlensing event In order to detect planets via microlensing, it is required to detect both the microlensing event itself and to distinguish the planetary perturbations from the single-lens event. We defined five criteria for the detection of microlensing events, which are summarized in Table 2. The first criterion is as follows, \[\Delta\chi^{2}_{\rm ML}\equiv\chi^{2}_{\rm const}-\chi^{2}_{\rm ML}>\Delta\chi^ {2}_{\rm ML,th}, \tag{9}\] where \(\chi^{2}_{\rm const}\) and \(\chi^{2}_{\rm ML}\) is the \(\chi^{2}\) of the best-fit constant flux and best-fit single-lens model, respectively. We use \(\Delta\chi^{2}_{\rm ML,th}=500\). 
The second criterion is that there must be more than 100 data points to guarantee modeling accuracy. The third criterion is that there must be data points before and after the peak time of the event, which enhances the accuracy of the parameters measured from the light-curves. The fourth criterion is that the maximum value of the source brightness must be \(>5\) times larger than the flux error at the time. We note that this criteria is more conservative than criteria that is used in the analysis by KMTNet (Zang et al., 2022, 2022). The fifth criterion is that there are at least three consecutive points with the observed flux deviating from the constant baseline by more than \(5\sigma\). This requirement is intended to reduce the occasional artifacts on the baseline, like cosmic ray hit. Note that some events passed these criteria thanks to their planetary perturbation. Thus, even events with weak signals from the microlensing event itself have not been missed in our simulations if its planetary signature is sufficiently strong. #### 3.4.2 Planetary Signature To estimate the expected yields of the planet detection by the PRIME microlensing survey, we need to set the planet detection criteria. Our criterion for the detection of planetary signature is as follows, \[\Delta\chi^{2}_{\rm PL}\equiv\chi^{2}_{\rm ML}-\chi^{2}_{\rm PL}>\Delta\chi^ {2}_{\rm PL,th}, \tag{10}\] where \(\chi^{2}_{\rm ML}\) and \(\chi^{2}_{\rm PL}\) is the \(\chi^{2}\) of the best-fit single-lens model and binary-lens model, respectively. We use \(\Delta\chi^{2}_{\rm PL,th}=160\) following previous microlensing simulations (e.g., Bennett and Rhie, 2002; Penny et al., 2013; Henderson et al., 2014). Although Suzuki et al. (2016) conducted their statistical analysis using a \(\Delta\chi^{2}_{\rm PL}\) threshold of 100 from only MOA survey data, we use \(\Delta\chi^{2}_{\rm PL,th}=160\) as a conservative assumption in order to consider uncertainties in our simulation. We investigate the impact of changing \(\Delta\chi^{2}_{\rm PL,th}\) on our simulation results. When we use \(\Delta\chi^{2}_{\rm PL,th}=100\), the detection efficiency of planetary signatures averaged over the planetary masses becomes \(\sim 12\%\) higher than that of \(\Delta\chi^{2}_{\rm PL,th}=160\). As the result, the change of threshold slightly increases the planet detections described in Section 5.2. We also estimate the detection efficiency of planetary signatures averaged over the planetary masses for \(\Delta\chi^{2}_{\rm PL,th}=300\) and find that the detection efficiency is \(\sim 16\%\) lower than that of \(\Delta\chi^{2}_{\rm PL,th}=160\). Despite the lower detection rate, the number of Earth-mass planets to be detected is still more than one. Although the change of threshold affects the planet yields slightly, there is no significant change in the trend in the number of planet detections depending on observation strategies and our results in Section 5.2. ### Simulated light-curves Figure 3 shows examples of simulated microlensing events in which the planetary signature can be detectable by PRIME. Although the duration of the significant deviation due to the low mass planet is only a few hours (top panels in Figure 3), the planetary signature is detectable if there are sufficient observation data. The detection efficiency for high mass planets is high because the duration of the planetary perturbation is typically a few days (bottom panels in Figure 3). 
On the contrary, Figure 4 shows examples of planetary events whose planetary signatures are missed in \begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{ level} & \multicolumn{1}{c}{criteria} & \multicolumn{1}{c}{comments} \\ \hline Microlensing & \(\Delta\chi^{2}_{\rm ML}>\Delta\chi^{2}_{\rm ML,th}=500\) & \(\Delta\chi^{2}\) between the constant flux and single-lens models must be \(>500\) \\ & \(N_{\rm data}>100\) & Number of data points must be \(>100\) \\ & \(N_{\rm data,(t<t_{0})}\geq 1\ \&\ N_{\rm data,(t>t_{0})}\geq 1\) & Data point(s) must exist before and after the peak time of the event \\ & \(A(t_{\rm max})F_{s}/F_{\rm err}(t_{\rm max})>5\) & Maximum value of the source brightness at \(t_{\rm max}\) must be \(>5\) times larger than \\ & \(N_{5\sigma}>3\) & \(>3\) consecutive points with \(>5\sigma\) deviation from the baseline must exist. \\ \hline Planet & \(\Delta\chi^{2}_{\rm PL}>\Delta\chi^{2}_{\rm PL,th}=160\) & \(\Delta\chi^{2}\) between the single-lens and binary-lens models must be \(>160\) \\ \hline \end{tabular} \end{table} Table 2: Detection criteria Figure 3: Examples of simulated microlensing events whose planetary perturbation are detectable with the PRIME microlensing survey. The insets show the zoom-in of planetary signatures. The red dots show the synthetic data points with a 16 minute cadence. The planetary model for each event is shown in the orange line. The gray dotted lines show the best-fit single lens models. our simulation. The artificial event in the top panel is located in a field observed with a 32 minute cadence. The duration of the signature due to a planet with mass of 1 \(M_{\oplus}\) is too short to be detected. The event in the bottom panel of Figure 4 has a longer planetary signature due to a 10000 \(M_{\oplus}\) planet. However, the planetary signature is missed because the there are no data points during the period of perturbation. ## 4 Statistics of observable microlensing events By repeating the steps described in the previous section as illustrated in Figure 1, we conduct a Monte Carlo simulation of microlensing events and probe their detectability for each specified Galactic longitude and latitude, so that we obtain the expected number of microlensing events and planets. In the first four subsections, we calculate the number of detections of microlensing events. The yields of microlensing events for each Galactic coordinate per square degree during the survey duration \(T_{\rm obs}\), \(N_{\rm ML}(l,b)\), are derived by multiplying the number of source stars, \(N_{\rm source}(l,b)\), the event rate, \(\Gamma_{\rm source}(l,b)\), and the detection efficiency of microlensing events, \(\epsilon_{\rm ML}(l,b)\), \[N_{\rm ML}(l,b)=\Gamma_{\rm source}N_{\rm source}T_{\rm obs}\epsilon_{\rm ML}. \tag{11}\] We show the distribution of \(N_{\rm source}\) and \(\Gamma_{\rm source}\) for each field at first in Figure 5 and Figure 7. Then we show the results of the estimation of the detection efficiency and the number of detections of microlensing events as a function of field coordinate and observation cadence in Figure 9 and Figure 13. In the last two subsections, we also calculate the number of detections of planets per square degree per year, \(N_{\rm PL}(l,b)\) in Figure 15, as follows, \[N_{\rm PL}(l,b)\] \[=\int_{a=0.3\rm au}^{a=30\rm au}\int_{M_{p}=0.1M_{\oplus}}^{M_{p} =10^{5}M_{\oplus}}N_{\rm ML}\epsilon_{\rm PL}f_{p}d\log(a)d\log(M_{p}). 
\tag{12}\] where \(\epsilon_{\rm PL}(l,b,a,M_{p})\) is the detection efficiency of planets and \(f_{p}[\log(a),\log(M_{p})]\) is the cool-planet mass function. We conduct our simulation for 875 fields over \(-4.25^{\circ}<l<4.5^{\circ}\) and \(-3.25^{\circ}<b<3^{\circ}\) with a resolution of \(0.25^{\circ}\times 0.25^{\circ}\). The Surot et al. (2020) extinction map used in genstars has up to \(0.0025^{\circ}\times 0.0025^{\circ}\) resolution. To reduce the computational time without losing the extinction variation, the number of the sources and lenses in catalogs are reduced by a scaling factor \(f_{\rm sim}\) in genstars. The source and lens catalogs for each grid are created by giving the grid size of \(0.25^{\circ}\times 0.25^{\circ}\), where the extinction variation with the resolution of \(0.0025^{\circ}\times 0.0025^{\circ}\) are taken into account. Then we use the scaling factor \(f_{\rm sim}=0.0032\) that reduces uniformly to \(0.0032\) times the number of stars in the given grid. Along each grid and each observation cadence, we randomly generate one hundred thousand microlenisng events by using the source and lens catalogs. ### Source star counts Figure 5 shows the KGM stellar density map for stars with \(10.5<H_{S}<22\); \(N_{\rm source}(l,b)\), calculated from Figure 4: Same as Figure 3, but for the planetary microlensing events that do not pass the detection criteria of the planetary signatures. Observation cadence is 32 minutes in these examples. the source catalogs. The star counts per square degree, \(N_{\rm source}(l,b)\), along the line of sight is calculated as, \[N_{\rm source}(l,b)=\frac{N_{\rm sim}}{f_{\rm sim}\delta\Omega_{S}}, \tag{13}\] where \(N_{\rm sim}\) is the number of source stars generated by \(\mathtt{genstars}\), \(\delta\Omega_{S}=0.25^{\circ}\times 0.25^{\circ}\) is the solid angle within which each source is drawn from \(\mathtt{genstars}\), and \(f_{\rm sim}=0.0032\) is the scaling factor that we specified to limit the number of output stars by \(\mathtt{genstars}\). Star counts depend on the combination of stellar number density and extinction. Most of stars in the region \(|b|<0.5^{\circ}\) and \(|l|<1.5^{\circ}\) belong to the NSD component, yielding a relatively high stellar density. However, owing to high extinction, the number of sources is few in the Galactic center and the Galactic plane. Therefore, according to Figure 5, the mean number of stars in the region \(-0.75<b<0.5\) is \(\sim 5.3\times 10^{7}\) stars per square degree, which is \(\sim 23\%\) and \(\sim 12\%\) lower than that in the region \(-2.0<b<-0.75\) and \(-3.25<b<-2.0\), respectively. We also compare the bulge star counts by KGM with that by observation for validation. Figure 6 shows a comparison between luminosity functions in the Stanek window (\(l,b=[0.25^{\circ},-2.15^{\circ}]\)) predicted by the KGM and as observed by the _Hubble Space Telescope_ (\(HST\)) (Terry et al., 2020). Terry et al. (2020) distinguished between foreground stars and bulge stars by accurate measurement of the longitudinal proper motion. Although we should use same cut of the proper motion as Terry et al. (2020), here we plot the counts for stars labeled bulge stars in the output catalog by \(\mathtt{genstars}\). Figure 6 shows that stars with \(H\lower 2.15pt\hbox{$\;\buildrel>\over{\sim}\;$}19.5\) mag are underestimated in KGM. However, this discrepancy is not expected to affect simulation results for two reasons. 
First, at the Galactic center and in the Galactic plane (\(|l|<2^{\circ},|b|<1^{\circ}\)), owing to the high extinction \(A_{H}\)6\(\sim 1.5-3.5\) compared to the extinction in the Stanek window \(A_{H,\rm{stanek}}\sim 0.68\), the underestimated faint stars are expected to be almost undetectable by PRIME even if the magnification is high. Second, at fields away from the Galactic center (e.g. \(|l|<2^{\circ},-2^{\circ}<b<-1^{\circ}\)), although the extinction (\(A_{H}\sim 0.4-1.0\)) is almost the same as that at the Stanek window, we expect little effect on the total re Figure 5: Map of star counts with \(10.5<H_{s}<22\) mag, \(N_{\rm source}(l,b)\), in our source catalogs generated by \(\mathtt{genstars}\). Most of stars in the region \(|b|<0.5^{\circ}\) and \(|l|<1.5^{\circ}\) belong to the NSD component, yielding high stellar density. However owing to the high extinction, the number of source is few in the Galactic center and the Galactic plane. Figure 6: Comparison of star counts in Stanek window (\(l,b=[0.25^{\circ},-2.15^{\circ}]\)) in KGM (blue line) for the bulge population as a function of \(H\)-band magnitude to those by \(HST\) observation in Terry et al. (2020) (red points). Stars with \(H>19.5\) mag are underestimated in the Galactic model. sult because of the small percentage of detectable events with \(H_{S}\ \lower 2.0pt\hbox{$\buildrel>\over{\sim}$}\ 19.5\) owing to the low detection efficiency for faint source stars. ### Event Rate The microlensing event rate, \(\Gamma_{\rm source}(l,b)\), is the probability that a source star is magnified by a foreground lens star per unit time. The event rate per source is calculated via Monte Carlo integration of the event rate using source and lens catalogs as follows (Awiphan et al., 2016; Penny et al., 2013), \[\Gamma_{\rm source}(l,b)\] \[=\frac{\Omega_{\rm los}}{f_{\rm sim}\delta\Omega_{S}}\frac{1}{N_{ \rm sim}}\ \sum^{\rm sources}\left(\frac{1}{f_{\rm sim}\delta\Omega_{l}}\sum^{\rm Lenses }_{D_{L}<D_{S}}2\theta_{\rm E}\mu_{\rm rel}\right), \tag{14}\] where \(\Omega_{\rm los}\) is the solid angle of each grid, and \(\delta\Omega_{S}\) and \(\delta\Omega_{L}\) are the solid angle of the source and lens catalogs, respectively. In our simulation, we use \(\Omega_{\rm los}=\delta\Omega_{S}=\delta\Omega_{L}=0.25^{\circ}\times 0.25^{\circ}\). Figure 7 shows the KGM map of event rate per source, \(\Gamma_{\rm source}(l,b)\), derived using our source and lens catalogs. According to Figure 7, at the NSD region (\(|b|<0.5^{\circ},|l|<1.5^{\circ}\)) the event rate is highest among all fields. This is because \(\Gamma_{\rm source}(l,b)\) is mainly determined by stellar density. The mean event rate per source in the region \(-0.75<b<0.5\) is \(\sim 2.5\times 10^{-5}\), which is \(\sim 15\%\) and \(\sim 73\%\) higher than that in the region \(-2.0<b<-0.75\) and \(-3.25<b<-2.0\), respectively. Figure 8 compares the model event rate values with the observational values by Mroz et al. (2019). Mroz et al. (2019) shows the optical depth and event rate maps by using the largest sample of 8000 events from the optical survey of OGLE-IV during \(2010-2017\). Owing to the high extinction around the Galactic center, there is no measurement of event rate at \(|b|<1^{\circ}\) by OGLE. Outside of the Galactic plane, the two values of event rate are almost coincident, thus we conclude that there is no need of correction for the model event rate values as was done in Penny et al. (2019). 
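A schematic version of the bookkeeping behind Equations (13) and (14) is sketched below, using tiny mock catalogs with arbitrary values; it only illustrates the scaling by \(f_{\rm sim}\) and the solid angles and the weighting of each source-lens pair by \(2\theta_{\rm E}\mu_{\rm rel}\), and it omits unit conversions as well as the dependence of \(\theta_{\rm E}\) and \(\mu_{\rm rel}\) on the individual source-lens pair.

```python
import numpy as np

F_SIM = 0.0032              # catalog down-sampling factor used with genstars
D_OMEGA = 0.25 * 0.25       # delta Omega_S = delta Omega_L = Omega_los [deg^2]

def star_counts(n_sim, f_sim=F_SIM, d_omega_s=D_OMEGA):
    """Eq. (13): source stars per square degree along one line of sight."""
    return n_sim / (f_sim * d_omega_s)

def event_rate_per_source(src_dist, lens_dist, theta_e, mu_rel,
                          f_sim=F_SIM, omega_los=D_OMEGA,
                          d_omega_s=D_OMEGA, d_omega_l=D_OMEGA):
    """Eq. (14): Monte Carlo event rate per source for one grid cell.
    theta_e and mu_rel are kept as fixed per-lens arrays purely to keep
    the sketch short; in the real calculation they depend on each
    source-lens pair."""
    inner = []
    for d_s in src_dist:                       # sum over sources
        in_front = lens_dist < d_s             # keep only lenses with D_L < D_S
        inner.append(np.sum(2.0 * theta_e[in_front] * mu_rel[in_front])
                     / (f_sim * d_omega_l))
    return omega_los / (f_sim * d_omega_s) * np.mean(inner)

# Tiny mock catalogs (arbitrary values and units):
rng = np.random.default_rng(1)
src = rng.uniform(6.0, 10.0, size=50)          # source distances
lens = rng.uniform(1.0, 9.0, size=200)         # lens distances
theta_e = rng.uniform(0.1, 1.0, size=200)      # Einstein radii
mu_rel = rng.uniform(1.0, 10.0, size=200)      # relative proper motions
print(star_counts(n_sim=len(src)))
print(event_rate_per_source(src, lens, theta_e, mu_rel))
```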
### Detection efficiency for microlensing events We estimate the detection efficiencies for microlensing events, \(\epsilon_{\rm ML}(l,b)\), along each line of sight of the inner Galactic bulge. Using the detection criteria described in Section 3.4.1, the detection efficiency of microlensing events, \(\epsilon_{\rm ML}(l,b)\), is defined as the ratio of the number of detected events to the number of all simulated events (with each simulated event weighted by \(2\mu_{\rm rel,i}\theta_{\rm E,i}\)) and is calculated as \[\epsilon_{\rm ML}(l,b)=\frac{\Sigma_{i,\rm microlensing}\ 2\mu_{\rm rel,i}\theta_{\rm E,i}}{\Sigma_{i,\rm all}\ 2\mu_{\rm rel,i}\theta_{\rm E,i}}. \tag{15}\] Figure 9 shows the mean detection efficiency of microlensing events along each line of sight for each observation cadence. With the same observation cadence, the detection efficiency is lower at the Galactic center than away from the Galactic center. The mean detection efficiency with a 16 minute cadence in the region \(-0.75<b<0.5\) is \(\sim 0.07\), which is \(\sim 29\%\) and \(\sim 43\%\) lower than that in the region \(-2.0<b<-0.75\) and \(-3.25<b<-2.0\), respectively. There are two reasons why the mean detection efficiency of microlensing events is lower at the Galactic center. The first reason is the large fraction of short \(t_{\rm E}\) events at the Galactic center. The top panels in Figure 10 show \(t_{\rm E}\) distributions for all simulated events (red histogram) and detected events (blue histogram) at two Galactic coordinates. The median value of \(t_{\rm E}\) at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) is \(\sim 5.1\) days, which is smaller than \(\sim 9.7\) days at \((l,b)=(0.125^{\circ},-2.625^{\circ})\), because the majority of events toward the former direction comprise a source and a lens located in the bulge, yielding a small lens-source relative parallax, \(\pi_{\rm rel}\), and a small angular Einstein ring radius, \(\theta_{\rm E}\) (Equations (1) and (4)). Microlensing events with short \(t_{\rm E}\) are detected less efficiently by the survey, as indicated by the green lines in Figure 10. Therefore the mean detection efficiency, \(\epsilon_{\rm ML}\), at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) is lower than that at \((l,b)=(0.125^{\circ},-2.625^{\circ})\). The second reason is the large fraction of faint stars owing to the high extinction at the Galactic center. The top panels in Figure 11 show the luminosity functions for both all simulated events (red histogram) and detected events (blue histogram) at the same Galactic coordinates as in Figure 10. The estimated extinction values are \(A_{H}\sim 4.4\) and \(A_{H}\sim 0.7\) at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) and at \((l,b)=(0.125^{\circ},-2.625^{\circ})\), respectively. The detection efficiency as a function of \(H_{S}\) is lower for faint stars than for bright stars, as indicated by the green lines in Figure 11.
The fraction of faint sources with \(H_{S}>17.5\) in all Figure 9: Mean detection efficiency of microlenisng events along each line of sight, \(\epsilon_{\rm ML}(l,b)\). Each plot shows the detection efficiency for different cadences. With the same observation cadence, the detection efficiency is lower at the Galactic center than away from the Galactic center. See the text for an explanation of these trends. At the same field, the lower the observation cadence, the lower the detection efficiency. events, which are lower \(\epsilon_{\rm ML}\), is \(\sim 30\%\) and \(\sim 6\%\), at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) and \((l,b)=(0.125^{\circ},-2.625^{\circ})\), respectively. Therefore, owing to high extinction, the large fraction of faint stars, whose detection efficiency is low, also results in low mean detection efficiency at the Galactic center. Figure 9 also shows that, at the same field, the lower the cadence, the lower the detection efficiency. Compared to the mean detection efficiency in the same region with a 16 minute cadence, the detection efficiencies are \(\sim 9\%,~{}17\%,~{}33\%\) lower with 32, 48, 96 minute cadences, respectively. In Figure 12, we plot the detection efficiency of microlensing events depending on the Einstein crossing time, \(t_{\rm E}\). As expected, the detection efficiency becomes lower near the Galactic center and/or with lower cadence. It is difficult to detect microlensing events with \(t_{\rm E}\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}~{}0.3\), 0.6, 1, and 3 days when the observation cadence is 16, 32, 48, and 96 minutes, respectively. Figure 10: The Einstein ring crossing time, \(t_{\rm E}\), distribution at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) (left panels) and at \((l,b)=(0.125^{\circ},-2.625^{\circ})\) (right panels). Top panels show the distribution of all simulated events (red) and detected microlensing events (blue) with a 16 minute cadence by the assumed PRIME survey. Bottom panels show the distribution of detected microlensing events (blue) and detected planetary events (black). The vertical lines show the median value of each histogram. The dashed green and orange lines show the detection efficiency of microlensing events and planetary events depending on \(t_{\rm E}\), respectively. Figure 11: The source magnitude, \(H_{S}\), distribution at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) (left panels) and at \((l,b)=(0.125^{\circ},-2.625^{\circ})\) (right panels). Top panels show the distribution of all simulated events (red) and detected microlensing events (blue) with a 16 minute cadence. Bottom panels show the distribution of detected microlensing events (blue) and detected planetary events (black). The dashed green and orange lines show the detection efficiency of microlensing events and planetary events depending on \(H_{S}\), respectively. ### The Number of Detected Microlensing events Figure 13 shows the yields of microlensing events for each Galactic coordinate per square degree for one year, \(N_{\rm ML}(l,b)\), calculated by Equation (11). According to Figure 13, the mean number of microlensing yields with a 16 minute cadence in the region \(-0.75<b<0.5\) is \(\sim 93\) events per square degree, which is \(\sim 41\%\) and \(\sim 18\%\) lower than that in the region \(-2.0<b<-0.75\) and \(-3.25<b<-2.0\), respectively. Compared to the microlensing yields in the same region with a 16 minute cadence, the yields are \(\sim 10\%,~{}18\%,~{}35\%\) lower with 32, 48, 96 minute cadences, respectively. 
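As a quick consistency check of Equation (11), multiplying the representative mean values quoted above for the region \(-0.75<b<0.5\) with a 16 minute cadence (\(N_{\rm source}\sim 5.3\times 10^{7}\) stars per square degree, \(\Gamma_{\rm source}\sim 2.5\times 10^{-5}\) per source, \(\epsilon_{\rm ML}\sim 0.07\)) over \(T_{\rm obs}=1\) yr reproduces the quoted yield of roughly 93 events per square degree; the one-line sketch below assumes \(\Gamma_{\rm source}\) is expressed per source per year.

```python
def microlensing_yield(gamma_source, n_source, t_obs_yr, eff_ml):
    """Eq. (11): microlensing events per square degree over the survey duration."""
    return gamma_source * n_source * t_obs_yr * eff_ml

# Representative mean values for -0.75 < b < 0.5 with a 16 minute cadence:
print(round(microlensing_yield(2.5e-5, 5.3e7, 1.0, 0.07)))  # ~93 events per deg^2
```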
### Detection efficiency for planetary signatures We also estimate the detection efficiencies of the planetary signatures \(\epsilon_{\rm PL}(l,b,a,M_{p})\) along each line of sight. Following the detection criteria of planetary signatures described in Section 3.4.2, detection efficiency of a planetary signature is defined as the ratio of the number of detected planets' events to the number of detected events as microlensing \[\epsilon_{\rm PL}(l,b,a,M_{p})=\frac{\Sigma_{i,\rm planet}~{}2\mu_{\rm rel,i} \theta_{\rm E,i}}{\Sigma_{i,\rm microlensing}~{}2\mu_{\rm rel,i}\theta_{\rm E,i}}. \tag{16}\] Figure 14 shows the detection efficiency of planetary signatures, \(\epsilon_{\rm PL}(M_{p})\), as a function of planet mass, which are obtained by averaging over all 875 fields and are summed across semi-major axis, \(0.3<a<30\) au. With a 16 minute cadence, the detection efficiencies of Jupiter mass planet, Neptune mass, and Earth mass planet are \(\sim 0.05\), \(\sim 0.007\), and \(\sim 0.0006\), respectively. Compared to the detection efficiency with a 16 minute cadence, the detection efficiency is \(\sim 15-20\%\), \(\sim 30-50\%\), and \(\sim 50-70\%\) lower with 32, 48, and 96 minute cadences, respectively. In addition, the degree of decrease in detection efficiency with observation cadence is greater for low-mass planets. We note that detection efficiency of the planetary signature can be regarded as almost the same over all fields simulated, owing to the combination of \(t_{\rm E}\) distributions and luminosity functions. Firstly, at the Galactic center, the fraction of short \(t_{\rm E}\) events is larger than that away from the Galactic center. The bottom panels in Figure 10 show \(t_{\rm E}\) distributions for both the detected microlensing events (blue histogram) and detected planetary events (black histogram) at two Galactic coordinates. The median values of \(t_{\rm E}\) for microlensing events at \((l,b)=(0.125^{\circ},-0.125^{\circ})\), is \(\sim 8.9\) days, which is smaller than \(\sim 13.2\) days at \((l,b)=(0.125^{\circ},-2.625^{\circ})\). Planetary events with short \(t_{\rm E}\) are detected less efficiently by the survey, see the lines in Figure 10 describing \(\epsilon_{\rm PL}\), as well as the detection efficiency of microlensing events, \(\epsilon_{\rm ML}\). Secondly, the fraction of bright stars at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) is larger than that at \((l,b)=(0.125^{\circ},-2.625^{\circ})\). The bottom panels in Figure 11 show the luminosity functions for both the detected microlensing events (blue histogram) and detected planetary events (black histogram). The detection efficiency of planetary signatures, \(\epsilon_{\rm PL}\) as a function of \(H_{S}\) changes little for faint stars with \(H_{S}>16\), but are higher for bright stars with \(H_{S}<16\) as indicated by the lines in Figure 11. The fraction of bright sources with \(H_{S}<16\) in microlensing events, which are higher \(\epsilon_{\rm PL}\), is \(\sim 20\%\) and \(\sim 7\%\), at \((l,b)=(0.125^{\circ},-0.125^{\circ})\) and \((l,b)=(0.125^{\circ},-2.625^{\circ})\), respectively. Therefore, the dependence of \(\epsilon_{\rm PL}\) on Galactic coordinates is minimized by the combination of the large fraction of short \(t_{\rm E}\) events, which work to decrease mean detection efficiency, and the large fraction of bright stars, which work to increase mean detection efficiency, in microlenisng events at the Galactic center. 
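Both detection efficiencies, Equations (15) and (16), are cross-section-weighted ratios in which each simulated event contributes \(2\mu_{\rm rel}\theta_{\rm E}\); a minimal sketch of this weighting is given below, with made-up arrays and boolean flags standing in for the outcomes of the criteria in Sections 3.4.1 and 3.4.2.

```python
import numpy as np

def weighted_efficiency(mu_rel, theta_e, selected, reference):
    """Cross-section-weighted efficiency: the sum of 2*mu_rel*theta_E over the
    'selected' events divided by the same sum over the 'reference' events
    (Eq. 15: detected-microlensing / all-simulated;
     Eq. 16: detected-planet / detected-microlensing)."""
    w = 2.0 * mu_rel * theta_e
    return w[selected].sum() / w[reference].sum()

# Made-up example with five simulated events:
mu_rel  = np.array([5.0, 7.0, 3.0, 9.0, 6.0])          # mas/yr
theta_e = np.array([0.3, 0.5, 0.2, 0.8, 0.4])          # mas
is_ml   = np.array([True, True, False, True, True])    # passed Sec. 3.4.1
is_pl   = np.array([False, True, False, True, False])  # passed Sec. 3.4.2
all_ev  = np.ones(5, dtype=bool)

eps_ml = weighted_efficiency(mu_rel, theta_e, is_ml, all_ev)   # Eq. (15)
eps_pl = weighted_efficiency(mu_rel, theta_e, is_pl, is_ml)    # Eq. (16)
print(eps_ml, eps_pl)
```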
### The Number of Detected Planets We calculate the number of the detectable planets per square degree per year, \(N_{\rm PL}(l,b)\), by Equation (12). We use the Cassan et al. (2012) mass function of planets beyond snow-line as modified by Penny et al. (2019), which shows planet frequency per decade of mass and semi-major axis by using planets detected via microlensing. Because Cassan et al. (2012) did not detect any planets with a mass less than 5 \(M_{\oplus}\), we decided to use a Figure 12: The detection efficiency of microlensing events depending on \(t_{\rm E}\). The solid and dotted lines show the detection efficiency away from the Galactic center, \((l,b)=(0.125,-2.625)\), and at the Galactic center, \((l,b)=(0.125,-0.125)\), respectively. The detection efficiency with 16, 32, 48, and 96 minute cadences are shown in red, green, blue, and black, respectively. constant value, \(\sim\) two planets per dex\({}^{2}\), below 5 \(M_{\oplus}\) following Henderson et al. (2014) and Penny et al. (2013, 2019). The mass function finally used can be stated as, \[f_{p}[\log(a),\log(M_{p})]\equiv\frac{d^{2}N}{d\log(a)d\log(M_{p})} \tag{17}\] \[=\begin{cases}0.24\ \text{dex}^{-2}\left(\frac{M_{p}}{95M_{\oplus}} \right)^{-0.73}&\text{if }M_{p}\geq 5M_{\oplus},\\ 2\ \text{dex}^{-2}&\text{if }M_{p}<5M_{\oplus}.\end{cases}\] Figure 15 shows the planet detection maps computed using Equation (12) along each line of sight. According to Figure 15, the mean number of planets detected with a 16 minute cadence in the region \(-0.75<b<0.5\) is \(\sim 1.6\) events per square degree, which is \(\sim 41\%\) and \(\sim 18\%\) lower than that in the region \(-2.0<b<-0.75\) and \(-3.25<b<-2.0\), respectively. Compared to the planet detections in the same region with a 16 minute cadence, the yields are \(\sim 31\%,\ 46\%,\ 70\%\) lower with 32, 48, 96 minute cadences, respectively. The planet detection map with a 16 minute cadence (upper left panel in Figure 15) is used to determine the order of the observation fields in the next section. The field numbers are ranked by the high expectation number of planet detections summed across each PRIME FOV. We investigate the impact of assuming other planet frequencies via microlensing as given in Suzuki et al. (2016) and Shvartzvald et al. (2016). Figure 21 in Penny et al. (2019) shows a comparison of modified planet frequency based on Cassan et al. (2012) to the latest measurements of mass-ratio function by microlensing surveys (Suzuki et al., 2016; Shvartzvald et al., 2016). They assumed a \(0.5M_{\odot}\) host star to convert mass-ratio to planet mass. The frequencies of low-mass plan Figure 13: Microlensing detection maps along each line of sight. Each plot shows the number of detections with 16, 32, 48, and 96 minute cadences. This figure is obtained by multiplying star counts, \(N_{\text{source}}\), (Figure 5), event rate, \(\Gamma_{\text{source}}\), (Figure 7) and mean detection efficiency of microlensing events, \(\epsilon_{\text{ML}}\), (Figure 9). ets (\(M_{p}\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}30M_{\oplus}\)) obtained in Suzuki et al. (2016) are lower than the modified planet frequency, which suggests lower yields of low-mass planets. However, the frequency of the Earth-mass planets is still not well understood owing to the lack of low-mass planets in the statistical analyses. The frequencies of high-mass planets (\(3000<M_{p}/M_{\oplus}<10000\)) obtained in Shvartzvald et al. 
(2016) are higher than the modified planet frequency, which suggests that the modified planet distributions underestimate planet yields for high-mass planets. Figure 14: The detection efficiency of planetary signatures, \(\epsilon_{\rm PL}(M_{p})\) depending on planet mass, which are obtained by taking the average of all 875 fields and are summed across semi-major axis, \(0.3<a<30\) au. Red, green, blue, and black color plots shows detection efficiency with 16, 32, 48, and 96 minute cadences, respectively. Figure 15: Planet detection maps along each line of sight. Each plot shows the number of detections with 16, 32, 48, and 96 minute cadences. This figure is obtained by multiplying the number of microlensing detections, \(N_{\rm ML}(l,b)\), (Figure 13) and the mean detection efficiency of planets, \(\epsilon_{\rm PL}\), which is obtained by the averaging over all fields, over mass of \(0.1<M_{p}<10^{5}M_{\oplus}\), and semi-major axis of \(0.3<a<30\) au and corrected by a modified cool-planet frequency based on Penny et al. (2019). The planet detection map with a 16 minute cadence (upper left panel) is used to determine the order of the observation fields to in Section 5.1. Each white square shows a 1.45 deg\({}^{2}\) FOV field. The field numbers are ranked by the expected number of planet detections summed across each square. ## 5 Observation strategies and Yields Now that we have the expected number of microlensing events and planets as a function of Galactic coordinate and observation cadence, we are finally ready for discussing the PRIME survey strategy. In this section, we define four observation strategies and calculate both microlensing yields and planet yields depending on each observation strategy. ### Observation fields and strategies We divide our simulation fields (\(|b|\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}2^{\circ}\), \(|l|\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}4^{\circ}\)) into 35 observation fields according to the size of the PRIME FOV and calculate the total number of planets expected to be detected in each observation field. Then the observation field numbers are ranked in order of these total number of detections (upper left panel in Figure 15). Because the number of observation fields we can observe is determined by the observation cadence, we define four strategies as following and compare the planet yields among these four strategies: **S1:**: 6 fields (F1-F6) with a 16 minute cadence **S2:**: 12 fields (F1-F12) with a 32 minute cadence **S3:**: 18 fields (F1-F18) with a 48 minute cadence **S4:**: 18 fields (F1-F18) with a hybrid cadence (16min cadence for F1-F3, 48min cadence for F4-6, 96min cadence for the other 12 fields), where we assumed that it takes 160 secs in total to observe a field (exposure + overheads) to calculate the cadence. Figure 16 shows all the 18 fields (F1-F18) considered here as well as which fields are observed by each strategy. As shown in the figure, the S1, S2, and S3 strategies each have different survey regions and monitor all the fields in each region equally. The S4 strategy has the same survey region as S3, but each field is monitored with different cadence. We call S4 a hybrid strategy. We consider these different strategies because there is a trade-off between the number of fields and frequency of observations. On the one hand, an increase of the number of fields allows us to monitor more sources, which will yield a lot of microlensing events. 
On the other hand, a higher cadence observation has a higher sensitivity to low-mass planets, because the timescales of the planetary signature scales with \(\sqrt{q}t_{\rm E}\). The typical timescales of planetary signatures for Jupiter-mass planets and Earth-mass planets are a few days and a few hours, respectively. Thus high cadence observations are required in order to detect Earth-mass planets, and it is unclear which strategy yields planet discoveries most efficiently including small mass planets without doing a simulation. However the following concerns caused by observations with a lower cadence are not considered in this paper. Lower cadence observations make it more difficult to measure the source radius crossing time, \(\theta_{*}(\equiv\rho t_{\rm E})\), and therefore \(\theta_{\rm E}\). So it is more challenging to measure host and planet masses either by a combination of \(\theta_{\rm E}\) and \(\pi_{\rm E}\) measurements (as in Muraki et al., 2011) or with the color dependent centroid shift (Bennett et al., 2006; Dong et al., 2009). Note that this paper is primarily concerned with the search for an optimal observation strategy with the goal of increasing planet yields to measure the planet frequency in the inner Galactic bulge. However we will discuss other observation strategies in Section 6.1, including a uniform survey that monitors a large contiguous area around the inner Galactic bulge, in order to measure the NIR event rate map to help optimize the choice of \(Roman\) microlenisng survey fields. ### Yields Table 3 shows our estimation of the number of microlensing events and the number of planets detected by the PRIME microlensing survey assuming the Cassan et al. (2012) mass function as modified by Penny et al. (2019) (Equation 17) over a certain mass range. The total number of microlensing events detected are \(\sim 2300\), \(3400\), \(4100\), and \(3900\), for the S1, S2, S3, and S4 strategies, respectively. The impact of increasing the number of sources by observing more fields is more significant than the impact of decreasing the detection efficiencies by observing with lower cadence. In Figure 17, we plot the planet detection rate per dex for four observation strategies, calculated by the sum of the semi-major axis over \(0.3<a<30\) au and the sum of the survey area (\(8.7-26.2\) deg\({}^{2}\)) shown in Table 3). In order to detect low mass planets, high cadence observations are required (S1), while in order to detect high mass planets, observing a larger number of fields is more important than observing with a higher cadence (S2 and S3). When we use a hybrid observation cadences (S4), it is possible to detect both low mass planets and high mass planets. The lower panel in Figure 17 shows the detection rates of each strategy relative to that of S4. As the result, we predict that PRIME will discover \(42-52\) planets (\(1-2\) planets with \(M_{p}\leq M_{\oplus}\), \(22-25\) planets with mass \(1M_{\oplus}<M_{p}\leq 100M_{\oplus}\), \(19-25\) planets \(100M_{\oplus}<M_{p}\leq 10000M_{\oplus}\)), per year depending on each observation strategy. Figure 16: Field locations for the PRIME microlensing survey for each observation strategy considered in this work, plotted over the planet detection map with a 16 minute cadence. Top and middle panels show the observation strategies, S1–S4 described in Section 5.1. The bottom panel shows the spatially uniform survey including the Galactic center and the Galactic plane described in Section 6.1. 
The field numbers are ranked by their expectation of planet detections (Figure 15). Each square shows a 1.45 deg\({}^{2}\) FOV field, where the red, green, blue, and white indicate the cadences of 16, 32, 48, and 96 minutes, respectively. The gray region shows the assumed field placement for \(Roman\) microlensing survey (Penny et al., 2019). Figure 17: Upper panel shows the number of planet detections per dex as a function of planet mass, \(M_{p}\). These plots are obtained by integrating over the semi-major axis \(0.3<a<30\) au and over the survey area (\(8.7-26.2\) deg\({}^{2}\)) shown in Figure 16, assuming the Cassan et al. (2012) mass function as modified by Penny et al. (2019). The red, green, blue, and orange plots show the detection rate for the observation strategy S1, S2, S3, and S4 described in Section 5.1. The pink plot shows the detections when we conduct a spatially uniform survey described in Section 6.1. The lower panel shows the detections of each strategy relative to that of S4. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Strategy & S1 & S2 & S3 & S4 & Uniform \\ Total field number & 6 & 12 & 18 & 18 & 18 \\ Area(deg\({}^{2}\)) & 8.7 & 17.5 & 26.2 & 26.2 & 26.2 \\ Mass(\(M_{\oplus}\)) & & & & & \\ \hline \(0.1<M_{p}\leq 1.0\) & 1.8 & 1.3 & 1.3 & 1.7 & 1.6 \\ \(1.0<M_{p}\leq 10\) & 8.7 & 9.5 & 9.2 & 9.0 & 8.4 \\ \(10<M_{p}\leq 100\) & 13.0 & 14.9 & 16.0 & 15.0 & 13.8 \\ \(100<M_{p}\leq 1000\) & 11.3 & 13.9 & 15.0 & 14.1 & 12.8 \\ \(1000<M_{p}\leq 10000\) & 7.5 & 9.6 & 10.4 & 10.0 & 9.0 \\ \hline Total (\(10^{-1}-10^{4}M_{\oplus}\)) & 42.4 & 49.1 & 51.8 & 49.8 & 45.6 \\ Total Microlensing & \(\sim 2300\) & \(\sim 3400\) & \(\sim 4100\) & \(\sim 3900\) & \(\sim 3400\) \\ \hline \end{tabular} \end{table} Table 3: Best-estimate Planet Yields per year by the PRIME microlensing survey ## 6 Discussion ### How to decide the optimal survey strategy? The final survey strategy will vary according to the interests of several sciences: to reveal the planet frequency around the Galactic center, to optimize the \(Roman\) microlenisng survey fields, to characterize the lens and planet parameters by follow-up observations. We will discuss each of these science interests in detail. In this paper, we focus on revealing the demography of cold planets down to Earth mass beyond the snow-line toward the inner Galactic bulge. In order to achieve that goal, it is required to optimize the observation strategy and to increase both the number of planets and the range of mass comparing four observation strategies, we find that it is possible to detect both low mass planets and high mass planets by an observation strategy with a hybrid observation cadence, S4. We predict that PRIME will discover up to \(\sim 3900\) microlensing events and \(\sim 50\) planets per year by using S4. However another important goal of the PRIME is the optimization of the \(Roman\) microlensing survey fields by measuring the NIR microlenisng event rate map and \(t_{\rm E}\) distributions. In order to achieve that goal, it is required to conduct a spatially uniform survey toward the inner Galactic bulge. We investigate how the planet yields change with the uniform survey strategy. The bottom panel in Figure 16 shows the considered field locations when we conduct a uniform survey including the Galactic center and the Galactic plane. Here, we use a hybrid observation cadence and the total number of fields is 18, which are the same as in observation strategy S4. 
Table 3 shows our estimation of the number of microlensing events and the planet detections. The result shows \(\sim 6-10\%\) fewer planet discoveries depending on the planet mass and \(\sim 13\%\) fewer microlensing discoveries compared to the observation strategy, S4. Therefore, the uniform survey not only allows for the detection of a relatively large number of planetary signals including low-mass planets to measure the planet frequency toward the Galactic inner bulge, but also allows for the measurement of event rates across the Galactic center and Galactic plane to help optimize \(Roman\)'s observation strategy. NIR or optical follow-up observations will help to constrain the microlensing and physical parameters of planetary systems. In particular, color measurements of microlenisng events will enable us to determine \(\theta_{\rm E}\), which constrains the lens mass and distance. Differences in extinction can affect field selection because they affect whether color measurements can be performed or not, but field selection by extinction in other bands is outside the scope this work. ### Inner Galactic bulge survey by PRIME In this study, we use KGM, which is a population synthesis model optimized for the inner Galactic bulge that includes a nuclear stellar disk model. As shown in Section 4.1, the luminosity function at the low mass stars is not in agreement with measurements. It is also known that there is the underestimation of extinction values in the Galactic central region which is shown in Koshimoto et al. (in prep). Observations of the star counts, event rate, and detection efficiencies will drive improvements in Galactic models. Although previous NIR observations towards the inner Galactic bulge such as the VVV survey have revealed detailed structure of the Galactic bar/bulge (e.g. Wegg and Gerhard, 2013; Wegg et al., 2015), the formation history and structure of our Galaxy is a long-standing challenge (Shen and Zheng, 2020). To constrain the dynamical history and evolution of Galaxy, accurate measurements of a stellar 6-D phase space distribution and stellar properties in the inner bulge region will be provided by the future time domain survey such as \(Roman\), the _Japan Astrometry Satellite Mission for INfrared Exploration_(_JAASMINE_; Gouda, 2012) and _GaiaNIR_(Hobbs et al., 2016, 2019). Prior to these surveys, a time domain survey with high cadence using PRIME will play an important role in providing new insights into the formation history and structure of our Galaxy. In addition to aspects of microlensing, the time domain data by the PRIME microlensing survey will provide useful information in studies of Galactic structure, through variable stars such as eclipsing binaries, pulsating RR Lyrae, and Cepheids (e.g. Pietrukowicz et al., 2020; Botan et al., 2021). ## 7 Summary We present the expected microlensing and planet yields for four survey strategies using the PRIME instrument. In order to maximize the number of planet detections and the range of masses, we need to optimize the number of the observation fields and observation cadence, which are in a trade-off relationship. Assuming the an underlying planet population of one planet per square dex per star and the Cassan et al. (2012) mass function of planets beyond snow-line as modified by Penny et al. (2019), we predict that PRIME will discover \(2300-4100\) microlensing events and \(42-52\) planets per year depending on the observation strategy. 
In particular, the observation strategy with a hybrid observation cadence (S4) makes it possible to detect both low mass planets and high mass planets. By using S4, we predict that PRIME will discover up to \(\sim 3900\) microlensing events and \(\sim 50\) planets per year (\(\sim 1.7\) planets with \(M_{p}\leq 1M_{\oplus}\), \(\sim 24\) planets with mass \(1M_{\oplus}<M_{p}\leq 100M_{\oplus}\), and \(\sim 24\) planets with mass \(100M_{\oplus}<M_{p}\leq 10000M_{\oplus}\)). In addition, the spatially uniform survey not only allows for the detection of a relatively large number of planetary signals including low-mass planets, but also allows for the measurement of event rates across the Galactic center and Galactic plane. Software: genstars (Koshimoto, 2022; Koshimoto et al. in prep.), MulensModel (Poleski and Yee, 2019), VBBinaryLensing (Bozza, 2010; Bozza et al., 2018) We thank Kento Masuda for valuable comments and discussions. Work by I.K. is supported by JSPS KAKENHI Grant Number 20J20633. Work by T.S. is supported by JSPS KAKENHI Grant Numbers 23103002, 24253004, and 26247023. Work by N.K. is supported by the JSPS overseas research fellowship. Work by D.S. is supported by JSPS KAKENHI Grant Numbers 19KK0082 and 20H04754. Work by D.P.B. is supported by NASA through grant NASA-80NSSC18K0274. This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia.
2304.12073
The Game Chromatic Number of Complete Multipartite Graphs with No Singletons
In this paper we investigate the game chromatic number for complete multipartite graphs. We devise several strategies for Alice, and one strategy for Bob, and we prove their optimality in all complete multipartite graphs with no singletons. All the strategies presented are computable in linear time, and the values of the game chromatic number depend directly only on the number and the sizes of sets in the partition.
Paweł Obszarski, Krzysztof Turowski, Hubert Zięba
2023-04-24T13:10:55Z
http://arxiv.org/abs/2304.12073v2
# The Game Chromatic Number of Complete Multipartite Graphs with No Singletons ###### Abstract In this paper we investigate the game chromatic number for complete multipartite graphs. We devise several strategies for Alice, and one strategy for Bob, and we prove their optimality in all complete multipartite graphs with no singletons. All the strategies presented are computable in linear time, and the values of the game chromatic number depend directly only on the number and the sizes of sets in the partition. keywords: chromatic games, game chromatic number, complete multipartite graphs + Footnote †: journal: ## 1 Introduction The origins of the map-coloring game can be traced to Scientific American 1981 article [1], but it has been analyzed extensively only since it was reinvented by Bodlaender a decade later [2] as the game played on graphs. As for today, there are many generalizations and variations of the graph coloring game, depending on what exactly is colored and what are the additional constraints on the graph structure, admissible coloring, etc. See for example the survey [3], covering some variants, techniques, and results for these problems. The standard version of the graph coloring game is played between Alice and Bob on a graph \(G\) with a set \(C\) of \(k\) colors, with \(k\) fixed. We say that color \(c\in C\) is _legal_ for a vertex \(v\in V(G)\) if no neighbor of \(v\) is colored with \(c\). The game proceeds with Alice and Bob taking subsequent turns and coloring any uncolored vertex with a legal color until the entire graph is colored or there are no legal colors for all uncolored vertices. Alice wins in the former case and Bob in the latter. The game chromatic number of a graph \(G\), denoted by \(\chi_{g}(G)\), is defined as the minimum \(k\) such that there exists a winning strategy for Alice, that is, it is certain that the entire graph will be colored regardless of strategy of Bob. This parameter is well-defined because Alice always wins if \(C\) contains at least as many colors as there are vertices of \(G\). ### Previous results The graph coloring game was studied by many authors. In the case of forests \(\mathcal{F}\) we know that \(\max\{\chi_{g}(G)\colon G\in\mathcal{F}\}\leq 4\), and that this bound is tight [4]. There is also known a polynomial algorithm for deciding whether \(\chi_{g}(F)=2\) for a given forest \(F\)[5] or finding the exact value of \(\chi_{g}(G)\) for caterpillars [6], however the computational complexity of computing the value of \(\chi_{g}(F)\) is still unknown. For the class of planar graphs \(\mathcal{P}\) it was proved in [7; 8] that \(8\leq\max\{\chi_{g}(G)\colon G\in\mathcal{P}\}\leq 17\). For the class of outerplanar graphs \(\mathcal{OP}\), it was shown in [9] that \(6\leq\max\{\chi_{g}(G)\colon G\in\mathcal{OP}\}\leq 7\). For \(k\)-trees \(\mathcal{KT}\) it is the case that \(\max\{\chi_{g}(G)\colon G\in\mathcal{OP}\}=3k+2\) for \(k\geq 2\) (see [10]). Similarly, it was shown in [11] that \(\max\{\chi_{g}(G)\colon G\in\mathcal{C}\}=5\) for cacti graphs \(\mathcal{C}\). Finally, for interval graphs it was proved in [4] that \(\chi_{g}(G)\leq 3\omega(G)-2\), where \(\omega(G)\) is the clique number of \(G\), and that there are examples of graphs with \(\chi_{g}(G)\geq 2\omega(G)-1\). The bounds on the game chromatic number were also studied for various products graphs, most notably Cartesian product graphs [12; 13; 14; 15], direct product graphs [16], and most recently strong product graphs [17]. 
In another line of research, in [18] the value of the game chromatic number for any graph can be bounded by the function acyclic chromatic number \(\chi_{a}(G)\), i.e. the minimum number of colors such that there exists a coloring where each pair of colors induce an acyclic graph in \(G\). Moreover, for random graphs \(G_{n,p}\) it was proved in [19; 20; 21] that with high probability the game chromatic number is within a multiplicative range of the chromatic number of the graph \(\chi(G)\). ### Our results As it is clear from the above survey, the graph coloring game was mostly studied for certain classes of graphs, often very sparse and with a small value of the chromatic number. In this paper we go in a different direction and focus on a dense class of graphs, but still easy to determine the value of the chromatic number, i.e. complete multipartite graphs. This class was initially investigated in passing by Dunn in [22], where a theorem was proved that when all parts have identical size: **Theorem 1**.: _[_22_, Theorem 1]_ _For any complete \(k\)-partite graph \(K_{r,\ldots,r}\) it holds that:_ \[\chi_{g}(K_{r,\ldots,r})=\begin{cases}r&\text{if $k=1$,}\\ 2r-2&\text{if $k=3$ and $r\geq 3$,}\\ 2r-1&\text{otherwise.}\end{cases}\] There was also a claim [23] in which there was a formula (although without any proof) for complete \(k\)-partite graphs \(K_{r_{1},\ldots,r_{k}}\) with either all \(r_{i}\in\{1,2\}\) or all \(r_{i}\in\{1,3\}\) - accompanied with a statement (though disappointingly without any reference to any proof or formula) that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\) is known for all graphs with all \(r_{i}\in\{1,2,3\}\). Thus, the problem still remained open for graphs with where some \(r_{i}\geq 4\) and not all \(r_{i}\) are equal. In this paper, we make a progress towards a solution to the problem for all complete \(k\)-partite graphs. We provide several strategies for Alice and a single optimal strategy for Bob, and we prove their optimality for multipartite graphs with no singletons (see Table 1 for a summary). All our strategies are computable in polynomial time and additionally, they all lead to simple closed formulas for the game chromatic number in terms depending on the structure of the graphs. ### Notation and concepts Throughout this paper we denote by \(K_{r_{1},\ldots,r_{k}}\) a complete \(k\)-partite graph on \(n=\sum_{i=1}^{k}r_{i}\) vertices with partitions of sizes \(r_{1}\), \(r_{2}\),..., \(r_{k}\). Without loss of generality, we assume that \(r_{1}\geq r_{2}\geq\ldots\geq r_{k}\). For convenience we also assume that we pick a partition with minimum \(k\), i.e. we consider complete graphs as \(K_{r}\), but not as \(K_{1,1,\ldots,1}\). 
This ensures that if \(k\geq 2\), then it follows that \(r_{1}\geq 2\) \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(k\) & \(r_{i}\) & \(n\) & \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\) & Upper bound & Lower bound \\ \hline 1 & any & any & 1 & obvious \\ \hline 2 & \(r_{2}=1\) & any & 2 & obvious \\ \hline 2 & \(r_{2}\geq 2\) & any & 3 & Corollary 3 & Corollary 10 \\ \hline \multirow{2}{*}{\(\geq 3\)} & \(\exists_{j}r_{j}=3\) & \multirow{2}{*}{even} & \multirow{2}{*}{\(2k-2\)} & Corollary 5 & Corollary 14 \\ & \(r_{k}=2\) & & & \\ \hline \multirow{2}{*}{\(\geq 3\)} & \(\forall_{j}r_{j}\neq 3\) & \multirow{2}{*}{even} & \multirow{2}{*}{\(2k-1\)} & Corollary 3 & Corollary 12 \\ & \(r_{k}=2\) & & & \\ \hline \multirow{2}{*}{\(\geq 3\)} & \(\exists_{j}r_{j}=3\) & \multirow{2}{*}{odd} & \multirow{2}{*}{\(\min\left\{2k-2,\sum_{j}\left\lceil\frac{r_{j}}{2}\right\rceil\right\}\)} & Corollary 5 & Corollary 14 \\ & \(r_{k}=2\) & & & \\ \hline \multirow{2}{*}{\(\geq 3\)} & \(\forall_{j}r_{j}\neq 3\) & \multirow{2}{*}{odd} & \multirow{2}{*}{\(\min\left\{2k-1,\sum_{j}\left\lceil\frac{r_{j}}{2}\right\rceil\right\}\)} & Corollary 3 & Corollary 12 \\ & \(r_{k}=2\) & & & \\ \hline \multirow{2}{*}{\(\geq 3\)} & \(r_{k}=3\) & any & \(2k-2\) & Corollary 5 & Corollary 14 \\ & \(r_{k}\geq 4\) & & & \\ \hline \end{tabular} \end{table} Table 1: The summary of our results Throughout the proofs we denote by \(V_{1}\), \(V_{2}\),..., \(V_{k}\) a partition of \(V(K_{r_{1},\ldots,r_{k}})\) into disjoint independent sets of cardinalities \(r_{i}\) for \(i=1,\ldots k\). Let us also denote by \(l_{j}=|\{i\colon r_{i}=j:i=1,\ldots,k\}|\) the count of sets \(V_{i}\) of size exactly \(j\). Finally, we can assume that the new colors appear in the game in order 1, 2,.... Throughout the paper, we will call a set with all vertices colored, no vertices colored, or some (but not all) vertices colored a _fully colored_, _uncolored_, or _partially colored set_, respectively. We will also say that a player _started_ coloring a set \(V_{i}\) or, equivalently, that \(V_{i}\) was started by that player if he or she colored the first vertex in this set. Let us also denote the move after which every set \(V_{i}\) has at least one vertex colored as the _fixing move_. This concept is crucial in our investigations of the effectiveness of the strategies we proposed for Alice and Bob. Intuitively, if Alice has a strategy such that she and Bob always use only at most \(l\) colors up to a moment when all \(V_{i}\) are partially or fully colored (i.e. up to the fixing move), then a set of \(l\) colors is sufficient to color the whole graph. Bob always can be forced to repeat the colors from this point on since by the structure of \(K_{r_{1},\ldots,r_{k}}\) if color \(c\) is used in any set \(V_{i}\), then it remains feasible for all other vertices in this set for the rest of the game - and it is forbidden for all other vertices. Therefore \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\leq l\). On the other hand, if Bob has a strategy so the players always use \(l\) colors while there still remains at least one uncolored set \(V_{i}\) (i.e. before the fixing move), then \(\chi_{g}(K_{r_{1},\ldots,r_{k}})>l\). ## 2 Strategies for Alice First, we will introduce two simple, but powerful strategies for Alice: **Definition 1**: _Let \((A1)\) be the following strategy for Alice: in any move, pick any vertex in any uncolored set \(V_{i}\) and assign to it a new color. 
Otherwise, pick any vertex in any partially colored set \(V_{i}\) and use a color that was already used for some vertex in \(V_{i}\)._ **Definition 2**: _Let \((A2)\) be the following strategy for Alice for \(K_{r_{1},\ldots,r_{k}}\) with \(k\geq 2\) and \(r_{j}=3\) for some particular (fixed) \(j\):_ 1. _in the first move pick an uncolored vertex from_ \(V_{j}\) _and assigns to it a new color,_ 2. _otherwise, if Bob played in his last move at a vertex in_ \(V_{j}\)_, then pick another uncolored vertex from_ \(V_{j}\) _and repeat a color just used by Bob,_ 3. _otherwise, if there is an uncolored set_ \(V_{i}\)_, then pick any uncolored vertex from_ \(V_{i}\) _and assign a new color to it,_ 4. _otherwise, pick any uncolored vertex from a partially colored set_ \(V_{i}\) _and assign to it a color that was already used for some vertex in_ \(V_{i}\)_._ Now we proceed to simple bounds on the number of colors necessary to finish the game on the whole graph: **Lemma 2**.: _If Alice uses \((A1)\), then she and Bob can color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set of at least \(2k-1\) colors._ Proof.: Clearly, after at most \(2k-1\) moves in total (\(k\) by Alice, \(k-1\) by Bob) every \(V_{i}\) contains at least one colored vertex. If a color set contains at least \(2k-1\) colors, then Alice and Bob are always able to finish coloring the rest of the graph, as they can always reuse colors that already appear in the same \(V_{i}\). **Corollary 3**.: _For any graph \(K_{r_{1},\ldots,r_{k}}\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\leqslant 2k-1\)._ **Lemma 4**.: _If \(K_{r_{1},\ldots,r_{k}}\) has \(k\geqslant 3\) and \(r_{j}=3\) for some \(j=1,2,\ldots k\) and Alice uses \((A2)\), then she and Bob can color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set of at least \(2k-2\) colors._ Proof.: The crucial observation is that before the fixing move, Bob had to start coloring at least one of the sets \(V_{i}\): * and therefore Bob is his second move was forced to pick a vertex in an uncolored set \(V_{i}\), * but then again he was the first player to pick a vertex from some previously uncolored set. Regardless of Bob's choice, it means that before the fixing move Alice started coloring at most \(k-2\) of the sets \(V_{i}\) as one was started by Bob and another has to be yet uncolored. Moreover, by the definition of the strategy, \(V_{j}\) was the only set in which she played more than once before the fixing move - so the total number of her moves before the fixing move is at most \(k-1\). Clearly, Bob too could not make more than \(k-1\) moves before the fixing move occurred. Therefore the number of colors used before the fixing move cannot exceed the sum of \(k-2\) (upper bound on the number of new colors introduced by Alice before the fixing move) and \(k-1\) (upper bound on the number of new colors introduced by Bob before the fixing move) - that is, it is at most equal to \(2k-3\). Therefore any set of at least \(2k-2\) is sufficient to make the fixing move and to complete the whole coloring according to the rules, as the players can always reuse colors that already appear in the same \(V_{i}\). **Corollary 5**.: _For any graph \(K_{r_{1},\ldots,r_{k}}\) with \(k\geqslant 3\) and \(r_{j}=3\) for some \(j=1,2,\ldots k\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\leqslant 2k-2\)._ Now we introduce another strategy for complete multipartite graphs with an odd number of vertices. 
It is particularly well-suited for graphs with many \(r_{i}=2\): **Definition 3**.: Let \((A3)\) be the following strategy for Alice for \(K_{r_{1},\ldots,r_{k}}\) with odd number of vertices \(n\): 1. in the first move pick an uncolored vertex from \(V_{i}\) respective to the smallest odd \(r_{i}\) and assigns to it a new color, 2. otherwise, if Bob played in his last move at a vertex in \(V_{i}\) and \(V_{i}\) is partially colored, then pick another uncolored vertex from \(V_{i}\) and repeat a color just used by Bob, 3. otherwise, if there is a partially colored set \(V_{i}\), then pick any uncolored vertex from \(V_{i}\) and assign to it a color that was already used for some vertex in \(V_{i}\), 4. otherwise, pick any uncolored vertex from an uncolored set \(V_{i}\) with the smallest odd \(r_{i}\) and assign a new color to it. Note that the strategy does not specify what to do if there are no partially colored sets and no odd uncolored sets. This is a deliberate decision on our part since we can prove that such a situation cannot occur: **Lemma 6**.: _Let \(K_{r_{1},\ldots,r_{k}}\) be a graph with an odd number of vertices \(n\). Then \((A3)\) is a correct strategy for Alice, i.e. it does specify a valid move in every possible state of the game. In particular, before her move there is always at least one partially colored set or one odd uncolored set._ Proof.: Suppose that by using \((A3)\) Alice reached a state of the game when she faces only fully colored sets and even uncolored sets. Then, it means that the total number of moves played is odd, as \(n\) is odd. But if this is the case, then it has to be Bob's move, not Alice's - a contradiction. Now we are ready to proceed with an assessment of the quality of this strategy: **Lemma 7**.: _Let \(K_{r_{1},\ldots,r_{k}}\) be a graph with an odd \(n\) and \(k\geq 3\). If Alice uses \((A3)\), then she and Bob can color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set of at least \(\sum_{i}[\frac{r_{i}}{2}]\) colors._ Proof.: Observe that using \((A3)\) ensures that Alice never starts coloring a set with an even number of vertices. Now, let \(V_{j}\) be the set in which the fixing move is played. For any \(i\neq j\) we know that there are used at most \(\lceil\frac{r_{i}}{2}\rceil\) colors: * therefore, the total number of colors used in \(V_{i}\) does not exceed \(\frac{r_{i}}{2}\), * so he used at most \(\frac{r_{i}+1}{2}\) different colors (and Alice always repeated already used ones), * so they used at most \(\frac{r_{i}+1}{2}\) different colors. In total, before the fixing move we need at most \(\sum_{i\neq j}[\frac{r_{i}}{2}]\) colors for all vertices in all sets other than \(V_{j}\). And if there are available at least \(\sum_{i\neq j}[\frac{r_{i}}{2}]+1\) colors, then Alice or Bob can always play the fixing move and finish the whole coloring. However, since \(\lceil\frac{r_{i}}{2}\rceil\geq 1\), we know that the total number of colors can be always bounded also by \(\sum_{i}[\frac{r_{i}}{2}]\). **Corollary 8**.: _For any graph \(K_{r_{1},\ldots,r_{k}}\) with odd \(n\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\leq\sum_{i}[\frac{r_{i}}{2}]\)._ ## 3 Strategy for Bob Surprisingly, there is only one main strategy for Bob: **Definition 4**.: Let \((B1)\) be the following strategy for Bob for \(K_{r_{1},\ldots,r_{k}}\): 1. if Alice picked a vertex in \(V_{i}\) and \(V_{i}\) is partially colored, pick any uncolored vertex from \(V_{i}\), 2. 
otherwise, if Alice picked a vertex in \(V_{i}\), it is now fully colored and there is any partially colored set, then choose any partially colored \(V_{j}\) with the smallest number of uncolored vertices and pick any uncolored vertex from \(V_{j}\), 3. otherwise, if Alice picked a vertex in \(V_{i}\), it is now fully colored, and there is are uncolored set, then choose any vertex from the largest uncolored \(V_{j}\). Assign to a chosen vertex a new color if you can. Otherwise, reuse a color that already appeared for some other vertex in the respective set of the chosen vertex. Now we proceed with the counterparts of Corollaries 3, 5 and 8, establishing the optimality of the respective strategies for Alice and Bob. **Lemma 9**.: _If \(K_{r_{1},\ldots,r_{k}}\) has \(r_{k}\geq 4\) and Bob uses \((B1)\), then he and Alice cannot color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set with less than \(2k-1\) colors._ Proof.: Note that using this strategy we can ensure that at any time there can exist only at most one set \(V_{i}\) that has two properties: \((a)\) it has exactly one vertex colored, and \((b)\) it was colored only by Bob. And if such \(V_{i}\) does exist, then by the definition of the strategy there has to be also somewhere in the graph a fully colored set \(V_{j}\) with its first vertex colored by Alice. Because if it were not the case, then Bob would never deliberately start coloring any set. Note that such \(V_{j}\) would have been not only started by Alice, but also the fact that \(r_{j}\geq r_{k}\geq 4\) together with Bob's strategy implies that Bob played at least 2 moves there - so in total in \(V_{j}\) the players would use at least 3 colors. Moreover, after every move by Bob until the fixing move all other partially or fully colored sets \(V_{l}\), \(l\notin\{i,j\}\), are: * either started by Alice with an immediate response by Bob, * or started by Bob with later moves by both Alice and Bob, * or started by Bob with a second move also made by Bob. In all cases, it is obvious that in these sets players used at least 2 distinct colors. Let us now think about a situation just before the fixing move. If Alice makes the fixing move, then there are two possible situations: * either there exists some \(V_{i}\) such that it has only one vertex colored and it was colored only by Bob and therefore players used one color for this set, but also at least 3 colors for the respective \(V_{j}\) by the argument in the first paragraph, and at least 2 colors for all sets \(V_{l}\) (\(l\notin\{i,j\}\)), by the argument above. * and therefore every set is either started by Alice (with an immediate response by Bob), or started by Bob (with at least one more move by Bob since \(r_{j}\geqslant 4\)), in each case in each set there are used at least \(2\) colors. Either way, before the fixing move there were at least \(2k-2\) colors used in total. If this is Bob's move, then all sets but the last one were already fully colored before the fixing move. But since \(r_{i}\geqslant 4\), by his strategy Bob played in all sets but one at least twice, so he himself had to use at least \(2k-2\) colors. Overall, Bob may ensure by using \((B1)\) that at least \(2k-2\) colors were used before the fixing move. Therefore if the color set contains less than \(2k-1\) colors, then there always exists a way to enforce a situation in which we cannot play the fixing move, i.e. legally extend the coloring to the last set, and therefore to the whole graph. 
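Bob's strategy \((B1)\) from Definition 4 is mechanical enough to be expressed directly as code. The sketch below is our own illustrative Python rendering of its move selection; the state encoding (`sizes`, `coloured`, `palette`, `last_alice`) and the tie-breaking by list order are assumptions of ours, not part of the paper.

```python
# A sketch of Bob's strategy (B1) from Definition 4 (our own encoding, not the paper's).
#   sizes[i]    : r_i, the size of part V_i
#   coloured[i] : list of colours already placed on vertices of V_i
#   palette     : list of available colours
#   last_alice  : index of the part in which Alice has just played
def bob_move_B1(sizes, coloured, palette, last_alice):
    left = lambda i: sizes[i] - len(coloured[i])
    partially = [j for j in range(len(sizes)) if 0 < len(coloured[j]) < sizes[j]]
    untouched = [j for j in range(len(sizes)) if not coloured[j]]

    if left(last_alice) > 0:                      # rule 1: answer in the same part
        target = last_alice
    elif partially:                               # rule 2: smallest partially coloured part
        target = min(partially, key=left)
    elif untouched:                               # rule 3: largest uncoloured part
        target = max(untouched, key=lambda j: sizes[j])
    else:                                         # nothing left to colour
        return None

    used_anywhere = {c for part in coloured for c in part}
    fresh = [c for c in palette if c not in used_anywhere]
    if fresh:                                     # "assign a new colour if you can"
        return target, fresh[0]
    if coloured[target]:                          # otherwise reuse a colour of V_target
        return target, coloured[target][0]
    return None                                   # no legal colour left: the game is blocked

# Example: K_{4,4,4}; Alice has just completed V_0, and V_1 is partially coloured.
print(bob_move_B1([4, 4, 4], [[0, 1, 0, 1], [2], []], list(range(5)), 0))  # -> (1, 3)
```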
**Corollary 10**.: _For \(K_{r_{1},\ldots,r_{k}}\) with \(r_{k}\geqslant 4\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\geqslant 2k-1\) colors._ **Lemma 11**.: _If \(K_{r_{1},\ldots,r_{k}}\) has \(k\geqslant 3\), \(r_{j}\neq 3\), \(r_{j}\geqslant 2\) for all \(j=1,2,\ldots,k\) and Bob uses \((B1)\), then he and Alice cannot color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set with at most \(\min\{2k-2,\sum_{i}[\frac{r_{i}}{2}]-1\}\) colors._ _Moreover, if \(n\) is even, Alice and Bob cannot color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set with at most \(2k-2\) colors._ Proof.: We split the proof into three subcases (1)-(3), depending on the properties of the last set that was first colored by Bob before the fixing move occurred. **Case (1):** There is no such set i.e. Alice started coloring all sets until the fixing move. In this case, using \((B1)\) guarantees that: * if the fixing move is made in \(V_{j}\) with \(r_{j}\geqslant 2\), then before the fixing move Alice used \(k-1\) different colors when she colored the first vertices of first \(k-1\) sets \(V_{j}\) with \(r_{j}\geqslant 2\) and Bob used \(k-1\) different colors in his subsequent moves, plus they both use \(l_{1}\) colors for all sets \(V_{j}\) with \(r_{j}=1\), * if the fixing move is made in \(V_{j}\) with \(r_{j}=1\), then before the fixing move Alice uses \(k-1\) different colors when she colors the first vertices of \(k-l_{1}\) sets with \(r_{j}\geqslant 2\) and Bob uses \(k-l_{1}-1\) different colors in his subsequent moves, plus they both use \(l_{1}-1\) colors for all sets \(V_{j}\) with \(r_{j}=1\) but the one in which the fixing move is played. Therefore, both players used at least \(2k-l_{1}-2\) colors before reaching the fixing move, so it clearly holds that this number of colors is not sufficient to make a legal fixing move. **Case (2):** Assume that there is such a set \(V_{i}\) with \(r_{i}>2\). Note that the strategy implies two facts: that Bob never started coloring any set \(V_{j}\) with \(r_{j}=2\), and when Bob started coloring \(V_{i}\) it had to be that there were no partially colored sets. Moreover: * but then \(r_{j}\geqslant 5\), so there are at least \(\lceil\frac{r_{j}}{2}\rceil\geqslant 3\) distinct colors used there (one by the player starting it and at least \(\lceil\frac{r_{j}}{2}\rceil\) by subsequent moves by Bob due to his strategy), * if Alice started coloring any \(V_{j}\) with \(r_{j}\geqslant 4\) (\(j\neq i\)), then Bob responded in the same set in his next move, thus they used at least 2 different colors there, * in particular, this implies that Bob played at least \(\left\lfloor\frac{r_{j}}{2}\right\rfloor\geqslant 2\) moves there, all using different colors, * since Alice started coloring all \(V_{j}\) with \(r_{j}=2\), by his strategy Bob immediately responded by coloring the other vertex from the same set, so they used 2 distinct colors there. In total, just before the fixing move the players used at least one color for one set (i.e. \(V_{i}\) itself), at least 3 colors for another set (i.e. \(V_{j}\) from the first case above), and at least 2 colors for all other sets of size at least 2 (excluding the set with the fixing move). By this, we arrive at the result that they used at least \(2k-2\) colors while there still remained one uncolored set - which contradicts the possibility of making a legal fixing move. **Case (3):** Suppose we have an appropriate set \(V_{i}\) with \(r_{i}=2\). 
This means that all sets with \(r_{j}\neq 2\) have to be fully colored before Bob started coloring \(V_{i}\). But then we know that by Bob's strategy each set \(V_{j}\) with \(r_{j}>2\) was: * either started by Alice and with at least \(\left\lfloor\frac{r_{j}}{2}\right\rfloor\) moves by Bob, * or started by Bob and with at least \(\left\lceil\frac{r_{j}}{2}\right\rceil\) moves by Bob. In both cases the players use at least \(\left\lceil\frac{r_{j}}{2}\right\rceil\) distinct colors for the vertices in \(V_{j}\). In total, all sets with \(r_{j}>2\) require at least \(\sum_{j:r_{j}>2}\left\lceil\frac{r_{j}}{2}\right\rceil\) distinct colors. Clearly, each set with \(r_{j}=2\) requires at least one color - so to color all but the last one (in which some player makes the fixing move) we need \(\sum_{j:r_{j}=2}\left\lceil\frac{r_{j}}{2}\right\rceil-1\) colors. Therefore, Bob can ensure that at least \[\sum_{j:r_{j}>2}\left\lceil\frac{r_{j}}{2}\right\rceil+\sum_{j:r_{j}=2}\left \lceil\frac{r_{j}}{2}\right\rceil-1=\sum_{j}\left\lceil\frac{r_{j}}{2}\right\rceil-1\] colors are needed before the fixing move. And if the size of the set of colors does not exceed that number, no player can make a legal fixing move. Finally, note that the case (3) occurs only for odd \(n\), as it implies that all \(V_{j}\) with \(r_{j}>2\) were fully colored before Bob played his first move in \(V_{i}\). However, if \(n\) is even, the situation when only sets \(V_{j}\) with \(r_{j}=2\) are uncolored and there are no partially colored sets can only occur before Alice's move. To conclude the proof for odd \(n\) it is sufficient to observe that if in some cases \(c_{1}\) colors are insufficient and in all other cases \(c_{2}\) colors are insufficient to make the legal fixing move, then \(\min\{c_{1},c_{2}\}\) colors are always insufficient. **Corollary 12**.: _For \(K_{r_{1},\ldots,r_{k}}\) with \(k\geqslant 3\) and \(r_{j}\neq 3\), \(r_{j}\geqslant 2\) for all \(j=1,2,\ldots,k\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\geqslant\min\{2k-1,\sum_{i}\left\lceil\frac{r_ {i}}{2}\right\rceil\}\) colors. Additionally, if \(n\) is even, then \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\geqslant 2k-1\)._ **Lemma 13**.: _If \(K_{r_{1},\ldots,r_{k}}\) has \(k\geqslant 3\), \(r_{j}=3\) for some \(j\in\{1,2,\ldots,k\}\) and Bob uses \((B1)\), then he and Alice cannot finish the game on \(K_{r_{1},\ldots,r_{k}}\) using any set with at most \(\min\{2k-3,\sum_{i}\left\lceil\frac{r_{i}}{2}\right\rceil-1\}\) colors._ _Moreover, if \(n\) is even, then Alice and Bob cannot color the whole graph \(K_{r_{1},\ldots,r_{k}}\) using any set with at most \(2k-3\) colors._ Proof.: Similarly as in the previous proof, we distinguish the cases (1)-(2) depending on the size of the last set started by Bob before the fixing move, denoted by \(V_{i}\). **Case (1):** If either \(V_{i}\) does not exist or it exists and its respective \(r_{i}>2\), then we know that before the fixing move Bob started coloring only sets with size greater than 2. Let us call _B-singleton_ a set that has three properties: it is partially colored, it has exactly one vertex colored, and it was colored only by Bob. First, we note that after any move by Bob, there always exists at most one B-singleton. Clearly, after his first move this condition is met. 
Moreover, if it is true after his \(l\)-th move, then: * either Alice starts coloring a new set \(V_{i}\), in which case Bob responds in the same set, so overall no new B-singleton appears, * so only if before his move there are no B-singletons. In both cases, the invariant is also true after Bob's \((l+1)\)-th move. For any other set \(V_{j}\) other than B-singleton and the set in which the fixing move is made, one of these condition holds: * the first vertex was colored by Alice and the second one was colored right after that by Bob, * the first vertex was colored by Bob, then at some point the second vertex was colored by Alice and the third one was colored right after that by Bob, * since it is not a B-singleton and it does not fall under the case above - at some point the second vertex was also colored by Bob. Thus, Bob's strategy guarantees that for every such \(V_{j}\) there are at least 2 distinct colors used. Of course in B-singleton there is exactly one color used. Therefore, the total number of colors in use before the fixing move is at least equal to \(1+2(k-2)\). Therefore in this case we cannot play the fixing move and complete the coloring if the set of colors contains strictly less than \(2k-2\) colors. **Case (2):** If such \(V_{i}\) does exist and its respective \(r_{i}=2\), then we know that all \(V_{j}\) with odd \(r_{j}\neq 2\) had to be fully colored to force Bob to play the first move in \(V_{i}\), so it implies that \(n\) has to be odd. Here we basically repeat the arguments from the previous proof: Bob's strategy ensures that in each set \(V_{j}\) with \(r_{j}>2\) the players use at least \(\lceil\frac{r_{j}}{2}\rceil\) colors and in each set \(V_{j}\) with \(r_{j}=2\) the players use at least one color i.e. also at least \(\lceil\frac{r_{j}}{2}\rceil\) colors. This, combined with the fact that the fixing move in this case is always played in a set \(V_{j}\) with \(r_{j}=2\), guarantees that at least \(\sum_{j}\lceil\frac{r_{j}}{2}\rceil-1\) colors are needed before the fixing move. Thus, if \(n\) is odd, then more than \(\min\{2k-3,\sum_{j}\lceil\frac{r_{j}}{2}\rceil-1\}\) colors are needed to complete the coloring. On the other hand, if \(n\) is even, then only the first case may occur, so more than \(2k-3\) colors are needed. **Corollary 14**: _For \(K_{r_{1},\ldots,r_{k}}\) with \(k\geq 3\) and \(r_{j}=3\) for some \(j\in\{1,2,\ldots,k\}\) it holds that \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\geq\min\{2k-2,\sum_{i}[\frac{r_{i}}{2}]\}\) colors._ _Additionally, if \(n\) is even, then \(\chi_{g}(K_{r_{1},\ldots,r_{k}})\geq 2k-2\)._ ## 4 Conjectures for the general case When we allow for the appearance of the singletons in the complete multipartite graph the matter becomes more complicated. The strategies presented above can still be optimal for some graphs, nevertheless this claim no longer holds in general. For example, let us consider a graph \(K_{r,r,1}\) for \(r\geq 5\). On the one hand, strategies \((A1)\) and \((A3)\) would give us the solutions using \(5\) and \(2\lceil\frac{r}{2}\rceil+1\geq 7\) colors, respectively, if Bob plays optimally. Moreover, the strategy \((A2)\) is inapplicable in this case, since \(K_{r,r,1}\) does not contain any set of size exactly \(3\). 
On the other hand, it is clear that the optimal strategy by Alice is to color the singleton first, let Bob begin coloring \(V_{1}\) or \(V_{2}\), and then seal the total number of colors at \(3\) by playing the fixing move at any vertex from the remaining uncolored set and coloring it using a new color. In general, this would suggest that Alice should prefer coloring uncolored singletons until they are not available. Thus, we could define strategies \((A1^{\prime})\)-\((A3^{\prime})\) by modifying the respective original strategies with an overriding rule "if there is an uncolored singleton, color it with a new color" (in case of \((A2)\) this rule should have a priority just below a rule forcing Alice to play in \(V_{j}\)). **Conjecture 15**: _The set of strategies \(\{(A1),(A2),(A3),(A1^{\prime}),(A2^{\prime}),(A3^{\prime})\}\) is optimal for Alice for all complete multipartite graphs._ Similarly, the strategy \((B1)\) fails to achieve the optimal coloring for \(K_{2,2,1,\ldots,1}\) with an even \(k\geq 4\), since it requires using at most \(r\) colors: when Alice starts by coloring a singleton, Bob responds by coloring a vertex in \(V_{1}\) (or \(V_{2}\), without loss of generality). Then Alice uses the same color in the other vertex in \(V_{1}\), Bob picks a vertex from \(V_{2}\) and colors it using a new color, Alice responds again by coloring the other vertex in \(V_{2}\) with the same color as Bob, and they just have to keep coloring the singletons with new colors. It can be easily verified that \(\chi_{g}(K_{2,2,1,\ldots,1})=k+1\). However, we can modify \((B1)\) to prioritize choosing uncolored singletons over uncolored sets of size \(2\): **Definition 5**: _Let \((B1^{\prime})\) be the following strategy for Bob for \(K_{r_{1},\ldots,r_{k}}\):_ 1. _if Alice picked a vertex in_ \(V_{i}\) _and_ \(V_{i}\) _is partially colored, pick any uncolored vertex from_ \(V_{i}\)_,_ 2. _otherwise, if Alice picked a vertex in_ \(V_{i}\)_, it is now fully colored and there is any partially colored set, then choose any partially colored_ \(V_{j}\) _with the smallest number of uncolored vertices and pick any uncolored vertex from_ \(V_{j}\) 3. otherwise, if Alice picked a vertex in \(V_{i}\), it is now fully colored, and there is are uncolored set with size at least 3, then choose any vertex from the largest uncolored \(V_{j}\), 4. otherwise, pick any vertex from the smallest uncolored \(V_{j}\). Assign to a chosen vertex a new color if you can. Otherwise, reuse a color that already appeared for some other vertex in the respective set for the chosen vertex. Note that for complete multipartite graphs with \(r_{k}\geq 2\) the strategies \((B1)\) and \((B1^{\prime})\) are identical. We did not find any counterexample on small graphs, either by reasoning on various specific subclasses, or by a computer-aided exhaustive search on small graphs, therefore we conjecture that: **Conjecture 16**.: _The strategy \((B1^{\prime})\) is optimal for Bob for all complete multipartite graphs._ The other remaining question is whether in the general case, there exists a quite simple formula like the one presented in Table 1, or whether there appears a more complex dependency on the graph structure: **Conjecture 17**.: _Find the exact formula for \(\chi_{g}\) for all complete multipartite graphs._
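The computer-aided exhaustive search on small graphs mentioned before Conjecture 16 can be reproduced with a short brute-force solver. The following Python sketch is ours (not the authors' code): it searches the full game tree with memoisation, using the fact that in a complete multipartite graph a colour is legal on a vertex of \(V_{i}\) exactly when it does not yet appear in any other part; without symmetry reduction it is only practical for very small instances.

```python
from functools import lru_cache

def alice_wins(sizes, num_colours):
    """True iff Alice (moving first) can force the whole of K_{sizes} to be
    properly coloured with `num_colours` colours, both players moving legally."""
    colours = range(num_colours)

    @lru_cache(maxsize=None)
    def win(state, alice_to_move):
        # state[i] = (uncoloured vertices left in V_i, frozenset of colours used in V_i)
        if all(left == 0 for left, _ in state):
            return True                        # everything coloured: Alice has won
        moves = []
        for i, (left, used) in enumerate(state):
            if left == 0:
                continue
            elsewhere = set().union(*(u for j, (_, u) in enumerate(state) if j != i))
            legal = [c for c in colours if c not in elsewhere]
            if not legal:
                return False                   # a vertex of V_i can never be coloured
            moves += [(i, c) for c in legal]
        results = []
        for i, c in moves:
            nxt = list(state)
            left, used = nxt[i]
            nxt[i] = (left - 1, used | {c})
            results.append(win(tuple(nxt), not alice_to_move))
        return any(results) if alice_to_move else all(results)

    return win(tuple((r, frozenset()) for r in sizes), True)

def game_chromatic_number(sizes):
    k = 1
    while not alice_wins(tuple(sizes), k):
        k += 1
    return k

print(game_chromatic_number((2, 2)))  # -> 3 for the four-cycle K_{2,2}
```

Since vertices inside a part are interchangeable, a part is fully described by how many of its vertices are still uncoloured together with the set of colours already used on it, which keeps the memoised state space small.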
2310.10923
Metallised 3D printed plastic resonator demonstrates superconductivity below 4 K
We report the first observation of a superconducting transition in a 3D printed, metallised-plastic device. A cylindrical cavity is 3D printed from a photosensitive polymer resin and then a 20 $\mu$m layer of tin deposited. A resonant TE microwave mode at 13.41 GHz is observed to reduce its losses by an order of magnitude once it is cooled below 3.72 K; the superconducting transition temperature of tin, with the mode's $Q$ factor increasing from $2.7\times10^4$ to $4.0\times10^5$.
Jeremy Bourhill, Gwendal Cochet, Julien Haumant, Vincent Vlaminck, Alexandre Manchec, Michael Tobar, Vincent Castel
2023-10-17T01:33:00Z
http://arxiv.org/abs/2310.10923v1
# Metallised 3D printed plastic resonator demonstrates superconductivity below 4 K ###### Abstract We report the first observation of a superconducting transition in a 3D printed, metallised-plastic device. A cylindrical cavity is 3D printed from a photosensitive polymer resin and then a 20 \(\mu\)m layer of tin deposited. A resonant TE microwave mode at 13.41 GHz is observed to reduce its losses by an order of magnitude once it is cooled below 3.72 K; the superconducting transition temperature of tin, with the mode's \(Q\) factor increasing from \(2.7\times 10^{4}\) to \(4.0\times 10^{5}\). ## I Introduction The ease, accessibility, cost and versatility of photosensitive liquid resin stereolithography (SLA) 3D printing is vastly superior when compared to traditional subtractive manufacturing techniques. It is not uncommon for 3D printers with micrometer sized print resolutions to be found in the personal home, which is not the case for their cousins; Selective Laser Melting (SLM) 3D printers, which use metallic powders for additive manufacturing, and whose base cost and material cost are substantially higher. Plastic resin 3D printing has also been demonstrated to be an extraordinarily portable technology, with the ability to move a printer from one location to another without the need for arduous set-up and pack-down procedures, or overly cumbersome printing units - a typical printer weighs on the order of 20 kg. This fact makes them an intriguing technology to explore for their applications to defence and space exploration wherein manufacturing from the same device may be required at changing locations with short lead times. It has been previously demonstrated that microwave resonant cavities with equivalent performance to their traditionally manufactured counterparts can be produced by plastic 3D SLA printing followed by metallisation [1; 2]. This result has opened up the possibility of rapid, low cost, and highly reproducible production of a wide variety of microwave devices; such as filters, isolators, RF and magnetic field shielding, and more complicated systems for coupling microwave photons to additional degrees of freedom to form a hybrid system [1; 2; 3]. At the same time, SLM printing has been demonstrated to produce metallic cavities and structures which display superconducting transitions when cooled below their critical temperatures, \(T_{c}\), first in aluminium [4] and then in niobium [5]. These advances have been used to construct non-trivial resonant devices [6; 7] with unique properties which could not have been manufactured through subtractive manufacturing methods, which will greatly benefit from transitioning into superconducting devices at low temperatures. Superconducting cavities find numerous uses in physics. Once transitioned, they trap and store resonant microwave radiation and reduce losses, resulting in devices with very high quality factor (\(Q\)), narrow bandwidth, and long coherence times [8]. These cavities find application in particle accelerators [9; 10; 11], sensing [12; 13; 14], metrology and precision RF sources [15; 16], and for testing fundamental physics. In particular, tests of the speed of light and the constancy of fundamental constants [15; 17; 18; 19] as well as the search for hidden sector particles and other dark matter candidates [20; 21; 22], depend on such cavities. 
They are also essential in cavity quantum electrodynamics (CQED) experiments to shield qubit devices, thereby providing a reduced density of states for the qubit to radiate into [23; 24]. Superconducting cavities are often precision-machined from extremely high purity aluminium or niobium at great material and labour costs in order to achieve optimal surface preparation. Here, we demonstrate superconductivity in a tin (Sn) coated, 3D-printed-plastic microwave resonant cavity exhibited via the sharp increase of the \(Q\) factor of the cavity's lowest-order transverse magnetic mode (TE\({}_{011}\)) at the \(T_{c}\) of Sn, 3.72 K [25]. Sn is a type-I superconductor and has an intermediate \(T_{c}\) located below that of niobium (Nb, \(T_{c}=9.2\) K), but above Aluminium (Al, \(T_{c}=1.2\) K) - the two most commonly used elemental superconductors. Given that most commercially available liquid helium based cryogenic systems can reach temperatures below 4 K, often down to or below at least 3.5 K, Sn offers an advantage over Al given it can reach superconductivity in a standard helium-4 system without the need for dilution refrigeration, whilst its cost, availability and ease of machinability give it certain advantages over Nb. Sn is readily available and specimens of very high purity can be prepared. If necessary they can be grown as single crystals [26]. These factors make it one of the most convenient superconducting materials. Whilst the observed \(Q\) factor of the resonant TE\({}_{011}\) mode at mK temperatures is by no means ground-breaking, this study demonstrates an important proof-of-concept of manufacturing superconducting devices from plastic 3D printed structures coated in metal. There are no real limitations other than those imposed by electrochemistry on the type of metal that can be used to coat the plastic or the thickness of this layer. In fact, there exists a great deal of parameter space unexplored in these types of devices in order to optimise the achieved conductivity and hence \(Q\) factor. ## II System description ### Cavity manufacturing method A hollow cylinder is first 3D printed with the FormLabs\({}^{\circledR}\) Form 3B+\({}^{\circledR}\) SLA 3D printer, using the BioMed-Clear-Resin\({}^{\circledR}\). This printer has a 25 \(\mu\)m resolution for the vertical axis. The internal diameter and height of the cavity were both designed to be 30 mm, with radial wall thickness 7 mm. It is printed in two pieces - a main body and a lid (see Fig. 1 (a)), with the lid being 5 mm thick and the base of the main body, which has two holes for the coaxial in- and output probes, is 15 mm thick. After printing, the cavity was cleaned of residual resin in an isopropyl alcohol bath and then treated in an ultraviolet chamber including a heat treatment at 60\({}^{\circ}\) C for 60 minutes. A conventional 3-D metallisation procedure of plastic elements has been adapted in order to produce a full metallisation of the plastic cavity by Elliptika\({}^{\circledR}\) and described as follows: i) dry etching by sandblasting (increases surface roughness, increasing pores for Pd\({}^{2+}\) adsorption); ii) surface activation with Pd\({}^{2+}\) solution [27]; iii) autocatalytic bath of copper: Pd particles act as catalyst sites and permit the growth of a homogeneous layer of Cu, which spontaneously reaches 3 \(\mu\)m thickness; iv) standard electrodeposition process of Cu (10 \(\mu\)m); v) Sn electrodeposition finish of 20 \(\mu\)m. 
It should be noted that thicker layers of Cu and Sn are deposited in steps (iv) and (v) when compared to previous implementations of this procedure [1], which is achieved by simply increasing the run times of the electrodeposition stages. The increase in Cu thickness is to ensure that sufficient heat conduction is achieved in the outer metal layer, given that the bulk plastic body of the system is a poor conductor of heat, it is important if we want the conducting walls of the cavity to reach \(T_{c}\) that the cryostat can efficiently conduct heat away from them. The Sn layer is made thicker to ensure it acts as a bulk superconductor, well above the percolation limit [28]. Post metallisation, it is measured that the cavity height is \(h=29.7\) mm, whilst the cavity diameter is \(D=29.24\). Deviations from the design size are to be expected given the metal layer thickness and the 25 \(\mu\)m resolution of the 3D printer. ### Experimental method Two coaxial cables terminated in SMA flange mounts act as the input and output RF coupling ports, with the cables inserted through the base of the cavity. It is important when setting the probe couplings at room temperature to ensure that they are very weak. This is because as \(Q\) increases, so do the losses from the ports, and we do not wish to over-couple to the mode of interest once transitioned, thereby loading it and indirectly lowering its \(Q\) value. In fact, as was done in [4], we use the opposite probe "type" in order to minimise coupling losses as much as possible: straight coaxial probes parallel to the cylinder axis are used, which should excite \(E_{z}\) fields despite our mode of interest being a TE mode. The cavity has a rubidium oxide temperature sensor mounted to its base, ensuring direct measurement of the real cavity surface temperature, and together they are mounted to the mixing chamber (MXC) plate of a dilution refrigerator. The input microwave lines are heavily attenuated, whilst the output signal is amplified with a cryogenic Low Noise Factory\({}^{\circledR}\) amplifier at the 4 K stage, and a room temperature amplifier, as depicted in Fig. 1(c). This setup ensures good signal-to-noise ratio of the output spectrum of the cavity when measured with a Vector Network Analyser (VNA). Data is recorded during the condense procedure of the dilution fridge, in which the temperature of the MXC changes from \(\sim 5\) K to \(\sim 20\) mK, an ideal window to observe the superconducting transition of Sn. Figure 1: The 3D printed, metallised plastic can with internal dimensions \(h=30\) mm, \(D=29.375\) mm (a) disassembled as lid and body, and (b) fully assembled with flangeed SMA input probes and copper cold-finger attachment. (c) Schematic illustration of the experimental setup in the dilution refrigerator demonstrating cold attenuation on the input line and isolator and amplification (HEMT and room temperature) on the output line. ## III Results A sample of the \(S_{21}\) transmission spectra recorded at different temperatures is displayed in Fig. 2(a) and (b). It can be observed from the first figure that within the observed 100 MHz frequency span there exists 3 modes, each of which becomes more resolved as the temperature decreases; a direct result of the increase in conductivity of the conducting boundaries. The mode of interest is the central, high \(Q\) TE\({}_{011}\) mode with frequency \(\omega/2\pi=13.413\) GHz. It is determined from finite element simulation that this is indeed the mode we are looking at. 
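As a quick plausibility check (our own back-of-the-envelope sketch, not part of the paper), the TE\({}_{011}\) frequency of an ideal cylindrical cavity with the post-metallisation dimensions quoted above (\(D=29.24\) mm, \(h=29.7\) mm) can be evaluated from the standard closed-form expression; neglecting the coupling ports and wall losses, it agrees with the measured 13.413 GHz to better than 1%.

```python
import math

# Sketch (ours): analytic TE_011 frequency of an ideal cylindrical cavity with the
# post-metallisation dimensions D = 29.24 mm, h = 29.7 mm; ports and losses neglected.
C = 299_792_458.0      # speed of light (m/s)
X01P = 3.8317          # first zero of J0', i.e. first non-trivial zero of J1

def f_te011(diameter_m, height_m):
    a = diameter_m / 2.0
    kc = X01P / a                    # cut-off wavenumber of the circular TE_01 mode
    kz = math.pi / height_m          # p = 1: one half-wavelength along the axis
    return C / (2.0 * math.pi) * math.sqrt(kc ** 2 + kz ** 2)

print(f"{f_te011(29.24e-3, 29.7e-3) / 1e9:.2f} GHz")   # ~13.49 GHz vs measured 13.413 GHz
```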
A zoomed in picture of the mode is shown in Fig. 2(b) as well as theoretical fits using a Fano model [29] and the resulting calculated \(Q\) factor. It is clear that an increase in \(Q\) as well as peak transmission is associated with a decrease in temperature. By plotting the fitted \(Q\) values against temperature in Fig. 2(c), we observe that a jump in \(Q\) occurs around \(T_{c}=3.72\) K; the superconducting transition for Sn. Over an order of magnitude improvement in \(Q\) factor is achieved; rising from a \(T>T_{c}\) value of \(Q\sim 2.7\times 10^{4}\) to a \(T<T_{c}\) value of \(Q=4.0\times 10^{5}\). Being a type-I superconductor, Sn has a typically low critical magnetic field value of 0.03 T [26] above which the superconducting properties of the metal are lost. Indeed, in a second experimental run in which the cavity is placed inside an American Magnetics" superconducting magnet mounted to the 4K plate of the fridge, simply switching the magnet on instantly reduces the mode amplitude and \(Q\)-factor due to some residual magnetisation of the magnet upon power up. ## IV Discussion A microwave resonance inside an ideal empty metallic structure will be dominated by surface resistance losses. The impact of these losses can be calculated from the so called "geometric-factor" \(G\), where \[G=\mu_{0}\omega\frac{\iiint\left|\vec{H}^{2}\right|dV}{\iint\left|\vec{H}^{2} \right|dS}, \tag{1}\] where \(\mu_{0}\) is the vacuum permeability, \(\omega\) is the resonant angular frequency, \(\vec{H_{\tau}}\) is the tangential magnetic field of the resonant mode, \(S\) is the surfaces of the resonator, \(\vec{H}\) is the magnetic field of the resonant mode and \(V\) the cavity volume. The quantity \(G\) is related to the \(Q\) factor via \(G=QR_{s}\), where \(R_{s}\) is the surface resistance in ohms. Essentially, the expression \(G\) characterises the ratio of the resonant mode's magnetic field within the volume of the cavity compared to that at the surface, where it induces current in the metallic walls and hence experiences resistive loss. The geometric factor \(G\) obtained through the FEM is approximately 389 \(\Omega\) at 13.413 GHz, allowing us to estimate the surface resistance at 972 \(\mu\Omega\) below \(T_{c}\) and 14.4 m\(\Omega\) above \(T_{c}\). This superconducting resistivity is of equal order to previously achieved values in SLM 3D printed Al and Nb resonators [4; 5], however an order of magnitude greater than values measured in 500 nm thick Sn samples [28]. The measured \(Q\) factor is likely limited by two factors; surface roughness and electrical contact between the lid and the cavity. The former could be improved by polishing the Sn layer of the cavity, either mechanically or electrochemically. Conductivity between the two segments of the resonator would likely also be improved with a smoother surface finish of the connecting surfaces. In our results we note that on either side of the highlighted mode are two more strongly coupled TM modes - their transmission amplitudes are much larger than the central mode, and their \(Q\) factors lower as a result of higher \(G\) and coupling losses. ## V Conclusion In conclusion, our study marks a decisive advance in the combination of superconductivity and additive manufacturing. We have successfully demonstrated the first-ever superconducting transition in a 3D-printed metallized plastic device. 
Figure 2: (a) S21 transmission spectra over a 100 MHz span at various temperatures with the central highlighted region zoomed in (b). (c) \(Q\) factor of the mode extracted from fitting as a function of temperature with the superconducting transition temperature of Sn labelled in red. Cooling the cavity below 3.72 K, the superconducting transition temperature of tin, resulted in a substantial reduction in losses and a noticeable increase in the \(Q\) factor, from \(2.7\times 10^{4}\) to \(4.0\times 10^{5}\), for a TE mode resonant at 13.413 GHz. This breakthrough offers huge potential for any application or industry that requires fast, on-site, short lead-time manufacturing. The adaptability and versatility of this technology offer a promising route to meeting the dynamic demands of certain industries and access to a new design parameter space for superconducting devices, unburdened by the limitations of subtractive manufacturing. ###### Acknowledgements. This work was jointly funded by the Region Bretagne through the project OSCAR-SAD18024, by the UWA Research COLLABORATION AWARDS (RCA) grant "Investigation of 3D printed microwave cavities at cryogenic temperature", and by the Australian Research Council Centre of Excellence for Engineered Quantum Systems, CE170100009, and the Centre of Excellence for Dark Matter Particle Physics, CE200100008.
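As a closing cross-check of the figures quoted in the Discussion (a sketch of ours, using only \(R_{s}=G/Q\) together with the FEM value \(G\approx 389\,\Omega\)), the surface-resistance estimates follow directly from the measured \(Q\) factors:

```python
# Sketch (ours): reproduce the Sec. IV surface-resistance estimates from R_s = G / Q,
# using the FEM geometric factor G ~ 389 Ohm quoted in the Discussion.
G = 389.0                                                   # Ohm, at 13.413 GHz
for label, Q in [("normal state   (T > Tc)", 2.7e4), ("superconducting (T < Tc)", 4.0e5)]:
    print(f"{label}: Q = {Q:.1e} -> R_s = {G / Q * 1e3:.2f} mOhm")
# -> about 14.41 mOhm above Tc and 0.97 mOhm (972 micro-Ohm) below Tc, as quoted above.
```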
2307.14152
Investigating the Impact of Variables on Handover Performance in 5G Ultra-Dense Networks
The advent of 5G New Radio (NR) technology has revolutionized the landscape of wireless communication, offering various enhancements such as elevated system capacity, improved spectrum efficiency, and higher data transmission rates. To achieve these benefits, 5G has implemented the Ultra-Dense Network (UDN) architecture, characterized by the deployment of numerous small general Node B (gNB) units. While this approach boosts system capacity and frequency reuse, it also raises concerns such as increased signal interference, longer handover times, and higher handover failure rates. To address these challenges, the critical factor of Time to Trigger (TTT) in handover management must be accurately determined. Furthermore, the density of gNBs has a significant impact on handover performance. This study provides a comprehensive analysis of 5G handover management. Through the development and utilization of a downlink system-level simulator, the effects of various TTT values and gNB densities on 5G handover were evaluated, taking into consideration the movement of Traffic Users (TUs) with varying velocities. Simulation results showed that the handover performance can be optimized by adjusting the TTT under different gNB densities, providing valuable insights into the proper selection of TTT, UDN, and TU velocity to enhance 5G handover performance.
Donglin Wang, Anjie Qiu, Qiuheng Zhou, Sanket Partani, Hans D. Schotten
2023-07-26T12:31:39Z
http://arxiv.org/abs/2307.14152v1
# Investigating the Impact of Variables on Handover Performance in 5G Ultra-Dense Networks ###### Abstract The advent of 5G New Radio (NR) technology has revolutionized the landscape of wireless communication, offering various enhancements such as elevated system capacity, improved spectrum efficiency, and higher data transmission rates. To achieve these benefits, 5G has implemented the Ultra-Dense Network (UDN) architecture, characterized by the deployment of numerous small general Node B (gNB) units. While this approach boosts system capacity and frequency reuse, it also raises concerns such as increased signal interference, longer handover times, and higher handover failure rates. To address these challenges, the critical factor of Time to Trigger (TTT) in handover management must be accurately determined. Furthermore, the density of gNBs has a significant impact on handover performance. This study provides a comprehensive analysis of 5G handover management. Through the development and utilization of a downlink system-level simulator, the effects of various TTT values and gNB densities on 5G handover were evaluated, taking into consideration the movement of Traffic Users (TUs) with varying velocities. Simulation results showed that the handover performance can be optimized by adjusting the TTT under different gNB densities, providing valuable insights into the proper selection of TTT, UDN, and TU velocity to enhance 5G handover performance. 5G NR, Handover, UDN, TTT, Simulator ## I Introduction The exponential growth of mobile data globally, particularly in connected vehicle applications, has fueled the development of 5G NR technology as outlined in 3rd Generation Partnership Project (3GPP) releases 15 and 16 [1][2]. One of the defining features of 5G is the deployment of the UDN, aimed at meeting the demand for high data traffic. The IMT-2020 group recognizes UDN as a critical component of 5G core technologies, which significantly improves spectrum efficiency and system capacity by increasing the coverage of 5G gNBs while reducing the number of served TUs per gNB [3]. However, the deployment of UDN in 5G is anticipated to cause longer handover times, thus reducing the overall handover performance [4]. The increased capacity of 5G through UDN comes at the cost of higher handover rates and increased signal overheads, highlighting the importance of effectively managing handovers in 5G networks. Before delving into the impact of 5G gNB density on the 5G handover rate, it is essential to understand the handover procedure in both LTE and 5G NR. A comprehensive documentation survey of handover management in LTE and 5G NR is presented in [5], highlighting the critical aspects and challenges that must be considered when developing an optimized handover scheme. In [6], a formal analysis of 5G handover is presented, covering various aspects such as protocol testing, verification, mobile networking, and more. To date, research efforts have primarily focused on optimizing the LTE handover scheme [7][8]. Nevertheless, with the advent of 5G NR technology, it is imperative to assess the impact of 5G gNB density on the handover rate, which has not been thoroughly explored in previous studies. TTT is a valuable parameter in the performance of the 5G handover rate. The handover is processed between two neighboring cells if the criterion of handover is met during TTT [9]. Using the TTT time period can prevent excessive frequent handover events in a short time period. 
But a too large TTT time period may cause connection loss or bad connection quality of TU from one cell to another cell. It is necessary to detect the effect of various TTT values on the handover performance. In [10], Juwon presents a handover optimization scheme for different speeds of TUs in LTE. In this work, adjustable TTT parameters are applied. In this work [11], the effects of TTT values on the handover performance in an LTE system are analyzed. In [12], a mobility robustness optimization method for handover failure reduction in LTE small-cell networks is proposed by adjusting TTT and offset parameters. The results from [11][12] show the handover performance is improved by applying adaptive TTT parameters. However, these works evaluated the effect of various TTT values on the handover performance for LTE handover not for 5G NR handover performance. In our work, we want to evaluate the effect of different TTT values on the 5G UDN handover performance by applying varying densities of gNB from the 5G network. The paper is organized as follows: for section II, we are going to show what the 5G handover structure looks like, and how to make a handover-triggering decision. Section III establishes the 5G UDN simulation scenario and sets simulation parameters. Section IV analyzes the simulation results for different simulation scenarios. Lastly, in section V, the conclusion and future work plan are drawn. ## II 5G handover process In this work, the 5G NR handover procedure is considered and is also developed based on LTE handover technologies. But there are some differences between LTE handover and 5G NR handover [4]. 5G NR handovers can be performed in a more dynamic manner, allowing for a smoother transition between cells, as well as improved performance and user experience. 5G NR handovers can also be performed at a much faster rate, with less latency and delay, compared to LTE handovers, which can improve network performance and efficiency. In 5G NR cellular networks, the mobility mechanisms enable the TU to move within the network coverage and be served by networks. The Radio Access Mobility (RAM) of a TU can be in two states, IDLE_MODE or CONNECTED_MODE. The TU in IDLE_MODE performs cell selection to receive incoming data, whereas the TU in CONNECTED_MODE actively transmits or receives data. Handover only occurs when the TU is in the CONNECTED_MODE and a new cell is determined to be superior to the current serving cell [5]. To be more specific, handovers in 5G networks can be classified into two categories: intra-layer-handover and inter-layer-handover. The distinction between the two categories lies in the serving and target networks, whether the same Radio Access Network (RAN) technologies are used. If the same RAN technology is employed by the serving and targeting networks, an intra-layer-handover is performed. Conversely, if different RAN technologies are used, an inter-layer-handover is required, such as transitioning from 5G to LTE or vice versa. In this study, we only consider intra-layer-handover between 5G networks, as specified by 3GPP Release 16.3.0 [13]. ### _Overview of 5G Handover Procedure_ In Fig. 1, a simplified illustration of the 5G handover procedure is presented. This procedure can be generally divided into three stages. #### Ii-A1 Measurement and monitoring stage The TU is engaged in the data communication with the Serving gNB via the Uu interface. The Serving gNB provides the TU Measurement Configuration (Config.) information through a re-connection message. 
The TU performs and processes the collected measurements of the Received Signal Strengths (RSSs), and then transmits the Measurement Report (Rep.) back to the Serving gNB at designated intervals of 10 milliseconds. #### Ii-A2 Handover Decision Making The procedure of handover decision-making in 5G networks involves the assessment of the RSS of both the Serving and Target gNBs. The assessment is triggered by the A3 event [9], which requires that the RSS of the Target gNB is better than that of the Serving gNB, along with a Hysteresis (Hys) margin of 3 dB as shown in Fig. 2. The A3 event must be maintained for a specific time period referred to as the TTT timer. The TTT time is a crucial factor in determining the reliability and frequency of handovers in 5G networks. If the conditions are met, the Serving gNB makes the handover decision and initiates the process by sending a Handover Request (Req.) to the Target gNB via the Xn interface as shown in Fig. 1. #### Ii-A3 Handover Execution Process The actual execution of the handover occurs in this stage. The communication between the TU and the Serving gNB is severed, and a new connection is established between the TU and the Target gNB. Upon successful connection to the Target gNB, the handover process is considered complete. For further details regarding the actual steps involved in the handover execution, please refer to [6]. Fig. 1: Simplified 5G handover procedure. Fig. 2: A3 event triggering. ### _Algorithm for 5G handover-triggering_ In this part, the functions for the 5G handover triggering algorithm are provided. Step 1: The Signal-to-Interference-plus-Noise Ratio (SINR) (in dB) for the TU is calculated at each tic, which serves as an indicator of the RSS quality. It measures the strength of the desired signal compared to the unwanted interference and noise: \[sinr_{i}=10\log_{10}\left(\frac{P_{i}}{\sum_{k\neq i}P_{k}+N_{0}}\right) \tag{1}\] where \(sinr_{i}\) is the SINR of the TU at one tic w.r.t. the current Serving gNB \(i\), and \(P_{i}\) is the power received from the current Serving gNB \(i\). The received power is calculated based on transmit power, pathloss (a function of distance, frequency, or antenna heights), shadowing, fast-fading, and antenna gain as shown in TABLE II. \(k\) represents the other gNBs except for the current Serving gNB \(i\). The number of other gNB cells is dependent on the density of gNBs and their communication ranges, both of which will be examined in the scenario configuration section. \(P_{k}\) is the interference power from other gNBs. In addition, \(N_{0}\) is the noise figure. Step 2: In this step, we find the best SINR for the TU. The \(best\_sinr_{j}\) represents the best SINR value from the Target gNB \(j\) as shown in Eq. 2. \[best\_sinr_{j}=\max\left(sinr_{1},sinr_{2},sinr_{3},\ldots,sinr_{x}\right) \tag{2}\] where \(sinr_{1}\) to \(sinr_{x}\) are the SINR values from all other reachable gNBs. Step 3: The 5G handover logical algorithm is described in detail in Algorithm 1. The used simulation parameters are described in TABLE II.
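Before turning to the full triggering logic in Algorithm 1, Steps 1-2 (Eqs. 1 and 2) can be sketched in a few lines of Python. The propagation model below (free-space path loss plus log-normal shadowing) and the numeric values (transmit power, carrier frequency, noise floor) are illustrative assumptions on our part, since the actual link-budget terms are only given in TABLE II.

```python
import numpy as np

# Sketch (ours) of Steps 1-2, Eqs. (1)-(2): per-tic SINR of one TU w.r.t. every gNB.
# The free-space path loss, log-normal shadowing and the numeric constants below are
# illustrative placeholders; the paper's actual link-budget terms are in TABLE II.
def sinr_per_gnb_db(tu_xy, gnb_xy, tx_dbm=30.0, fc_ghz=3.5,
                    noise_dbm=-95.0, shadow_sigma_db=4.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    d_km = np.maximum(np.linalg.norm(gnb_xy - tu_xy, axis=1), 1.0) / 1e3
    path_loss_db = 32.45 + 20 * np.log10(fc_ghz * 1e3) + 20 * np.log10(d_km)  # free space
    rx_dbm = tx_dbm - path_loss_db - rng.normal(0.0, shadow_sigma_db, len(gnb_xy))
    p_mw = 10 ** (rx_dbm / 10)
    n_mw = 10 ** (noise_dbm / 10)
    interference = p_mw.sum() - p_mw            # sum over k != i, for each candidate i
    return 10 * np.log10(p_mw / (interference + n_mw))      # Eq. (1), in dB

gnbs = np.array([[200.0, 300.0], [600.0, 700.0], [900.0, 100.0]])
sinr = sinr_per_gnb_db(np.array([500.0, 500.0]), gnbs)
best_gnb, best_sinr = int(np.argmax(sinr)), float(sinr.max())   # Eq. (2)
print(best_gnb, round(best_sinr, 2))
```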
``` 1:Input: serving_gnb, serving_sinr, best_sinr, target_gnb, sinr_min, avg_sinr, best_cio, current_cio, ho_hys, TTT, ho_timer, ho_trigger, ho_exec_time 2:Output: ho_times 3:if\(serving\_gnb\neq target\_gnb\)then 4:if\(best\_sinr>sinr\_min\&best\_sinr-avg\_sinr+best\_cio-current\_cio)>ho\_hys\)then 5:\(ho\_trigger\gets 1\) 6:\(ho\_timer\gets h\_timer+1\) 7:if\(ho\_timer==TTT\)then 8:\(serving\_gnb\gets target\_gnb\) 9:\(ho\_exec\_time\gets 25\) 10:\(ho\_times\gets ho\_times+1\) 11:\(ho\_trigger\gets 0\) 12:endif 13:endif 14:endif ``` **Algorithm 1** 5G handover triggering logical algorithm As demonstrated in Algorithm 1, the objective is to determine the number of successful handover times in each simulation. The handover triggering logic is applied after the subsequent steps. 1) It is ensured that the selected \(target\_gnb\) is distinct from the current \(serving\_gnb\), implying that they are located in disparate locations; 2) it is to check the calculated \(best\_sinr\) exceeds the predefined minimum SINR threshold of \(sinr\_min=-7dB\). The handover trigger also evaluates the difference between \(best\_sinr\) and \(avg\_sinr\) of the TU, as well as the effect of the load balancing algorithm on the current connection \(current\_cio\) and the potential connection \(best\_cio\). The difference must be greater than the handover Hys of 3 dB. In addition, \(avg\_sinr\) is the average SINR of the TU determined by taking the average of the previous 10 SINR values and recalculating at each tic in the simulation. The calculation of the average SINR can only occur once the previous \(10\) values have been obtained; 3) and 4) if all the conditions are met, the \(ho\_trigger\) flag is set to 1 and the \(ho\_timer\) counter is incremented by one; 5) it is to check whether the \(ho\_timer\) value is equal to the predefined TTT value, then the handover process is triggered; 6) it is the execution step, the value of the \(serving\_gnb\) is updated to the \(target\_gnb\). The \(ho\_exec\_time\) is set to 25 tics. The output counter \(ho\_times\) for the number of successful handovers is incremented by one, and the flags for handover triggering \(ho\_trigger\) and handover timer \(ho\_timer\), are reset to zero. If any of the conditions specified in the handover triggering logic are not met, the algorithm will be re-executed. ## III Simulation scenarios The objective of this paper is to analyze the performance of 5G handover on variable TTTs and various UDNs. Thus, a system-level downlink simulator is implemented in Python. ### _gNB deployment_ As depicted in Fig. 3, two simulation scenarios have been established where the deployment of gNB follows the Poisson Point Process (PPP) [14]. In these scenarios, it's assumed that all gNBs process similar technical characteristics, including identical transmission power. In Fig. 3, an example of \(den\_gNB=20\) is given which indicates 20 gNBs following PPP distribution in a 1000 m x 1000 m urban area. To evaluate the impact of UDNs on 5G handover, we have conducted simulations using \(den\_gNB\) values of 10, 20, 30, 40, and 50. ### _User mobility model_ In our simulations, we have defined two distinct TUs with varying running routes from different directions. An illustration of one of these routes is shown in Fig. 3(a), where the starting point and ending point are [1000,0] and [0,1000] respectively, with a speed of 50 km/h and the direction angle (\(\theta\)) is 135 degrees. Further information can be found in TABLE I. 
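The scenario of this section can be sketched as follows (our own illustrative code, not the paper's simulator): a homogeneous PPP over the 1000 m x 1000 m area (we take the number of gNBs to be Poisson-distributed with mean `den_gNB`, since the paper does not say whether the count is fixed) and a straight-line TU trajectory sampled once per tic, whose duration we assume to be 10 ms to match the reporting interval of Section II.

```python
import numpy as np

# Sketch (ours) of the Sec. III scenario: PPP gNB deployment over a 1000 m x 1000 m
# area and a straight-line TU route (Case A: start [1000, 0], 50 km/h, heading 135 deg).
# The 10 ms tic duration is our assumption, chosen to match the reporting interval.
def deploy_gnbs_ppp(mean_den_gnb, side_m=1000.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(1)
    n = rng.poisson(mean_den_gnb)        # PPP: Poisson-distributed count, uniform positions
    return rng.uniform(0.0, side_m, size=(n, 2))

def tu_route(start_xy, speed_kmh, heading_deg, n_tics, tic_s=0.01):
    v = speed_kmh / 3.6                                     # m/s
    heading = np.deg2rad(heading_deg)
    step = v * tic_s * np.array([np.cos(heading), np.sin(heading)])
    return np.asarray(start_xy, dtype=float) + np.arange(n_tics)[:, None] * step

gnbs = deploy_gnbs_ppp(mean_den_gnb=20)
route = tu_route([1000.0, 0.0], speed_kmh=50, heading_deg=135, n_tics=10_000)
print(len(gnbs), route[0], route[-1])   # ~10,000 tics to travel from [1000, 0] towards [0, 1000]
```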
A comprehensive list of all simulation parameters, including both physical layer and system layer details, can be found in TABLE II. ## IV simulation results analysis The results of the simulations are analyzed to assess the impact of UDNs and TTTs on 5G NR handover times and performance. In order to obtain statistically robust results, we run each simulation 100 times and calculate the average value. The final results presented in this paper are based on these average values. ### _KPIs for measuring the handover effect_ To evaluate the performance of the 5G handover in UDN scenarios, we have employed two Key Performance Indicators (KPIs). These KPIs provide an objective measure of the handover's effectiveness and enable us to compare the results of different simulations. #### Iv-A1 Handover rate One of the KPIs is the average number of handover times per TU after each simulation run. This metric, referred to as the handover rate, reflects the number of successful handover events. When the value of the average handover rate is less than 1, it indicates a handover failure. #### Iv-A2 Average SINR value The second KPI we have used is the average SINR of a TU after each simulation run. The average SINR value \(ho\_avg\_sinr\) is calculated below: \[\begin{split} ho\_avg\_sinr=& MEAN(best\_sinr_{j}, best\_sinr_{k},\\ best\_sinr_{m},...,)\end{split} \tag{3}\] where \(best\_sinr_{j}\) is defined in Eq. 2 which indicates the best SINR value of Target \(gNB_{j}\). This metric represents the handover performance and reflects the average SINR value of a TU after every successful handover execution. The higher the average SINR, the better the handover performance. ### _Effect of variable TTT values and density of UDN with a fixed velocity on handover_ To obtain a comprehensive understanding of the effect of TTTs and UDN density on handover, we conduct simulations with a range of TTT values (1 to 12 tics), UDN densities (10, 20, 30, 40, and 50 gNBs), and a fixed velocity of 50 km/h. The results of these simulations are shown in Fig. 4, where 3-dimensional figures are generated for the two assumed simulation scenarios. Our results indicate that the starting location or direction of the TU does not significantly affect the final outcomes, as the simulation results for both Case A and Case B are nearly identical. This suggests that the handover mechanism in 5G networks is robust to such variations. In Fig. 4(a) case A, it is easy to find that the overall handover rate decreases with increasing TTT values when the UDN density is 10 gNBs. Specifically, the handover rate drops from 4 to 0.01 as the TTT increases from 1 tic to 12 tics. This suggests that in order to reduce the handover rate, it is necessary to increase the TTT value. However, if the TTT value is significantly increased, such as 12 tics, the handover rate may fall below 1, resulting in handover failure. This is because a larger TTT value may cause severe degradation of SINR during the TTT period. These results highlight the importance of understanding the effect of the TTT value on the handover rate and selecting an appropriate TTT value. It is crucial to strike a balance between reducing the handover rate and avoiding handover failure. In Fig. 4(b) case B, when TTT is 1, the handover rate rises from 4 dB to 8.97 with the increasing density of gNB from 10 to 50. It means that with the increasing density of gNB, the handover rate will be significantly raised when the TTT value is not large. 
Ultra-dense deployment of gNB leads to redundant handovers. The impact of UDN density on the handover rate is mitigated as the TTT value increases. As the TTT surpasses 8, the density of UDN no long influences the handover rate to a significant extent. The larger the TTT for handover procedures, the weaker the effect of UDN density on handover frequency. ### _Performance on handover_ Additionally, the tables present the average SINR values for two scenarios which clearly demonstrate the impact of TTT and UDN density on handover performance. The tables display the values for TTT, UDN density, and the average SINR (\(ho\_avg\_sinr\)) for each scenario. Fig. 3: Simulation scenarios with den_gNBs (20) for different directions In TABLE III for case A, the results of the simulation indicate that in case the density of UDN is 10, the optimal TTT value for the best handover performance is 8, as it yields an average SINR of 40.35 dB. However, when the TTT value is continually increased, the handover performance deteriorates, as the TU is unable to maintain a connection with the network. As the density of UDN increases, the range of optimal TTT values becomes more restricted, and the optimal TTT value decreases. For instance, when the density of UDN is 20, the best TTT value is 4. With a density of 30 or 40, the optimal TTT value is 3. This highlights the importance of selecting the appropriate TTT value in order to maintain an optimal handover performance, especially in scenarios of high UDN density. From TABLE IV case B, the simulation results obtained from case A and case B show slight variations. For a density of 10 gNBs, the optimal TTT value is 7 with an average SINR of 30.36 dB in case B. However, as the density increases to 20, the best TTT value decreases to 6, resulting in a decrease in the average SINR to 29.46 dB. Additionally, for both cases, A and B with a TTT value of 1, the performance of TU in a high density of UDN has significantly decreased compared to its performance in a low-density environment. This is due to the increased interference from multiple gNBs as well as the high number of handovers in a high density of UDN. These tables provide a useful mapping to determine the optimal TTT values for different scenarios. ### _Effect of variable TTT values and velocities with a fixed density of UDN on handover_ The simulation results in Fig. 5 provide insight into the relationship between TTT values, velocities, and handover performance in a fixed density of UDN (20) scenario. These results can serve as a reference for determining the optimal TTT values for different use cases. In Fig. 5(a), it is observed that as the TTT value increases, the handover rate decreases with the increase in velocity. This is because, with the increase in TTT value, the TU's connection with the network is maintained for a longer period, leading to fewer handovers. However, this does not necessarily translate to better handover performance, as can be seen in Fig. 5(b). When TTT is 1, the average SINR decreases from 36.29 dB at 10 km/h to 16.93 dB at 50 km/h but the handover rate increase Fig. 4: Simulation results for the two different scenarios from 1 at 10km/h to 4 at 50 km/h. This is due to the fast-moving TU encountering more frequent handovers, leading to increased signal interference and decreased handover performance. It can be observed that as the velocity of the TU is 50 km/h, the average SINR rises from 16.93 dB at \(TTT=1\) to 30.3 dB at \(TTT=7\). 
When TTT is between 4 and 8 with the optimal velocity of 30 km/h, the TU can achieve the best handover performance. ## V Conclusion This paper has conducted an analysis of the impact of TTT values, UDN densities, and TU velocities on 5G handover performance. The simulation results show that the TTT value plays a crucial role in determining the handover rate, and finding a proper balance between the TTT value, UDN density, and TU velocity is crucial for optimizing handover performance. The authors have also developed a simulation tool in Python to evaluate the handover times and performance for different scenarios. In future work, the authors aim to improve handover performance by using machine learning algorithms in 5G/6G wireless networks. ## VI acknowledgement This work has been funded by the German Federal Ministry of Education and Research as part of the AI4mobile project, with a funding number of 16KIS1170K. The authors acknowledge the contributions of all AI4Mobile partners, but the content of the paper is the sole responsibility of the authors and may not necessarily reflect the views of the project as a whole.
2305.02149
Bavard duality for the relative Gromov seminorm
The relative Gromov seminorm is a finer invariant than stable commutator length where a relative homology class is fixed. We show a duality result between bounded cohomology and the relative Gromov seminorm, analogously to Bavard duality for scl. We give an application to computations of scl in graphs of groups. We also explain how our duality result can be given a purely algebraic interpretation via a relative version of the Hopf formula. Moreover, we show that this leads to a natural generalisation of a result of Calegari on a connection between scl and the rotation quasimorphism.
Alexis Marchand
2023-05-03T14:30:19Z
http://arxiv.org/abs/2305.02149v2
# Bavard duality for the relative Gromov seminorm ###### Abstract. The relative Gromov seminorm is a finer invariant than stable commutator length where a relative homology class is fixed. We show a duality result between bounded cohomology and the relative Gromov seminorm, analogously to Bavard duality for scl. We explain how this can be given a purely algebraic interpretation via a relative version of the Hopf formula. Moreover, we show that this leads to a natural generalisation of a result of Calegari on a connection between scl and the rotation quasimorphism. ## 1. Introduction Stable commutator length, or scl, is an invariant of groups that can be thought of as a kind of homological \(\ell^{1}\)-norm on the commutator subgroup. It has attracted attention for its connections with various topics in geometric topology and group theory -- see Calegari's book [8] for a comprehensive survey. However, scl has proved very hard to compute: Calegari [9] showed that scl is computable and has rational values in free groups, and Chen [14] generalised this to certain graphs of groups, encompassing previous results of various authors [10, 13, 15, 27, 29], but neither computability nor rationality of scl is known for closed surface groups. In [25], the author approaches the problem of understanding scl in surface groups by examining whether or not certain embeddings of surfaces are isometric for scl. A conclusion of that paper is that some of the results that one can prove for scl in free groups can only be generalised to closed surface groups if one works in a fixed relative homology class. More precisely, the author generalises a result about scl in free groups to one about the \(\ell^{1}\)-seminorm -- also called the _relative Gromov seminorm_ -- on the space \(H_{2}\left(\pi_{1}S,c\right)\), where \(S\) is a (possibly closed) surface, and \(c\) is a \(1\)-chain over \(\pi_{1}S\). The relative Gromov seminorm is a finer invariant than scl, in the sense that \(\operatorname{scl}(c)\) can be computed as an infimum of \(\left\|\alpha\right\|_{1}\) over \(\alpha\in H_{2}(\pi_{1}S,c)\) bounding \(c\) -- see Corollary 2.8. The metastrategy here is that one might be able to obtain partial information about scl in groups \(G\) with \(H_{2}(G)\neq 0\) by first understanding the relative Gromov seminorm. A pioneering result in the study of scl was the discovery of _Bavard duality_ [1], showing that the dual space of the scl-seminorm can be understood in terms of _quasimorphisms_ -- see [8, §2.5] for more details. Bavard duality has led to a vast array of work on scl, most notably yielding various spectral gap results [2, 11, 15, 17, 22], and it is natural to ask for an analogue in the context of the relative Gromov seminorm. Combining several well-known results, we show that _bounded cohomology_ provides such an analogue: **Theorem A** (Bavard duality for the relative Gromov seminorm).: _Let \(X\) be a countable CW-complex and \(\gamma:\coprod S^{1}\to X\). Given a real class \(\alpha\in H_{2}(X,\gamma;\mathbb{R})\), the relative Gromov seminorm of \(\alpha\) is given by_ \[\left\|\alpha\right\|_{1}=\sup\left\{\frac{\left\langle u,\alpha\right\rangle}{\left\|u\right\|_{\infty}}\,\middle|\,u\in H_{b}^{2}\left(X;\mathbb{R}\right)\smallsetminus\left\{0\right\}\right\}.\] ## 2. The relative Gromov seminorm The Gromov seminorm will be our measure of complexity for relative homology classes. We approach it from two points of view: first as an \(\ell^{1}\)-seminorm, then as a measure of the minimal complexity of surfaces representing a given class.
We'll show that, for rational classes, those two points of view coincide. This is well-known for absolute homology [8, SS1.2.5], and we only adapt previous arguments to the relative case. ### Homology of a space relative to a chain Let \(X\) be a topological space and \(\gamma:\coprod S^{1}\to X\) be a collection of loops in \(X\). We assume throughout that each copy of \(S^{1}\) in \(\coprod S^{1}\) is oriented. Note that, if \(\left\{\gamma_{i}:S^{1}\to X\right\}_{i}\) are the restrictions of \(\gamma\) to each summand in the disjoint union, then \(\gamma\) can be viewed as representing the integral \(1\)-chain \(c=\sum_{i}\left[\gamma_{i}\right]\in C_{1}\left(\pi_{1}X;\mathbb{Z}\right)\), where \(\left[\gamma_{i}\right]\) is the class of \(\gamma_{i}\) in \(\pi_{1}X\). Conversely, every integral \(1\)-chain \(c\) over \(\pi_{1}X\) can be represented by a map \(\gamma:\coprod S^{1}\to X\). Let \(X_{\gamma}\) denote the _mapping cylinder_ of \(\gamma\): \[X_{\gamma}=\left(X\amalg\left(\coprod S^{1}\times[0,1]\right)\right)/\sim,\] where \(\sim\) is the equivalence relation generated by \(\left(u,0\right)\sim\gamma(u)\) for \(u\in\coprod S^{1}\). There is an embedding \(\coprod S^{1}\hookrightarrow X_{\gamma}\) via \(u\mapsto\left(u,1\right)\), and we will identify \(\coprod S^{1}\) with its image under this embedding. The homology of the pair \(\left(X,\gamma\right)\) over the coefficient ring \(R=\mathbb{Z}\) or \(\mathbb{Q}\) or \(\mathbb{R}\) is defined by \[H_{*}\left(X,\gamma;R\right)=H_{*}\left(X_{\gamma},\coprod S^{1};R\right).\] Note that there is a natural isomorphism \[H_{*}\left(X,\gamma;\mathbb{R}\right)\cong H_{*}\left(X,\gamma;\mathbb{Q} \right)\otimes_{\mathbb{Q}}\mathbb{R},\] allowing us to view \(H_{*}\left(X,\gamma;\mathbb{Q}\right)\) as a subset of \(H_{*}\left(X,\gamma;\mathbb{R}\right)\). A class \(\alpha\in H_{*}\left(X,\gamma;\mathbb{Z}\right)\) will be called _integral_, while \(\alpha\in H_{*}\left(X,\gamma;\mathbb{Q}\right)\) will be called _rational_ and \(\alpha\in H_{*}\left(X,\gamma;\mathbb{R}\right)\) will be called _real_. If \(G\) is a group, \(c\in C_{1}\left(G;\mathbb{Z}\right)\) is a \(1\)-chain, and \(X\) is a \(K(G,1)\) space, then \(c\) is represented by some map \(\gamma:\coprod S^{1}\to X\), and we define \[H_{*}\left(G,c\right)=H_{*}\left(X,\gamma\right).\] This is well-defined since \(K(G,1)\) spaces are unique up to homotopy. ### Rational points in real vector spaces The difference between real and rational classes in \(H_{2}\left(X,\gamma\right)\) will play a role in the sequel, and we make a brief digression to introduce some general terminology related to this. **Definition 2.1**.: Let \(V\) be a real vector space. A _rational structure_ on \(V\) is the choice of an equivalence class of bases of \(V\), where two bases are considered equivalent if each vector of one basis has rational coordinates in the second basis. Any basis in the equivalence class is called a _rational basis_. Given a rational structure on \(V\), a _rational point_ is a vector of \(V\) that has rational coordinates in a rational basis. The set \(V_{\mathbb{Q}}\) of rational points of \(V\) is naturally a \(\mathbb{Q}\)-vector space, and satisfies \(V=V_{\mathbb{Q}}\otimes_{\mathbb{Q}}\mathbb{R}\). In fact, a rational structure on \(V\) can be defined equivalently as the choice of a \(\mathbb{Q}\)-subspace \(V_{\mathbb{Q}}\) of \(V\) such that \(V=V_{\mathbb{Q}}\otimes_{\mathbb{Q}}\mathbb{R}\). 
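To complement Example 2.2 below, here is a small additional illustration of the equivalence condition in Definition 2.1; it is ours rather than the paper's, and only spells out why different bases can give different rational structures.

```latex
% In V = \mathbb{R}^2, consider the bases B = \{e_1, e_2\} and
% B' = \{e_1, \sqrt{2}\, e_2\}. They are not equivalent, since \sqrt{2}\, e_2
% has the irrational coordinate \sqrt{2} in the basis B. The two rational
% structures they define have different sets of rational points,
\[
  V_{\mathbb{Q}}^{B} = \mathbb{Q} \oplus \mathbb{Q},
  \qquad
  V_{\mathbb{Q}}^{B'} = \mathbb{Q} \oplus \sqrt{2}\,\mathbb{Q},
\]
% although in both cases V_{\mathbb{Q}} \otimes_{\mathbb{Q}} \mathbb{R} = \mathbb{R}^2,
% as required of a rational structure.
```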
**Example 2.2**.: The space \(\mathbb{R}^{n}\) has a rational structure given by the equivalence class of the standard basis, and its set of rational points is \(\mathbb{Q}^{n}\). A _rational subspace_\(W\) of \(V\) is a (real) subspace spanned by rational points. It naturally inherits a rational structure from \(V\). If \(V\) and \(W\) are real vector spaces equipped with rational structures, a _rational linear map_\(f:V\to W\) is a linear map such that the image of each vector in a rational basis of \(V\) has rational coordinates in a rational basis of \(W\). This implies that the kernel and the image of \(f\) are rational subspaces of \(V\) and \(W\) respectively. Let \(C_{*}^{\mathbb{Q}}\) be a chain complex over \(\mathbb{Q}\) and let \(C_{*}^{\mathbb{R}}=C_{*}^{\mathbb{Q}}\otimes_{\mathbb{Q}}\mathbb{R}\). Hence, each vector space \(C_{*}^{\mathbb{R}}\) has a rational structure whose set of rational points is \(C_{n}^{\mathbb{Q}}\). The boundary map \(d_{n}:C_{n}^{\mathbb{R}}\to C_{n-1}^{\mathbb{R}}\) is rational, and the space \(Z_{n}^{\mathbb{R}}=\operatorname{Ker}d_{n}\) of \(n\)-cycles is a rational subspace. In particular, the set of rational points of \(Z_{n}^{\mathbb{R}}\) is the space \(Z_{n}^{\mathbb{Q}}\) of \(n\)-cycles for \(C_{*}^{\mathbb{Q}}\). Moreover, there is an isomorphism \[H_{n}\left(C_{*}^{\mathbb{R}}\right)\cong H_{n}\left(C_{*}^{\mathbb{Q}}\right) \otimes_{\mathbb{Q}}\mathbb{R},\] giving \(H_{n}\left(C_{*}^{\mathbb{R}}\right)\) a rational structure whose set of rational points is \(H_{n}\left(C_{*}^{\mathbb{Q}}\right)\). The following lemma says that any real cycle representing a rational homology class can be approximated by a rational cycle: **Lemma 2.3** (Rational approximation in homology).: _Let \(C_{*}^{\mathbb{Q}}\) be a chain complex over \(\mathbb{Q}\) and let \(C_{*}^{\mathbb{R}}=C_{*}^{\mathbb{Q}}\otimes_{\mathbb{Q}}\mathbb{R}\). Let \(\left\|\cdot\right\|\) be a norm on \(C_{*}^{\mathbb{R}}\). Consider a real \(n\)-cycle \(a\in Z_{n}^{\mathbb{R}}\) whose homology class \([a]\) is rational:_ \[[a]\in H_{n}\left(C_{*}^{\mathbb{Q}}\right)\leq H_{n}\left(C_{*}^{\mathbb{R}} \right).\] _Then for any \(\varepsilon>0\), there exists a rational \(n\)-cycle \(a^{\prime}\in Z_{n}^{\mathbb{Q}}\) such that_ * \([a]=[a^{\prime}]\) _in_ \(H_{n}\left(C_{*}^{\mathbb{R}}\right)\)_, and_ * \(\|a-a^{\prime}\|\leq\varepsilon\)_._ Proof.: We follow an argument of Calegari [8, Remark 1.5]. Observe that the natural projection map \[p:Z_{n}^{\mathbb{R}}\to H_{n}\left(C_{*}^{\mathbb{R}}\right),\] is rational. Hence, since \([a]\) is a rational point of \(H_{n}\left(C_{*}^{\mathbb{R}}\right)\), the affine subspace \(p^{-1}([a])\) is rational in \(Z_{n}^{\mathbb{R}}\), so its rational points are contained in \(Z_{n}^{\mathbb{Q}}\). We may assume that \(Z_{n}^{\mathbb{R}}\) is finite-dimensional by restricting to a finite-dimensional rational subspace containing \(a\); hence rational points are dense. Since the real \(n\)-cycle \(a\) lies in \(p^{-1}([a])\), there is \(a^{\prime}\in p^{-1}([a])\) rational arbitrarily close to \(a\) for \(\left\|\cdot\right\|\). This rational \(n\)-cycle \(a^{\prime}\) lies in \(Z_{n}^{\mathbb{Q}}\) and is homologous to \(a\) as wanted. ### The Gromov seminorm as an \(\ell^{1}\)-seminorm We now give a first definition of the Gromov seminorm. Recall that \(H_{*}\left(X_{\gamma};\mathbb{R}\right)\) is the homology of the singular chain complex \(C_{*}\left(X_{\gamma};\mathbb{R}\right)\). 
Each \(\mathbb{R}\)-vector space \(C_{n}\left(X_{\gamma};\mathbb{R}\right)\) can be equipped with the \(\ell^{1}\)-norm defined by \[\left\|\sum_{\sigma}\lambda_{\sigma}\sigma\right\|_{1}=\sum_{\sigma}\left| \lambda_{\sigma}\right|,\] with \(\lambda_{\sigma}\in\mathbb{R}\) for each singular \(n\)-simplex \(\sigma:\Delta^{n}\to X\). The quotient \[C_{n}\left(X_{\gamma},\coprod S^{1};\mathbb{R}\right)=C_{n}\left(X_{\gamma}; \mathbb{R}\right)/C_{n}\left(\coprod S^{1};\mathbb{R}\right)\] inherits a seminorm that we also denote by \(\left\|\cdot\right\|_{1}\), and that is defined by \[\left\|\bar{c}\right\|_{1}=\inf_{c\in\bar{c}}\left\|c\right\|_{1},\] where the infimum is over all absolute \(n\)-chains \(c\in C_{n}\left(X_{\gamma};\mathbb{R}\right)\) representing \(\bar{c}\in C_{n}\left(X_{\gamma},\coprod S^{1};\mathbb{R}\right)\). The restriction of \(\left\|\cdot\right\|_{1}\) defines a seminorm on the subspace \(Z_{n}\left(X_{\gamma},\coprod S^{1};\mathbb{R}\right)\) of relative \(n\)-cycles, which descends to a seminorm, still denoted by \(\left\|\cdot\right\|_{1}\), on homology: **Definition 2.4**.: Let \(X\) be a topological space and \(\gamma:\coprod S^{1}\to X\). The _relative Gromov seminorm_ on \(H_{n}\left(X,\gamma;\mathbb{R}\right)\) is defined by \[\left\|\alpha\right\|_{1}=\inf\left\{\left\|a\right\|_{1}\,\middle|\,a\in Z_{n }\left(X_{\gamma},\coprod S^{1};\mathbb{R}\right),\,[a]=\alpha\right\}\] **Remark 2.5**.: Given a group \(G\) and a \(1\)-chain \(c\), one can extend the definition of the Gromov seminorm to \(H_{2}\left(G,c;\mathbb{R}\right)\). Indeed, \(\left\|\cdot\right\|_{1}\) is invariant under homotopy equivalence, and in fact under any map inducing an isomorphism of fundamental groups -- this follows from Gromov's Mapping Theorem [18, Corollary 5.11] and duality between \(\ell^{1}\)-homology and bounded cohomology [18, Corollary 6.2]. The above definitions still make sense if \(\mathbb{R}\) is replaced with \(\mathbb{Q}\) everywhere. Given \(\alpha\in H_{n}\left(X,\gamma;\mathbb{Q}\right)\leq H_{n}\left(X,\gamma; \mathbb{R}\right)\), it is natural to ask whether the Gromov seminorm of \(\alpha\) as a rational class coincides with its Gromov seminorm as a real class. The following lemma gives an affirmative answer: **Lemma 2.6** (Equality of the rational and real Gromov seminorms).: _Let \(X\) be a topological space and \(\gamma:\coprod S^{1}\to X\). Given a rational class \(\alpha\in H_{n}\left(X,\gamma;\mathbb{Q}\right)\), the Gromov seminorm of \(\alpha\) (seen as a real class) can be computed over rational cycles:_ \[\left\|\alpha\right\|_{1}=\inf\left\{\left\|a\right\|_{1}\big{|}\,a\in Z_{n} \left(X_{\gamma},\coprod S^{1};\mathbb{Q}\right),\,[a]=\alpha\right\}\] Proof.: This follows from Lemma 2.3. In other words, Lemma 2.6 says that the inclusion \(H_{n}\left(X,\gamma;\mathbb{Q}\right)\hookrightarrow H_{n}\left(X,\gamma; \mathbb{R}\right)\) is an isometric embedding if \(H_{n}\left(X,\gamma;\mathbb{Q}\right)\) and \(H_{n}\left(X,\gamma;\mathbb{R}\right)\) are equipped with the rational and real Gromov seminorms respectively. ### Topological interpretation of the Gromov seminorm Analogously to (and motivated by) the topological interpretation of stable commutator length in terms of surfaces projectively bounding a given loop [8, SS2.6], we now give a topological interpretation of the Gromov seminorm for rational classes in \(H_{2}\). This extends the topological interpretation of the absolute Gromov seminorm on \(H_{2}\)[8, SS1.2.5]. 
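Before turning to the precise statement, it may help to record a small worked computation (ours, not the paper's) of the quantity that will appear in Proposition 2.7 below, namely \(-2\chi^{-}(\Sigma)/n(\Sigma)\) for admissible surfaces of low complexity.

```latex
% For a compact oriented surface \Sigma_{g,b} of genus g with b boundary
% components, \chi(\Sigma_{g,b}) = 2 - 2g - b, so for example
\[
  -2\chi^{-}(\Sigma_{1,1}) = -2\,(2 - 2 - 1) = 2,
  \qquad
  -2\chi^{-}(\Sigma_{2,1}) = -2\,(2 - 4 - 1) = 6.
\]
% Thus a once-punctured torus admissible for (X,\gamma) with n(\Sigma) = 1
% certifies the bound \|\alpha\|_1 \le 2 for the class it represents, and a
% genus-2 surface with one boundary component whose fundamental class maps to
% n(\Sigma) = 3 times a class \alpha certifies \|\alpha\|_1 \le 6/3 = 2 as well.
```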
An _admissible surface_ for \(\gamma:\coprod S^{1}\to X\) is the data of an oriented compact (possibly disconnected) surface \(\Sigma\), and of maps \(f:\Sigma\to X\) and \(\partial f:\partial\Sigma\to\coprod S^{1}\) satisfying \(f\circ\iota=\gamma\circ\partial f\), where \(\iota:\partial\Sigma\hookrightarrow\Sigma\) is the inclusion. Such an admissible surface will be denoted by \(f:\left(\Sigma,\partial\Sigma\right)\to\left(X,\gamma\right)\). It induces a map \[f_{*}:H_{*}\left(\Sigma,\partial\Sigma\right)\to H_{*}\left(X,\gamma\right).\] In particular, \(f\) represents a class \(f_{*}[\Sigma]\in H_{2}\left(X,\gamma\right)\), where \([\Sigma]\in H_{2}\left(\Sigma,\partial\Sigma\right)\) is the (integral, rational, or real) fundamental class of \(\Sigma\). The topological complexity of a compact surface \(\Sigma\) will be measured by its _reduced Euler characteristic_, defined by \(\chi^{-}(\Sigma)=\sum_{K}\min\left\{0,\chi(K)\right\}\), where the sum is over all connected components \(K\) of \(\Sigma\). **Proposition 2.7** (Topological interpretation of the Gromov seminorm).: _Let \(X\) be a topological space and \(\gamma:\coprod S^{1}\to X\). If \(\alpha\in H_{2}\left(X,\gamma;\mathbb{Q}\right)\) is a rational class, then there is an equality_ \[\left\|\alpha\right\|_{1}=\inf_{f,\Sigma}\frac{-2\chi^{-}(\Sigma)}{n(\Sigma)},\] _where the infimum is taken over all admissible surfaces \(f:\left(\Sigma,\partial\Sigma\right)\to\left(X,\gamma\right)\) such that \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\)._ Proof.: First consider an admissible surface \(f:(\Sigma,\partial\Sigma)\to(X,\gamma)\) with \(f_{*}[\Sigma]=n(\Sigma)\alpha\). Then we can estimate \[\left\|\alpha\right\|_{1}=\frac{\left\|f_{*}[\Sigma]\right\|_{1}}{n(\Sigma)}\leq\frac{\left\|[\Sigma]\right\|_{1}}{n(\Sigma)}.\] But the \(\ell^{1}\)-seminorm of \([\Sigma]\) is known as the _simplicial volume_ of \(\Sigma\), and it is equal to \(-2\chi^{-}(\Sigma)\) [18, Corollary 7.5]. This proves the inequality (\(\leq\)) of the proposition. For the reverse inequality, we follow the same line of reasoning as in Calegari's proof that scl is not greater than the Gersten boundary norm [8, Lemma 2.69], which is based on an argument of Bavard [1, Proposition 3.2]. Let \(a\in Z_{2}\left(X_{\gamma},\coprod S^{1};\mathbb{R}\right)\) be a relative \(2\)-cycle representing \(\alpha\). Let \(a_{0}\in C_{2}\left(X_{\gamma};\mathbb{R}\right)\) be a \(2\)-chain mapping to \(a\). By Lemma 2.6, we may assume that \(a_{0}\) is rational since \(\alpha\) is rational. Hence there exists \(q\in\mathbb{N}_{\geq 1}\) such that \(qa_{0}\) is integral; we can write \(qa_{0}=\sum_{j}\varepsilon_{j}\sigma_{j}\), with \(\varepsilon_{j}\in\left\{\pm 1\right\}\) and \(\sigma_{j}:\Delta^{2}\to X_{\gamma}\) a singular \(2\)-simplex. We can assume that no singular \(2\)-simplex appears twice with opposite signs in the above expression, so that \[\left\|qa_{0}\right\|_{1}=\sum_{j}\left|\varepsilon_{j}\right|.\] The fact that \(a\) is a relative \(2\)-cycle means that \(da_{0}\) has support contained in \(\coprod S^{1}\). Therefore, we can construct a partial pairing on the edges of the simplices \(\sigma_{j}\) such that paired edges have the same image in \(X_{\gamma}\), and non-paired edges all map to \(\coprod S^{1}\). We then construct a \(2\)-dimensional simplicial complex \(\Sigma\) by taking a collection \(\left\{\Delta_{j}^{2}\right\}_{j}\) of \(2\)-simplices and gluing them along this pairing.
The simplicial complex \(\Sigma\) thus constructed is a surface with boundary, and the singular simplices \(\sigma_{j}\) define a natural map \(f:\Sigma\to X_{\gamma}\). By construction, this map is an admissible surface \((\Sigma,\partial\Sigma)\to(X,\gamma)\), and \(f_{*}[\Sigma]=q\alpha\) in \(H_{2}\left(X,\gamma;\mathbb{R}\right)\). As above, \(-2\chi^{-}(\Sigma)\) is the simplicial volume \(\left\|[\Sigma]\right\|_{1}\) of \(\Sigma\) [18, Corollary 7.5]. On the other hand, our triangulation of \(\Sigma\) by the simplices \(\sigma_{j}\) gives an upper bound on the simplicial volume: \[\frac{-2\chi^{-}(\Sigma)}{q}=\frac{\left\|[\Sigma]\right\|_{1}}{q}\leq\frac{1}{q}\sum_{j}\left|\varepsilon_{j}\right|=\frac{\left\|qa_{0}\right\|_{1}}{q}=\left\|a_{0}\right\|_{1}.\] By taking the infimum over \(a_{0}\) representing \(\alpha\), we obtain the inequality (\(\geq\)). The topological interpretation of \(\left\|\cdot\right\|_{1}\) connects it to stable commutator length: **Corollary 2.8** (Gromov seminorm and scl).: _Let \(X\) be a topological space and let \(c\in C_{1}\left(\pi_{1}X;\mathbb{Z}\right)\) be an integral \(1\)-chain represented by a map \(\gamma:\coprod S^{1}\to X\). Then_ \[\operatorname{scl}_{\pi_{1}X}(c)=\frac{1}{4}\inf\left\{\left\|\alpha\right\|_{1}\big{|}\alpha\in H_{2}\left(X,\gamma;\mathbb{Q}\right),\,\partial\alpha=\left[\coprod S^{1}\right]\right\},\] _where \(\partial:H_{2}(X,\gamma;\mathbb{Q})\to H_{1}\left(\coprod S^{1};\mathbb{Q}\right)\) is the boundary map in the long exact sequence of the pair \(\left(X_{\gamma},\coprod S^{1}\right)\) (see [25, Proposition 2.9])._ Proof.: This follows from the topological interpretations of \(\left\|\cdot\right\|_{1}\) (Proposition 2.7) and scl [8, Proposition 2.74]. ### Simplicity and incompressibility for admissible surfaces We will need admissible surfaces with additional properties: **Definition 2.9**.: We say that an admissible surface \(f:(\Sigma,\partial\Sigma)\to(X,\gamma)\) is * _Incompressible_ if every simple closed curve in \(\Sigma\) with nullhomotopic image in \(X\) is nullhomotopic in \(\Sigma\), * _Simple_ if there are no two boundary components of \(\Sigma\) whose images under \(f\) represent powers of the same conjugacy class in \(\pi_{1}X\). **Lemma 2.10** (Simple incompressible admissible surfaces).: _Let \(X\) be a topological space and \(\gamma:\coprod S^{1}\to X\). Then for every rational class \(\alpha\in H_{2}\left(X,\gamma;\mathbb{Q}\right)\) and for every \(\varepsilon>0\), there is a simple, incompressible, admissible surface \(f:\left(\Sigma,\partial\Sigma\right)\to\left(X,\gamma\right)\) such that \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\), and_ \[\left\|\alpha\right\|_{1}\leq\frac{-2\chi^{-}(\Sigma)}{n(\Sigma)}\leq\left\|\alpha\right\|_{1}+\varepsilon. \tag{2.1}\] Proof.: Proposition 2.7 implies the existence of an admissible surface \(f:\left(\Sigma,\partial\Sigma\right)\to\left(X,\gamma\right)\) satisfying (2.1) with \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\). If \(f\) is not simple, then we can find two boundary components \(\partial_{1}\) and \(\partial_{2}\) of \(\Sigma\) whose images under \(f\) represent powers of the same conjugacy class in \(\pi_{1}X\). Hence we can glue a \(1\)-handle \(H\) between \(\partial_{1}\) and \(\partial_{2}\), with \(H\) mapping to a path connecting the respective basepoints of \(f\circ\partial_{1}\) and \(f\circ\partial_{2}\).
This does not change \(f_{*}[\Sigma]\) but increases \(-\chi^{-}(\Sigma)\) by \(1\). In order to keep control of \(-\chi^{-}(\Sigma)/n(\Sigma)\), we perform this operation only after replacing \(\Sigma\) by a finite cover of large degree \(N\) that preserves the number of boundary components. Hence, the quantity \(-\chi^{-}(\Sigma)/n(\Sigma)\) will only increase by \(1/N\) (this is a simple case of an asymptotic promotion argument, adapted from [8, Proposition 2.10] -- see [14, SS4] and [25, SS4.d] for similar arguments). Since this operation decreases the number of boundary components of \(\Sigma\) by \(1\), we will obtain a simple admissible surface after finitely many iterations. Now if \(f\) is compressible, then there is a simple closed curve \(\beta\) in \(\Sigma\) which is not nullhomotopic but such that \(f\circ\beta\) is. In this case, one can cut \(\Sigma\) along \(\beta\) and glue two discs onto the resulting boundary components; the map \(f\) extends onto the new discs since \(f\circ\beta\) is assumed to be nullhomotopic. This does not change \(f_{*}[\Sigma]\) and makes \(-\chi^{-}(\Sigma)\) decrease, so that (2.1) still holds, and moreover the property of \(f\) being simple is preserved. After performing this operation a finite number of times, we therefore obtain that \(f\) is simple and incompressible. ## 3. Bavard duality for the relative Gromov seminorm Bavard [1] proved that the dual space of the scl-seminorm can be interpreted in terms of quasimorphisms. This can be thought of as a kind of \(\ell^{1}\)-\(\ell^{\infty}\) duality, and has had a wide range of applications in giving lower bounds for scl [2, 11, 15, 17, 22]. We refer the reader to Calegari's book [8, SS2.5] for more details on classical Bavard duality. Our aim is to obtain an analogous statement for the relative Gromov seminorm on \(H_{2}\left(X,\gamma\right)\), where \(\gamma:\coprod S^{1}\to X\) is a collection of loops in a topological space \(X\). It is a classical fact that \(\ell^{1}\)-homology is dual to _bounded cohomology_, which is a variation of singular cohomology where cochains are assumed to be bounded maps from the set of singular simplices to \(\mathbb{R}\) -- see Frigerio's book [18] for a detailed account. The natural dual of \(H_{2}\left(X,\gamma\right)\) should therefore be \(H_{b}^{2}\left(X,\gamma\right)=H_{b}^{2}\left(X_{\gamma},\coprod S^{1}\right)\). (See [18, SS5.7] for relative bounded cohomology.) But since \(\pi_{1}S^{1}=\mathbb{Z}\) is amenable, \(H_{b}^{*}\left(\coprod S^{1}\right)\) vanishes and the long exact sequence of \(\left(X_{\gamma},\coprod S^{1}\right)\) gives an isomorphism \[H_{b}^{2}\left(X,\gamma\right)\cong H_{b}^{2}\left(X\right).\] This isomorphism, together with the _Kronecker product_\(H_{b}^{2}\left(X,\gamma\right)\times H_{2}\left(X,\gamma\right)\to\mathbb{R}\) -- which is nothing but the natural pairing between homology and cohomology [18, Chapter 6] -- defines a pairing \[\left\langle\cdot,\cdot\right\rangle:H_{b}^{2}\left(X\right)\times H_{2}\left( X,\gamma\right)\to\mathbb{R}.\] It turns out that the \(\ell^{\infty}\)-norm is dual to the \(\ell^{1}\)-seminorm under this pairing: **Theorem A** (Bavard duality for the relative Gromov seminorm).: _Let \(X\) be a countable CW-complex and \(\gamma:\coprod S^{1}\to X\). 
Given a real class \(\alpha\in H_{2}(X,\gamma;\mathbb{R})\), the relative Gromov seminorm of \(\alpha\) is given by_ \[\left\|\alpha\right\|_{1}=\sup\left\{\frac{\left\langle u,\alpha\right\rangle}{ \left\|u\right\|_{\infty}}\,\right|\,u\in H_{b}^{2}\left(X;\mathbb{R}\right) \smallsetminus\left\{0\right\}\right\}.\] Proof.: Duality between \(\ell^{1}\)-homology and bounded cohomology [18, Lemma 6.1] implies that \[\left\|\alpha\right\|_{1}=\sup\left\{\frac{\left\langle u,\alpha\right\rangle} {\left\|u\right\|_{\infty}}\,\right|\,u\in H_{b}^{2}\left(X,\gamma;\mathbb{R} \right),\,\left\|u\right\|_{\infty}\neq 0\right\}.\] In addition, a result proved independently by Bucher et al. [5, Theorem 1.2] and by Kim and Kuessner [24, Theorem 1.2] implies that the isomorphism \(H_{b}^{2}\left(X,\gamma\right)\cong H_{b}^{2}\left(X\right)\) is isometric for the \(\ell^{\infty}\)-norm. Together with the fact that \(\left\|u\right\|_{\infty}=0\) only if \(u=0\) in \(H_{b}^{2}(X)\)[18, Corollary 6.7], this implies the result. We'll say that a class \(u\in H_{b}^{2}\left(X;\mathbb{R}\right)\) is _extremal_ for \(\alpha\in H_{2}\left(X,\gamma;\mathbb{R}\right)\) if it realises the supremum in Theorem A. Note that extremal classes exist for all \(\alpha\in H_{2}\left(X,\gamma;\mathbb{R}\right)\) by the Hahn-Banach Theorem. ## 4. An algebraic interpretation a la Hopf We now prove a relative version of the Hopf formula, and explain how this can be used to provide a purely algebraic interpretation of Theorem A. We focus on the special case of the homology of a group relative to a single element (rather than to an integral \(1\)-chain). An analogous Hopf formula could be given in the general case, but the notation would become cumbersome. ### A relative Hopf formula Recall that the classical Hopf formula computes \(H_{2}(G)\) when \(G\) is a group given by a presentation (see [4, Theorem II.5.3]): **Theorem 4.1** (Hopf formula [23]).: _Let \(F\) be a free group, \(R\trianglelefteq F\), and \(G=F/R\). Then there is an isomorphism_ \[H_{2}\left(G;\mathbb{Z}\right)\cong R\cap[F,F]/[F,R].\] With the same setup as in Theorem 4.1, our goal is to compute \(H_{2}\left(G,w;\mathbb{Z}\right)\) for \(w\in G\). This is provided by the following (which recovers Theorem 4.1 when \(w=1\)). Our proof is topological and inspired by [8, SS1.1.6] and [26]. **Theorem 4.2** (Relative Hopf formula).: _Let \(F\) be a free group, \(R\trianglelefteq F\), and \(G=F/R\). Let \(w\in G\) and let \(\bar{w}\in F\) be a preimage of \(w\) under \(F\xrightarrow{p}F/R\). Then there is an isomorphism_ \[H_{2}\left(G,w;\mathbb{Z}\right)\cong\left\langle\bar{w}\right\rangle R\cap[F,F]/[F,R].\] Proof.: Let \(X\) be a \(K(G,1)\) and let \(\gamma:S^{1}\to X\) represent \(w\). Then \(H_{2}\left(G,w\right)=H_{2}\left(X,\gamma\right)\) by definition (see SS2.a), and we construct a morphism \[\Phi:\left\langle\bar{w}\right\rangle R\cap[F,F]\to H_{2}\left(X,\gamma; \mathbb{Z}\right)\] as follows. Let \(\bar{g}\in\left\langle\bar{w}\right\rangle R\cap[F,F]\). Since \(\bar{g}\in[F,F]\), one can write \[\bar{g}=\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{ k}\right],\] with \(\bar{a}_{i},\bar{b}_{i}\in F\). Set \(a_{i}=p\left(\bar{a}_{i}\right)\in G\), \(b_{i}=p\left(\bar{b}_{i}\right)\in G\) and \(g=p\left(\bar{g}\right)\in G\). The assumption that \(\bar{g}\in\left\langle\bar{w}\right\rangle R\) in \(F\) means that \(g\in\left\langle w\right\rangle\) in \(G\). 
Let \(\Sigma_{k,1}\) be an oriented genus \(k\) surface with one boundary component. The surface \(\Sigma_{k,1}\) has a cell structure with one \(0\)-cell \(\bullet\), \((2k+1)\)\(1\)-cells with labels \(\alpha_{1},\beta_{1},\ldots,\alpha_{k},\beta_{k},\delta\), and one \(2\)-cell glued along the word \(\delta^{-1}\left[\alpha_{1},\beta_{1}\right]\cdots\left[\alpha_{k},\beta_{k}\right]\). See Figure 1. Define a map \(f^{(1)}:\Sigma_{k,1}^{(1)}\to X\) on the \(1\)-skeleton of \(\Sigma_{k,1}\) by sending \(\bullet\) to a basepoint \(x_{0}\) in \(X\), each \(1\)-cell \(\alpha_{i}\) to a loop representing \(a_{i}\) in \(\pi_{1}\left(X,x_{0}\right)=G\), each \(\beta_{i}\) to a loop representing \(b_{i}\), and \(\delta\) to some power of \(\gamma\) representing \(g\in\langle w\rangle\). Since \(g=[a_{1},b_{1}]\cdots[a_{k},b_{k}]\) in \(G=\pi_{1}\left(X,x_{0}\right)\), the map \(f^{(1)}:\Sigma_{k,1}^{(1)}\to X\) can be extended over the \(2\)-cell of \(\Sigma_{k,1}\) to \(f:\Sigma_{k,1}\to X\). Observe that \(f\) sends \(\partial\Sigma_{k,1}=\delta\) to a power of \(\gamma\); therefore, \(f\) is an admissible surface \[f:\left(\Sigma_{k,1},\partial\Sigma_{k,1}\right)\to\left(X,\gamma\right).\] Now we define \(\Phi(\bar{g})\) by \[\Phi(\bar{g})=f_{*}\left[\Sigma_{k,1}\right]\in H_{2}\left(X,\gamma;\mathbb{Z }\right),\] where \(\left[\Sigma_{k,1}\right]\in H_{2}\left(\Sigma_{k,1},\partial\Sigma_{k,1}; \mathbb{Z}\right)\) is the integral fundamental class of \(\Sigma_{k,1}\). The construction of \(\Phi\left(\bar{g}\right)\) explained above depends _a priori_ on the choice of an expression \(\bar{g}=\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{ k}\right]\), which might not be unique. For now, we see \(\Phi\) as a map defined on the monoid \(\Theta\) of all formal expressions \(\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{k}\right]\) whose image in \(F\) lies in \(\langle\bar{w}\rangle\,R\), and we'll show that this induces a well-defined map on \(\langle\bar{w}\rangle\,R\cap[F,F]\). **Claim**.: The map \(\Phi:\Theta\to H_{2}\left(X,\gamma;\mathbb{Z}\right)\) is a monoid homomorphism. Proof of the claim.: Consider two formal expressions \(\theta=\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{ k}\right]\) and \(\theta^{\prime}=\left[\bar{a}^{\prime}_{1},\bar{b}^{\prime}_{1}\right]\cdots \left[\bar{a}^{\prime}_{\ell},\bar{b}^{\prime}_{\ell}\right]\) in \(\Theta\). As explained above, this gives rise to admissible surfaces \(f:\left(\Sigma_{k,1},\partial\Sigma_{k,1}\right)\to\left(X,\gamma\right)\) and \(f^{\prime}:\left(\Sigma_{\ell,1},\partial\Sigma_{\ell,1}\right)\to\left(X,\gamma\right)\), and we have \(\Phi(\theta)=f_{*}\left[\Sigma_{k,1}\right]\) and \(\Phi\left(\theta^{\prime}\right)=f_{*}^{\prime}\left[\Sigma_{\ell,1}\right]\). Consider the wedge sum \[\Sigma_{\vee}=\Sigma_{k,1}\vee\Sigma_{\ell,1}.\] The maps \(f\) and \(f^{\prime}\) naturally induce \(f_{\vee}:\Sigma_{\vee}\to X\), and the fundamental classes of \(\Sigma_{k,1}\) and \(\Sigma_{\ell,1}\) sum to a class \(\left[\Sigma_{\vee}\right]\in H_{2}\left(\Sigma_{\vee},\partial\Sigma_{\vee}; \mathbb{Z}\right)\), where we define \(\partial\Sigma_{\vee}=\partial\Sigma_{k,1}\vee\partial\Sigma_{\ell,1}\subseteq \Sigma_{\vee}\). 
Hence, \[\Phi\left(\theta\right)+\Phi\left(\theta^{\prime}\right)=\left(f_{\vee}\right)_{*}\left[\Sigma_{\vee}\right].\] Now there is a homotopy equivalence \(\left(\Sigma_{\vee},\partial\Sigma_{\vee}\right)\simeq\left(\Sigma_{k+\ell,1},\partial\Sigma_{k+\ell,1}\right)\), as illustrated in Figure 2. This yields an admissible surface \(\left(\Sigma_{k+\ell,1},\partial\Sigma_{k+\ell,1}\right)\to\left(X,\gamma\right)\) whose image in \(H_{2}\left(X,\gamma;\mathbb{Z}\right)\) is \(\Phi\left(\theta\right)+\Phi\left(\theta^{\prime}\right)\). Figure 1. The cell structure on \(\Sigma_{k,1}\) (with \(k=2\)). But note that this map is exactly the one obtained when the above construction is applied to \(\theta\theta^{\prime}\). This proves that \(\Phi\left(\theta\right)+\Phi\left(\theta^{\prime}\right)=\Phi\left(\theta\theta^{\prime}\right)\), so \(\Phi\) is a monoid homomorphism. Using the claim, we now prove that \(\Phi\) induces a well-defined map on \(\left\langle\bar{w}\right\rangle R\cap\left[F,F\right]\). Consider two formal expressions \(\theta,\theta^{\prime}\in\Theta\) defining the same element of \(\left\langle\bar{w}\right\rangle R\cap\left[F,F\right]\). Write \(\theta=\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{k}\right]\), and consider its formal inverse \(\theta^{-1}=\left[\bar{b}_{k},\bar{a}_{k}\right]\cdots\left[\bar{b}_{1},\bar{a}_{1}\right]\in\Theta\) (which, despite our choice of notation, is not an inverse of \(\theta\) in the monoid \(\Theta\)!). Then the formal expression \(\theta^{-1}\theta^{\prime}\) represents the trivial element of \(F\). This means that the above construction could still be performed if the \(K(G,1)\) space \(X\) were replaced by a \(K(F,1)\) space \(X_{F}\). In other words, the admissible surface \(f:\left(\Sigma_{k,1},\partial\Sigma_{k,1}\right)\rightarrow\left(X,\gamma\right)\) associated to \(\theta^{-1}\theta^{\prime}\) factors through the map \(X_{F}\to X\) induced by \(F\to G\). Moreover, the image of \(\partial\Sigma_{k,1}\) is nullhomotopic in \(X_{F}\), from which it follows that \[f_{*}\left[\Sigma_{k,1}\right]\in H_{2}\left(X_{F};\mathbb{Z}\right)\leq H_{2}\left(X_{F},\bar{\gamma};\mathbb{Z}\right),\] where \(\bar{\gamma}:S^{1}\to X_{F}\) is a representative of \(\bar{w}\in F\). But \(H_{2}\left(X_{F};\mathbb{Z}\right)=H_{2}\left(F;\mathbb{Z}\right)=0\) since \(F\) is a free group, so \(\left[\Sigma_{k,1}\right]\) maps to zero in \(H_{2}\left(X_{F},\bar{\gamma};\mathbb{Z}\right)\), and hence also in \(H_{2}\left(X,\gamma;\mathbb{Z}\right)\). Therefore, it follows from the claim that \[0=\Phi\left(\theta^{-1}\theta^{\prime}\right)=\Phi\left(\theta^{-1}\right)+\Phi\left(\theta^{\prime}\right),\] and it is clear from the construction that \(\Phi\left(\theta^{-1}\right)=-\Phi\left(\theta\right)\), so \(\Phi\left(\theta\right)=\Phi\left(\theta^{\prime}\right)\) as wanted. This proves that \(\Phi\) induces a well-defined map \[\Phi:\left\langle\bar{w}\right\rangle R\cap\left[F,F\right]\to H_{2}\left(X,\gamma;\mathbb{Z}\right),\] which is a group homomorphism by the claim. The homomorphism \(\Phi\) is surjective since every element of \(H_{2}\left(X,\gamma;\mathbb{Z}\right)\) can be represented by a map from an orientable compact connected surface with one boundary component -- this follows essentially from Lemma 2.10.
It remains to show the following: **Claim**.: \(\operatorname{Ker}\Phi=\left[F,R\right]\)_._ Proof of the claim.: To prove that \(\left[F,R\right]\subseteq\operatorname{Ker}\Phi\), it suffices to show that for every \(\bar{g}\in F\) and \(\bar{r}\in R\), we have \(\left[\bar{g},\bar{r}\right]\in\operatorname{Ker}\Phi\). But \(\Phi\left(\left[\bar{g},\bar{r}\right]\right)=f_{*}\left[\Sigma_{1,1}\right]\), where \(\Sigma_{1,1}\) is a torus with one boundary component, with equator mapping to \(\bar{g}\) and meridian mapping to \(\bar{r}\). Since the image of \(\bar{r}\) in \(G\) is trivial, we may cut \(\Sigma_{1,1}\) along the meridian and fill in the two resulting discs, obtaining a map \(f_{1}:\left(D^{2},\partial D^{2}\right)\rightarrow\left(X,\gamma\right)\). We can glue \(f_{1}\) to itself with reversed orientation along \(\partial D^{2}\) to obtain \(f_{2}:S^{2}\to X\). But \(X\) is assumed to be a \(K(G,1)\), so it is aspherical, and \(f_{2}\) is nullhomotopic. Therefore, \(f_{1}\) is also nullhomotopic, and \(f_{*}\left[\Sigma_{1,1}\right]=\left(f_{1}\right)_{*}\left[D^{2}\right]=0\). This proves that \(\Phi\left(\left[\bar{g},\bar{r}\right]\right)=0\), so \(\left[F,R\right]\subseteq\operatorname{Ker}\Phi\). Conversely, let \(\bar{g}\in\operatorname{Ker}\Phi\). Let \(f:\left(\Sigma,\partial\Sigma\right)\rightarrow\left(X,\gamma\right)\) be an admissible surface associated to an expression of \(\bar{g}\) as a product of commutators by the above construction, with \(\Sigma=\Sigma_{k,1}\). The assumption that \(\bar{g}\in\operatorname{Ker}\Phi\) means that \(f_{*}\left[\Sigma\right]=0\), so the map \(f_{*}:H_{2}\left(\Sigma,\partial\Sigma;\mathbb{Z}\right)\to H_{2} \left(X,\gamma;\mathbb{Z}\right)\) is zero. Long exact sequences of pairs give a commutative diagram with exact rows (with omitted \(\mathbb{Z}\)-coefficients): If \(f_{*}:H_{1}\left(\partial\Sigma\right)\to H_{1}\left(S^{1}\right)\) were nonzero, then since \(H_{1}\left(\partial\Sigma\right)\cong H_{1}\left(S^{1}\right)\cong\mathbb{Z}\), the map \(f_{*}:H_{1}\left(\partial\Sigma\right)\to H_{1}\left(S^{1}\right)\) would in fact be injective. But \(f_{*}\circ\partial=0\), so the map \(\partial:H_{2}\left(\Sigma,\partial\Sigma\right)\to H_{1}\left(\partial\Sigma\right)\) would be zero, implying by exactness that \(H_{1}\left(\partial\Sigma\right)=0\) since \(H_{1}\left(\partial\Sigma\right)\to H_{1}\left(\Sigma\right)\) is zero. This is a contradiction, and therefore the map \[f_{*}:H_{1}\left(\partial\Sigma\right)\to H_{1}\left(S^{1}\right)\] is zero. Therefore, the restriction of \(f\) to \(\partial\Sigma\) is nullhomotopic, which implies in particular that the image of \(\bar{g}\) in \(G\) is trivial, i.e. \(\bar{g}\in R\cap[F,F]\). Therefore, we are reduced to the setting of the classical Hopf formula (Theorem 4.1), i.e. \(\bar{g}\in R\cap[F,F]\) and \(\Phi\left(\bar{g}\right)=1\) in \(H_{2}\left(X;\mathbb{Z}\right)\). Since \(\Phi\) coincides with the morphism giving the classical Hopf formula (see for instance [26]), it follows that \(\bar{g}\in[F,R]\). We have constructed a surjective group homomorphism \(\Phi:\left\langle\bar{w}\right\rangle R\cap[F,F]\to H_{2}\left(X,\gamma; \mathbb{Z}\right)\) with \(\operatorname{Ker}\Phi=[F,R]\), so \(\Phi\) induces the desired isomorphism. **Remark 4.3**.: In the proof of Theorem 4.2, the assumption that \(X\) is a \(K(G,1)\) is essential. 
This is why -- contrary to the rest of this paper -- we state the theorem in terms of the relative homology of groups, rather than topological spaces. ### Bavard duality through the lens of the Hopf formula We next explain how to obtain an algebraic restatement of Theorem A using the relative Hopf formula (Theorem 4.2). We denote by \(Z_{b}^{2}\left(G;\mathbb{R}\right)\) the space of _bounded \(2\)-cocycles_ on \(G\), i.e. bounded maps \(\psi:G^{2}\rightarrow\mathbb{R}\) such that \[\psi\left(g_{2},g_{3}\right)-\psi\left(g_{1}g_{2},g_{3}\right)+\psi\left(g_{1 },g_{2}g_{3}\right)-\psi\left(g_{1},g_{2}\right)=0\] for all \(g_{1},g_{2},g_{3}\in G\). **Theorem B** (Bavard duality via the Hopf formula).: _Let \(F\) be a free group, \(R\trianglelefteq F\), and \(G=F/R\). Let \(w\in G\) and let \(\bar{w}\in F\) be a preimage of \(w\) under \(F\xrightarrow{p}G\)._ _Let \(\alpha\in H_{2}\left(G,w;\mathbb{Z}\right)\) and let_ \[\left[\bar{a}_{1},\bar{b}_{1}\right]\cdots\left[\bar{a}_{k},\bar{b}_{k}\right] \in\left\langle\bar{w}\right\rangle R\cap[F,F],\] _be a representative of \(\Psi(\alpha)\), where \(\Psi:H_{2}\left(G,w;\mathbb{Z}\right)\xrightarrow{\cong}\left\langle\bar{w} \right\rangle R\cap[F,F]\left/[F,R\right]\) is the isomorphism of Theorem 4.2. Set \(a_{i}=p\left(\bar{a}_{i}\right)\in G\) and \(b_{i}=p\left(\bar{b}_{i}\right)\in G\)._ _Then_ \[\left\|\iota\alpha\right\|_{1} =\sup\left\{\frac{1}{\left\|\psi\right\|_{\infty}}\left(\psi \left(a_{1},b_{1}\right)+\psi\left(a_{1}b_{1},a_{1}^{-1}\right)+\psi\left(a_{1 }b_{1}a_{1}^{-1},b_{1}^{-1}\right)\right.\right.\] \[\quad+\psi\left(\left[a_{1},b_{1}\right],a_{2}\right)+\psi\left( \left[a_{1},b_{1}\right]a_{2},b_{2}\right)+\psi\left(\left[a_{1},b_{1}\right] a_{2}b_{2},a_{2}^{-1}\right)+\cdots\] \[\quad\quad\left.\left.+\psi\left(\left[a_{1},b_{1}\right]\cdots \left[a_{k-1},b_{k-1}\right]a_{k}b_{k}a_{k}^{-1},b_{k}^{-1}\right)\right) \bigg{|}\psi\in Z_{b}^{2}\left(G;\mathbb{R}\right)\smallsetminus\{0\}\right\},\] _where \(\iota:H_{2}\left(G,w;\mathbb{Z}\right)\to H_{2}\left(G,w;\mathbb{Q}\right)\) is the change-of-coefficients map._ Proof.: Let \(X\) be a \(K(G,1)\) and let \(\gamma:S^{1}\to X\) represent \(w\). Recall that the isomorphism \(\Psi:H_{2}\left(G,w;\mathbb{Z}\right)\xrightarrow{\cong}\left\langle\bar{w} \right\rangle R\cap[F,F]\left/[F,R\right]\) was constructed in the proof of Theorem 4.2 by starting with a product of \(k\) commutators in \(\left\langle\bar{w}\right\rangle R\cap[F,F]\), labelling the edges in a cellular decomposition of the compact surface \(\Sigma_{k,1}\) with those commutators, mapping \(\Sigma_{k,1}\) to \(X\) and considering the image of the fundamental class \([\Sigma_{k,1}]\) in \(H_{2}\left(X,\gamma\right)=H_{2}\left(G,w\right)\). We will now be a bit more specific about the choice of the map \(\Sigma_{k,1}\to X\). We start by picking singular simplices \(\sigma_{g_{1},\ldots,g_{n}}:\Delta^{n}\to X\) for each \(n\)-uple \(\left(g_{1},\ldots,g_{n}\right)\in G^{n}\), in such a way that the restriction of \(\sigma_{g_{1},\ldots,g_{n}}\) to its \(i\)-th face is \(\sigma_{g_{1},\ldots,g_{i}g_{i+1},\ldots,g_{n}}\) (respectively \(\sigma_{g_{2},\ldots,g_{n}}\) for \(i=0\) and \(\sigma_{g_{1},\ldots,g_{n-1}}\) for \(i=n\)). Hence, the map \(\left(g_{1},\ldots,g_{n}\right)\mapsto\sigma_{g_{1},\ldots,g_{n}}\) defines a chain homotopy equivalence between the bar complex of \(G\) and the singular chain complex of \(X\)[4, SSI.4]. Take a one-vertex triangulation of \(\Sigma_{k,1}\) as in Figure 3. 
We can construct the map \(f:\Sigma_{k,1}\to X\) explicitly by sending each triangle of \(\Sigma_{k,1}\) to the correct singular \(2\)-simplex among the \(\sigma_{g_{1},g_{2}}\)'s. We obtain in particular that \[\alpha=f_{*}\left[\Sigma_{k,1}\right]=\left[\sigma_{a_{1},b_{1}}+ \sigma_{a_{1}b_{1},a_{1}^{-1}}+\sigma_{a_{1}b_{1}a_{1}^{-1},b_{1}^{-1}}+\sigma _{[a_{1},b_{1}],a_{2}}\right.\\ \left.+\sigma_{[a_{1},b_{1}]a_{2},b_{2}}+\cdots+\sigma_{[a_{1},b _{1}]\cdots[a_{k-1},b_{k-1}]a_{k}b_{k}a_{k}^{-1},b_{k}^{-1}}\right]\in H_{2} \left(X,\gamma;\mathbb{Z}\right). \tag{4.1}\] Now Bavard duality for \(H_{2}\left(X,\gamma\right)\) (Theorem A) gives \[\left\|\iota\alpha\right\|_{1}=\sup\left\{\frac{\left\langle u,\alpha\right\rangle }{\left\|u\right\|_{\infty}}\,\right|u\in H_{b}^{2}\left(X;\mathbb{R}\right) \smallsetminus\left\{0\right\}\right\}.\] Pick some \(u\in H_{b}^{2}\left(X;\mathbb{R}\right)\cong H_{b}^{2}\left(G;\mathbb{R}\right)\) and let \(\psi\in Z_{b}^{2}\left(G;\mathbb{R}\right)\) be a \(2\)-cocycle such that \(u=\left[\psi\right]\). The chain homotopy equivalence \(\left(g_{1},\ldots,g_{n}\right)\mapsto\sigma_{g_{1},\ldots,g_{n}}\) tells one how to evaluate \(\psi\) on singular (relative) \(2\)-cycles spanned by the \(\sigma_{g_{1},g_{2}}\)'s in \(C_{2}\left(X;\mathbb{R}\right)\): there is an equality \[\left\langle\psi,\sigma_{g_{1},g_{2}}\right\rangle=\psi\left(g_{1},g_{2}\right).\] Therefore (4.1) implies that the Kronecker product \(\left\langle u,\alpha\right\rangle\) is given by \[\left\langle u,\alpha\right\rangle =\psi\left(a_{1},b_{1}\right)+\psi\left(a_{1}b_{1},a_{1}^{-1} \right)+\psi\left(a_{1}b_{1}a_{1}^{-1},b_{1}^{-1}\right)+\psi\left(\left[a_{1 },b_{1}\right],a_{2}\right)\] \[+\psi\left(\left[a_{1},b_{1}\right]a_{2},b_{2}\right)+\cdots+\psi \left(\left[a_{1},b_{1}\right]\cdots\left[a_{k-1},b_{k-1}\right]a_{k}b_{k}a_{k }^{-1},b_{k}^{-1}\right). \tag{4.2}\] The result follows, remembering that \(\left\|u\right\|_{\infty}=\inf\left\{\left\|\psi\right\|_{\infty}\left|\, \left[\psi\right]=u\right\}\). **Remark 4.4**.: Let \(\phi:G\rightarrow\mathbb{R}\) be a quasimorphism and consider \[\delta\phi:\left(g_{1},g_{2}\right)\in G^{2}\mapsto\phi\left(g_{1}\right)-\phi \left(g_{1}g_{2}\right)+\phi\left(g_{2}\right)\in\mathbb{R}.\] It is easy to check that \(\delta\phi\in Z_{b}^{2}(G;\mathbb{R})\). Given an integral class \(\alpha\in H_{2}\left(G,w;\mathbb{Z}\right)\) with \(\partial\alpha=n\left[S^{1}\right]\), one can use the formula of Theorem B to obtain \[\left\langle\left[\delta\phi\right],\alpha\right\rangle=n\cdot\phi\left(w \right).\] Using the lower bound on \(\left\|\cdot\right\|_{1}\) given by Theorem B together with the connection between \(\operatorname{scl}\) and \(\left\|\cdot\right\|_{1}\) (Corollary 2.8), it follows that \[\operatorname{scl}(w)\geq\frac{1}{2}\sup_{\phi}\frac{\phi(w)}{2\left\|\delta \phi\right\|_{\infty}}.\] On the other hand, classical Bavard duality says that \(\operatorname{scl}(w)\geq\sup_{\phi}\frac{\phi(w)}{2\left\|\delta\phi\right\|_ {\infty}}\)[8, SS2.5]. Feeding quasimorphisms into Theorem B has yielded a non-optimal lower bound on \(\operatorname{scl}\). 
The reason for this is the difference between a cocycle \(\psi\in Z_{b}^{2}\left(G;\mathbb{R}\right)\) and its class \(\left[\psi\right]\in H_{b}^{2}\left(G;\mathbb{R}\right)\): given \(\phi\in Q(G)\), there are inequalities [8, Lemma 2.58] \[\frac{1}{2}\left\|\delta\phi\right\|_{\infty}\leq\left\|\left[\delta\phi \right]\right\|_{\infty}\leq\left\|\delta\phi\right\|_{\infty},\] and \(\left\|\left[\delta\phi\right]\right\|_{\infty}\) might not be realised by the coboundary of a quasimorphism. Figure 3. One-vertex triangulation of \(\Sigma_{k,1}\). ## 5. The bounded Euler class Calegari [7] exhibited a connection between the rotation quasimorphism, area, and stable commutator length in fundamental groups of compact hyperbolic surfaces with non-empty boundary. We explain how this generalises to a statement about the relative Gromov seminorm in possibly closed hyperbolic surface groups. ### Bounded Euler class of a circle action A choice of hyperbolic structure on a surface \(S\) defines an action of \(\pi_{1}S\) on the hyperbolic plane \(\mathbb{H}^{2}\). This induces an action on the boundary of \(\mathbb{H}^{2}\), which is homeomorphic to the circle \(S^{1}\). In general, the dynamics of an action of a group \(G\) on the circle is encoded by the bounded Euler class, which is a class in \(H_{b}^{2}\left(G\right)\) that was introduced by Ghys [20] as a generalisation of Poincare's rotation number. The bounded Euler class has several equivalent definitions [6], and for our purpose, it will be helpful to define it from the point of view of the orientation cocycle. Consider the action of the group \(H=\operatorname{Homeo}^{+}\left(S^{1}\right)\) of orientation-preserving homeomorphisms of the circle on \(S^{1}\). The _orientation cocycle_\(\operatorname{Or}\in C_{b}^{2}\left(H\curvearrowright S^{1};\mathbb{R}\right)\) is a _bounded homogeneous \(H\)-cochain_ -- i.e. a bounded map \(\operatorname{Or}:\left(S^{1}\right)^{3}\to\mathbb{R}\) that is invariant under the diagonal action of \(H\) on \(\left(S^{1}\right)^{3}\), see [6, SS3.1] -- given by \[\operatorname{Or}\left(x,y,z\right)=\begin{cases}+1&\text{if the triple $\left(x,y,z\right)\in\left(S^{1}\right)^{3}$ is positively oriented}\\ -1&\text{if the triple $\left(x,y,z\right)\in\left(S^{1}\right)^{3}$ is negatively oriented}\\ 0&\text{if the triple $\left(x,y,z\right)\in\left(S^{1}\right)^{3}$ is degenerate} \end{cases}.\] This turns out to be a cocycle, so we can consider its class \([\operatorname{Or}]\in H_{b}^{2}\left(H\curvearrowright S^{1};\mathbb{R}\right)\). **Definition 5.1**.: The _universal real bounded Euler class for circle actions_ is \[\operatorname{eu}_{b}^{\mathbb{R}}=-\frac{1}{2}\eta^{*}\left[\operatorname{Or }\right]\in H_{b}^{2}\left(\operatorname{Homeo}^{+}\left(S^{1}\right); \mathbb{R}\right),\] where \(\eta^{*}:H_{b}^{2}\left(H\curvearrowright S^{1}\right)\to H_{b}^{2}\left( \operatorname{Homeo}^{+}\left(S^{1}\right)\right)\) is the map given by the choice of an arbitrary basepoint in \(S^{1}\). Given an action \(\rho:G\to\operatorname{Homeo}^{+}\left(S^{1}\right)\) of a group on the circle, the (real) _bounded Euler class_ of the action is \[\operatorname{eu}_{b}^{\mathbb{R}}(\rho)=\rho^{*}\operatorname{eu}_{b}^{ \mathbb{R}}\in H_{b}^{2}\left(G;\mathbb{R}\right).\] This measures how far \(\rho\) is from being a rotation action on \(S^{1}\)[18, Corollary 10.27]. The Milnor-Wood inequality [18, Theorem 12.15] implies that \(\left\|\operatorname{eu}_{b}^{\mathbb{R}}(\rho)\right\|_{\infty}\leq\frac{1}{2}\). 
See [6, 21] for more details on the bounded Euler class. ### Area of a relative \(2\)-class In [7], Calegari defines a notion of area for a homologically trivial \(\gamma:\coprod S^{1}\to S\) in a hyperbolic surface \(S\) with non-empty boundary. In his definition, it is crucial that \(S\) has non-empty boundary because then \(H_{2}(S)=0\), so the map \(\partial:H_{2}\left(S,\gamma\right)\to H_{1}\left(\coprod S^{1}\right)\) is injective and there is a unique class \(\alpha\in\partial^{-1}\left(\coprod S^{1}\right)\). We now explain how to generalise Calegari's notion of area to the case where \(S\) is closed by defining the area of a class in \(H_{2}(S,\gamma)\). Let \(S\) be a hyperbolic surface with (possibly empty) geodesic boundary. Let \(\gamma:\coprod S^{1}\to X\) be a collection of geodesic loops in \(S\), and let \(\alpha\in H_{2}\left(S,\gamma;\mathbb{R}\right)\). By definition, \(H_{2}(S,\gamma)=H_{2}\left(S_{\gamma},\coprod S^{1}\right)\). The mapping cylinder \(S_{\gamma}\) has no geometric structure allowing us to measure areas, but there is a map of pairs \(\left(S_{\gamma},\coprod S^{1}\right)\to\left(S,\operatorname{Im}\gamma\right)\) defined by collapsing the cylinder. This induces a morphism \[H_{2}\left(S,\gamma;\mathbb{R}\right)\to H_{2}\left(S,\operatorname{Im}\gamma; \mathbb{R}\right),\] and we'll measure the area of \(\alpha\) in the image. We pick a cell structure on \(S\) such that * The \(0\)-skeleton of \(S\) contains all multiple points of \(\gamma\), * The \(1\)-skeleton of \(S\) contains \(\operatorname{Im}\gamma\), and * Each \(2\)-cell is positively oriented (for the orientation inherited by \(S\)). There is a cellular relative \(2\)-cycle \(c\) representing the image of \(\alpha\) in \(H_{2}\left(S,\operatorname{Im}\gamma;\mathbb{R}\right)\), and \(c\) is in fact unique as \(C_{3}^{\operatorname{cell}}\left(S\right)=0\) and \(C_{2}^{\operatorname{cell}}\left(\operatorname{Im}\gamma\right)=0\). **Definition 5.2**.: Let \(\gamma:\coprod S^{1}\to S\) be a collection of geodesic loops in a hyperbolic surface \(S\). Given \(\alpha\in H_{2}\left(S,\gamma;\mathbb{R}\right)\), the _area_ of \(\alpha\) is defined by \[\operatorname{area}(\alpha)=\sum_{\sigma}\lambda_{\sigma}\operatorname{area} \left(\sigma\right),\] where \(\sum_{\sigma}\lambda_{\sigma}\sigma\in Z_{2}^{\operatorname{cell}}\left(S, \operatorname{Im}\gamma;\mathbb{R}\right)\) (with \(\lambda_{\sigma}\in\mathbb{R}\) for each \(2\)-cell \(\sigma\)) is the unique cellular relative \(2\)-cycle representing the image of \(\alpha\) in \(H_{2}\left(S,\operatorname{Im}\gamma;\mathbb{R}\right)\). **Remark 5.3**.: Let \(f:\left(\Sigma,\partial\Sigma\right)\to\left(S,\gamma\right)\) be an admissible surface. Assume that \(\Sigma\) is equipped with a hyperbolic structure with respect to which the map \(f:\Sigma\to S\) is an isometric embedding. Then there is an equality \[\operatorname{area}\left(f_{*}[\Sigma]\right)=\operatorname{area}(\Sigma),\] where \(f_{*}[\Sigma]\) is seen as a class in \(H_{2}\left(S,\gamma;\mathbb{R}\right)\). ### Pleated surfaces In order to obtain good estimates on the Gromov semi-norm for a hyperbolic surface \(S\), it will be helpful to measure it with special admissible surfaces, called pleated surfaces. 
The heuristics behind pleated surfaces is the following: if \(\Sigma\) is an orientable compact connected surface, then its simplicial volume is given by \(\left\|\left[\Sigma\right]\right\|_{1}=-2\chi^{-}(\Sigma)\); however, there is no triangulation of \(\Sigma\) realising this equality. Instead, the simplicial volume is realised by an _ideal triangulation_. The idea is therefore to endow admissible surfaces \(\Sigma\) with ideal triangulations that are compatible with the hyperbolic structure on \(S\). Pleated surfaces, which were introduced by Thurston [28, SS8.8], will achieve this. A _geodesic lamination_\(\Lambda\) in a hyperbolic surface \(\Sigma\) is a closed subset of \(\Sigma\) which decomposes as a disjoint union of complete embedded geodesics. Each such geodesic is called a _leaf_ of \(\Lambda\). **Definition 5.4**.: Let \(M\) be a hyperbolic manifold. A _pleated surface_ in \(M\) is a map \(f:\Sigma\to M\), where \(\Sigma\) is a finite-area hyperbolic surface, such that 1. \(f\) sends each arc in \(\Sigma\) to an arc of the same length in \(M\), 2. There is a geodesic lamination \(\Lambda\subseteq\Sigma\) such that \(f\) sends each leaf of \(\Lambda\) to a geodesic of \(M\), and \(f\) is totally geodesic (i.e. sends every geodesic to a geodesic) on \(\Sigma\smallsetminus\Lambda\), and 3. If \(\Sigma\) is non-compact, then \(f\) sends each small neighbourhood of each cusp of \(\Sigma\) to a small neighbourhood of a cusp of \(M\). The geodesic lamination \(\Lambda\) is called a _pleating locus_ for \(f\). For a more detailed introduction to pleated surfaces in hyperbolic manifolds, we refer the reader to [3, 12, 19]. We now show, following Calegari [8, SS3.1.3], how to obtain pleated admissible surfaces. The fundamental tool for this is Thurston's _spinning construction_: **Lemma 5.5** (Thurston [28, SS8.10]).: _Let \(P\) be a pair of pants (i.e. a compact hyperbolic surface of genus \(0\) with three boundary components) and let \(M\) be a compact hyperbolic surface or a closed hyperbolic manifold. Given a map \(f:P\to M\), either_ 1. _The image of_ \(\pi_{1}P\) _under_ \(f_{*}\) _is a cyclic subgroup of_ \(\pi_{1}M\)_, or_ 2. _The map_ \(f\) _can be homotoped to a pleated surface._ Proof.: Consider a lift \(\tilde{f}:\tilde{P}\to\tilde{M}\) of \(f\) to universal covers. Note that \(\tilde{M}\) is a convex subset of the hyperbolic \(n\)-space \(\mathbb{H}^{n}\), and \(\tilde{P}\) is a convex subset of \(\mathbb{H}^{2}\). Pick a geodesic triangle \(\Delta\) in \(P\) with one vertex on each boundary component. This lifts to a geodesic triangle \(\tilde{\Delta}\) in a fundamental domain of \(\tilde{P}\subseteq\mathbb{H}^{2}\). Now the spinning construction consists in dragging vertices of \(\tilde{\Delta}\) along the lifts of \(\partial P\) to \(\mathbb{H}^{2}\), and moving them to the boundary \(\partial\mathbb{H}^{2}\). See Figure 4. This construction is called _spinning_ because, in \(P\), the triangle \(\Delta\) has been spun around the boundary components of \(P\). In this way, one obtains a geodesic lamination \(\Lambda\) on \(P\) with three leaves, whose complement consists of two open ideal triangles. There are two cases: * If \(f(\Lambda)\) is degenerate (i.e. the images of the three leaves of \(\Lambda\) have the same axis in \(\tilde{M}\)), then \(f_{*}\left(\pi_{1}P\right)\) generates a cyclic subgroup of \(\pi_{1}M\). * Otherwise, construct a map \(f^{\prime}:P\to M\) homotopic to \(f\) as follows. 
For each boundary component \(\partial_{i}\) of \(P\), we define \(f^{\prime}\left(\partial_{i}\right)\) to be the unique closed geodesic in the homotopy class of \(f\left(\partial_{i}\right)\). Each leaf \(\lambda_{i}\) of \(\Lambda\) is mapped under \(f\) to a quasi-geodesic in \(M\), which can be straightened to a geodesic \(\gamma_{i}\). Set \(f^{\prime}\left(\lambda_{i}\right)=\gamma_{i}\). Finally, each component of \(P\smallsetminus\Lambda\) is an open ideal triangle, and since its image is nondegenerate in \(M\), there is a unique totally geodesic extension of \(f^{\prime}\) to this triangle. Figure 4. Thurston's spinning construction. Using Thurston's spinning construction, we can obtain pleated admissible surfaces. This is an adaptation of a lemma of Calegari [8, Lemma 3.7]: **Lemma 5.6** (Pleated admissible surfaces).: _Let \(M\) be a compact hyperbolic surface or a closed hyperbolic manifold. Let \(\gamma:\coprod S^{1}\to M\) be a collection of geodesic loops in \(M\). Then for every rational class \(\alpha\in H_{2}\left(M,\gamma;\mathbb{Q}\right)\) and for every \(\varepsilon>0\), there is a pleated admissible surface \(f:(\Sigma,\partial\Sigma)\to(M,\gamma)\) such that \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\), and_ \[\left\|\alpha\right\|_{1}\leq\frac{-2\chi^{-}(\Sigma)}{n(\Sigma)}\leq\left\|\alpha\right\|_{1}+\varepsilon. \tag{5.1}\] Proof.: By Lemma 2.10, there is a simple, incompressible, admissible surface \(f:(\Sigma,\partial\Sigma)\to(M,\gamma)\) satisfying (5.1), with \(f_{*}\left[\Sigma\right]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\). Now take a pants decomposition \(\left\{P_{i}\right\}_{i}\) of \(\Sigma\), as in Figure 5. The idea is to apply the spinning construction (Lemma 5.5) to each \(P_{i}\). We can perform the construction separately on each connected component of \(\Sigma\); to simplify notations, we therefore assume that \(\Sigma\) is connected. There are three types of components in the pants decomposition: 1. Pairs of pants that are part of a twice-punctured torus (e.g. \(P_{2},\dots,P_{5}\) in Figure 5), 2. Pairs of pants that are glued to themselves to form a once-punctured torus (e.g. \(P_{6}\) in Figure 5), 3. Pairs of pants that are not of type (i) or (ii) (e.g. \(P_{0},P_{1}\) in Figure 5). Fix a pants component \(P_{i}\). We want to apply Lemma 5.5 to the restriction \(f_{|P_{i}}:P_{i}\to M\); we need to ensure that \(f_{*}\left(\pi_{1}P_{i}\right)\) is non-cyclic. We distinguish three cases, based on the type of \(P_{i}\). We first show that, if \(P_{i}\) is of type (iii), then \(f_{*}\left(\pi_{1}P_{i}\right)\) cannot be cyclic. Recall that \(\gamma:\coprod S^{1}\to M\) represents a \(1\)-chain \(c=\sum_{j}n_{j}w_{j}\in C_{1}\left(\pi_{1}M;\mathbb{Z}\right)\) (with \(w_{j}\in\pi_{1}M\) all distinct, no pair of which generates a cyclic subgroup of \(\pi_{1}M\), and \(n_{j}\in\mathbb{Z}\smallsetminus 0\) -- see §2.a). Since \(f\) is an admissible surface, each boundary component of \(\Sigma\) maps to a power of some \(w_{j}\), and simplicity implies that no two boundary components of \(\Sigma\) map to powers of the same \(w_{j}\).
We can assume that the components \(\left\{P_{i}\right\}_{i}\) of type (iii) are ordered as \(\left\{P_{0},\ldots,P_{k}\right\}\), in such a way that \(P_{0}\) has two boundary components on \(\partial\Sigma\), and each \(P_{i}\) is glued to \(P_{i-1}\) along one boundary component and has one boundary component on \(\partial\Sigma\) (this is consistent with the notations of Figure 5, where \(k=1\)). With these notations, we can order the \(w_{j}\)'s in such a way that \(f_{*}\left(\pi_{1}P_{0}\right)=\left\langle w_{0},w_{1}\right\rangle\), and each \(P_{i}\) has one boundary component glued to \(P_{i-1}\) and whose image represents an element of \(\left\langle w_{0},\ldots,w_{i}\right\rangle\), and one boundary component lying on \(\partial\Sigma\), and whose image represents a power of \(w_{i+1}\). In particular, it follows that \(f_{*}\left(\pi_{1}P_{i}\right)\) is not cyclic for any \(P_{i}\) of type (iii). Now assume that \(P_{i}\) is of type (i). Two of the boundary components \(\partial_{+}\) and \(\partial_{-}\) of \(P_{i}\) are meridians in a twice-punctured torus (\(\partial_{\pm}\) are depicted in Figure 5 for \(P_{i}=P_{4}\)). Let \(\alpha\) be the equator of this twice-punctured torus and let \(\delta_{\alpha}:\Sigma\rightarrow\Sigma\) denote the Dehn twist along \(\alpha\). If \(f_{*}\left(\pi_{1}P_{i}\right)=\left\langle f_{*}\left(\partial_{+}\right),f_{*}\left(\partial_{-}\right)\right\rangle\) is cyclic, then replace \(\partial_{\pm}\) by \(\delta_{\alpha}\partial_{\pm}\); this amounts to defining a new pants decomposition of \(\Sigma\). For this pants decomposition, \(\left\langle f_{*}\left(\partial_{+}\right),f_{*}\left(\partial_{-}\right)\right\rangle\) is not cyclic because \(f_{*}\alpha\) and \(f_{*}\partial_{\pm}\) do not commute by incompressibility (otherwise \(\left[\alpha,\partial_{\pm}\right]\) would define a simple closed curve in \(\Sigma\) with nullhomotopic image in \(M\)). It might be that, after this modification, the adjacent pair of pants \(P_{j}\) in the same twice-punctured torus as \(P_{i}\) has cyclic image in \(\pi_{1}M\). In this case, one applies the Dehn twist \(\delta_{\alpha}\) a second time. Assume finally that \(P_{i}\) is of type (ii). Then \(P_{i}\) is glued to itself to form a once-punctured torus. Denote by \(\partial_{1}\) one of the two boundary components of \(P_{i}\) that is glued to form a meridian in the once-punctured torus, and by \(\beta\) the equator (\(\partial_{1}\) and \(\beta\) are depicted in Figure 5 for \(P_{i}=P_{6}\)). Then \(f_{*}\left(\pi_{1}P_{i}\right)=\left\langle f_{*}\left(\partial_{1}\right),f_{*}\left(\partial_{1}\right)^{f_{*}\left(\beta\right)}\right\rangle\), where the exponent denotes conjugation. If \(f_{*}\left(\pi_{1}P_{i}\right)\) is cyclic, then there are \(w\in\pi_{1}M\) and \(k,\ell\in\mathbb{Z}\) such that \[f_{*}\left(\partial_{1}\right)=w^{k}=f_{*}(\beta)w^{\ell}f_{*}(\beta)^{-1}. \tag{5.2}\] But \(\pi_{1}M\) is Gromov-hyperbolic, and therefore it is known to be a _CSA group_, in the sense that all its maximal abelian subgroups are malnormal -- see [16, Example 10]. Hence \(\left\langle w\right\rangle\) is malnormal (after possibly replacing \(w\) by a generator of the maximal abelian subgroup containing it), and \(f_{*}\left(\partial_{1}\right)\in\left\langle w\right\rangle\cap\left\langle w\right\rangle^{f_{*}\left(\beta\right)}\smallsetminus\left\{1\right\}\), so \(f_{*}(\beta)\in\left\langle w\right\rangle\). Figure 5. Pants decomposition of \(\Sigma\).
In particular, \(f_{*}\left[\partial_{1},\beta\right]=1\), which contradicts incompressibility. This proves that \(f_{*}\left(\pi_{1}P_{i}\right)\) cannot be cyclic. Therefore, after performing the above modifications, we have a pants decomposition of \(\Sigma\) for which \(f_{*}\left(\pi_{1}P_{i}\right)\) is never a cyclic subgroup of \(\pi_{1}M\). By Lemma 5.5, the restriction of \(f\) to each \(P_{i}\) can be homotoped to a pleated map. Moreover, these homotopies can be performed simultaneously as the image of each boundary component of a pair of pants is homotoped to the unique geodesic in its homotopy class. Hence, we obtain a pleated map homotopic to \(f\), which is still an admissible surface and satisfies (5.1). **Remark 5.7**.: In fact, we will not need the estimate (5.1) for the Gromov semi-norm in Lemma 5.6: it will be enough for us to know that every rational class is represented by a pleated admissible surface. ### Bounded Euler class and area A hyperbolic structure on a surface \(S\) induces an action of \(\pi_{1}S\) on the boundary of the hyperbolic plane, which is a circle. Hence, we get a circle action \(\rho:\pi_{1}S\to\mathrm{Homeo}^{+}\left(S^{1}\right)\), defining a bounded Euler class \(\mathrm{eu}_{b}^{\mathbb{R}}(\rho)\in H_{b}^{2}\left(\pi_{1}S;\mathbb{R}\right)\) as explained in SS5.a. We will call it the _bounded Euler class of \(S\)_ and denote it by \(\mathrm{eu}_{b}^{\mathbb{R}}(S)\). It can also be seen as an element of \(H_{b}^{2}(S;\mathbb{R})\) under the isometric isomorphism \(H_{b}^{2}\left(\pi_{1}S\right)\cong H_{b}^{2}(S)\) given by Gromov's Theorem [18, Theorem 5.9]. The following is implicit in Calegari's book [8, Lemma 4.68]: **Lemma 5.8** (Bounded Euler class and area).: _Let \(\gamma:\coprod S^{1}\to S\) be a collection of geodesic loops in a compact hyperbolic surface \(S\). Let \(\alpha\in H_{2}\left(S,\gamma;\mathbb{Q}\right)\) be a rational class. Then_ \[\mathrm{area}(\alpha)=-2\pi\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S), \alpha\right\rangle.\] Proof.: Lemma 5.6 yields a pleated admissible surface \(f:(\Sigma,\partial\Sigma)\to(S,\gamma)\) with \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\). Hence, \[\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle=\frac{1}{n( \Sigma)}\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S),f_{*}[\Sigma]\right\rangle.\] Recall from SS5.a that \(\mathrm{eu}_{b}^{\mathbb{R}}\) is defined as the image of \(-\frac{1}{2}\left[\mathrm{Or}\right]\) in \(H_{b}^{2}\left(\mathrm{Homeo}^{+}\left(S^{1}\right)\right)\). The pleated structure on \(\Sigma\) defines an ideal triangulation, and the Kronecker product \(\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S),f_{*}[\Sigma]\right\rangle\) is therefore given by \[\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S),f_{*}[\Sigma]\right\rangle=- \frac{1}{2}\sum_{\sigma}\mathrm{Or}\left(f(\sigma)\right),\] where the sum is over all triangles \(\sigma\) in this ideal triangulation, and \(\mathrm{Or}\left(f(\sigma)\right)\) is \(+1\) if \(f(\sigma)\) is positively oriented, \(-1\) if \(f(\sigma)\) is negatively oriented, and \(0\) if \(f(\sigma)\) is degenerate. But each \(f(\sigma)\) is an ideal triangle in \(S\), and contributes \(\pi\,\mathrm{Or}\left(f(\sigma)\right)\) to \(\mathrm{area}\left(\sum_{\sigma}f(\sigma)\right)\) by the Gauss-Bonnet Theorem. 
Therefore, \[\mathrm{area}(\alpha) =\frac{1}{n(\Sigma)}\,\mathrm{area}\left(f_{*}[\Sigma]\right)= \frac{1}{n(\Sigma)}\,\mathrm{area}\left(\sum_{\sigma}f(\sigma)\right)\] \[=\frac{\pi}{n(\Sigma)}\sum_{\sigma}\mathrm{Or}\left(f(\sigma) \right)=-\frac{2\pi}{n(\Sigma)}\left\langle\mathrm{eu}_{\mathbb{R}}^{b}(S),f_ {*}[\Sigma]\right\rangle\] \[=-2\pi\left\langle\mathrm{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle.\qed\] A class \(\alpha\in H_{2}\left(S,\gamma;\mathbb{R}\right)\) is said to be _projectively represented by a positive immersion_ if there is an admissible surface \(f:(\Sigma,\partial\Sigma)\to(S,\gamma)\) with \(f_{*}[\Sigma]=n(\Sigma)\alpha\) for some \(n(\Sigma)\in\mathbb{N}_{\geq 1}\), and such that \(f\) is an orientation-preserving immersion. The following is now a straightforward generalisation of a result of Calegari [8, Lemma 4.62]: **Theorem C** (Extremality of the bounded Euler class).: _Let \(\gamma:\coprod S^{1}\to S\) be a collection of geodesic loops in a compact hyperbolic surface \(S\). Let \(\alpha\in H_{2}(S,\gamma;\mathbb{Q})\) be projectively represented by a positive immersion \(f:(\Sigma,\partial\Sigma)\looparrowright(S,\gamma)\). Then_ \[\left\|\alpha\right\|_{1}=\frac{-2\chi^{-}(\Sigma)}{n(\Sigma)}=-2\left\langle \operatorname{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle.\] _In other words, \(f\) is an extremal surface and \(-\operatorname{eu}_{b}^{\mathbb{R}}(S)\) is an extremal class for \(\alpha\). In particular, \(\left\|\alpha\right\|_{1}\in\mathbb{Q}\)._ Proof.: Note that \(\Sigma\) inherits a hyperbolic structure from \(S\) for which \(\operatorname{area}(\Sigma)=n(\Sigma)\operatorname{area}(\alpha)\) (see Remark 5.3). By the Gauss-Bonnet Theorem, \[-2\pi\chi^{-}(\Sigma)=-2\pi\chi(\Sigma)=\operatorname{area}(\Sigma)=n(\Sigma) \operatorname{area}(\alpha).\] Therefore (using the topological interpretation of \(\left\|\cdot\right\|_{1}\) -- see Proposition 2.7), \[\left\|\alpha\right\|_{1}\leq\frac{-2\chi^{-}(\Sigma)}{n(\Sigma)}=\frac{1}{ \pi}\operatorname{area}(\alpha)=-2\left\langle\operatorname{eu}_{b}^{ \mathbb{R}}(S),\alpha\right\rangle,\] where the last equality follows from Lemma 5.8. But the Milnor-Wood inequality [18, Theorem 12.15] implies that \(\left\|\operatorname{eu}_{b}^{\mathbb{R}}(S)\right\|_{\infty}\leq\left\| \operatorname{eu}_{b}^{\mathbb{R}}\right\|_{\infty}=\frac{1}{2}\), so that Bavard duality for \(\left\|\cdot\right\|_{1}\) (Theorem A) gives \[-2\left\langle\operatorname{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle\leq \frac{\left\langle-\operatorname{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle }{\left\|\operatorname{eu}_{b}^{\mathbb{R}}(S)\right\|_{\infty}}\leq\left\| \alpha\right\|_{1}.\qed\] **Remark 5.9**.: In the case where \(S\) has non-empty boundary, the converse of Theorem C holds: if \(\left\|\alpha\right\|_{1}=-2\left\langle\operatorname{eu}_{b}^{\mathbb{R}}(S),\alpha\right\rangle\), then \(\alpha\) is projectively represented by a positive immersion [8, Lemma 4.62]. However, this uses the existence of extremal surfaces for \(\left\|\alpha\right\|_{1}\) (see [8, Remark 4.65]), which is not known if \(S\) is closed.
2310.06199
Efficacy of reduced order source terms for a coupled wave-circulation model in the Gulf of Mexico
During hurricanes, coupled wave-circulation models are critical tools for public safety. The standard approach is to use a high fidelity circulation model coupled with a wave model which uses the most advanced source terms. As a result, the models can be highly computationally expensive, and so this study investigates the potential consequences of using highly simplified (reduced order) source terms within the wave model component of the coupled wave-circulation model. The trade-off between run time and accuracy with respect to observations is quantified for two storms that impacted the Gulf of Mexico, Hurricane Ike and Hurricane Ida. Water surface elevations as well as wave statistics (significant wave height, peak period, and mean wave direction) are compared to observations. Using the reduced order source terms yielded significant savings in computational cost, and the simulations with reduced order source terms introduced relatively little additional error with respect to observations. However, large changes in global model outputs of the wave statistics were observed depending on the choice of source terms, particularly near the track of each hurricane.
Mark Loveland, Jessica Meixner, Eirik Valseth, Clint Dawson
2023-10-09T23:01:46Z
http://arxiv.org/abs/2310.06199v3
# Efficacy of reduced order source terms for a coupled wave-circulation model in the Gulf of Mexico Mark Loveland\({}^{a}\) (corresponding author), Jessica Meixner\({}^{b}\), Eirik Valseth\({}^{a,c,d}\), Clint Dawson\({}^{a}\) ###### Abstract A study is conducted that focuses on the trade-off between run time and accuracy of using reduced order source terms in a coupled wave-circulation model. In the study, the coupled model ADCIRC+SWAN is used, in which an ocean circulation model and a spectral wind wave model are coupled in order to improve model accuracy. ## 1 Introduction In this study we will focus on coupled models which use a spectral wave model governed by the Wave Action Balance Equation (WAE), and a circulation model governed by the Shallow Water Equations. There are many operational-scale models of this kind which include coupling between long and short waves in order to predict storm surge and wave statistics, including ADCIRC+SWAN [10], ADCIRC+STWAVE [8, 11], AdH+STWAVE [41], ADCIRC+WAVEWATCHIII [44], and SCHISM-WWMIII [52]. Spectral wave models are expensive, and in particular the computation of the source terms within the models can take a large portion of the computing load [50, 42, 34, 1, 45, 30, 32, 52]. The source terms within the context of the WAE are both complex and highly empirical due to the complex nature of the physics they approximate, and they remain an active area of research [34, 1]. Since the latter half of the 20th century, the general format of the source terms for the WAE has remained relatively unchanged, and is often written as a sum of wind input (\(S_{in}\)), nonlinear interaction (\(S_{nl}\)), and dissipation (\(S_{diss}\)) [32], though the specific parameterizations of these are always evolving. Operational wind wave models date back to the 1960s and 1970s and had success in many cases [30]. However, there were many issues with the evolution of the spectral shape and there were large discrepancies from model to model due to differences in source term configurations [53]. In the 1970s the Sea Wave Modeling Project (SWAMP) was conducted to review, analyze, and compare problems among operational wind wave models. In the review, spectral wind wave models were categorized into three generations based on how the models treat source terms [53]. The categorization is still commonly used in the wave modeling community and can be summarized in the following way. First generation wave models are those that either have no source term explicitly accounting for nonlinear wave-wave interactions or the source term implemented for nonlinear wave-wave interactions is insignificant in magnitude compared to wind input. The first generation models had success in modeling certain scenarios but in general were found to underestimate observed wave growth and required tuning of the source term for wind input, \(S_{in}\), to an extent that was shown to be non-physical [32]. Furthermore, the first generation models were unable to explain what is known as "the overshoot phenomenon" of a growing wind sea.
With respect to the wind wave models, the overshoot effect results in the rapid growth of wave energy on the forward (low frequency) face of the spectrum (see **Figure 1**) [32]. The work by Hasselmann _et al._ both with theory and the important JONSWAP experiment led to consensus that this overshoot effect is a real phenomenon that is driven primarily by nonlinear wave-wave interaction [20, 21]. In order to address the nonlinear interactions as illustrated above, second and third generation wind wave models were developed. Second generation models are those with nonlinear source terms \(S_{nl}\) that are defined in terms of a small number of parameters. Within the second generation model category there are two different kinds of models, referred to as 'coupled hybrid' and 'coupled discrete'. The coupled hybrid models are those which separate the treatment of the wind sea and the swell (see **Figure 2**). For the wind sea, the dynamics are treated parametrically and assume a certain spectrum shape a-priori in order to account for nonlinear interaction. The swell portion of the spectrum is treated by using a similar approach to the first generation models but neglecting nonlinear effects. According to the SWAMP study, the hybrid models have difficulties in the transition zone of the spectrum between wind sea and swell where the nonlinear effects still play a role. The coupled discrete models attempt to address this issue in the transition zone by representing both wind sea and swell components of the spectrum without parameterizing the wind sea component. However, the nonlinear interactions are still highly simplified into a parametric form. The SWAMP study argued that simplifying the nonlinear interactions restricts the spectral shapes the coupled discrete model can take on. Furthermore, coupled discrete models had documented problems with stability and generating unrealistic spectrum shapes [53]. As a result of the inconsistencies, the SWAMP study called for a new class of wind wave models deemed third generation models. Beginning in the 1980s and continuing through the 1990s, many operational models based on this third generation paradigm were developed. Some popular examples of third generation wind wave models that are still widely used operationally are SWAN, WAVEWATCH III, WWM III and WAM [6, 55, 51, 34]. Third generation models are different from first and second generation models in that they explicitly approximate the full nonlinear interaction in \(S_{nl}\). Third generation models for the most part are still the common form of wind wave model. There are many specific forms of the source terms that third generation models have employed. For this study, the third generation model in use will be SWAN and the latest available package will be used which is considered the state of the art and often referred to as the ST6 source terms [49]. It is also worth noting that ST4 source terms are still often used and have shown promising results [59]. Figure 1: Data collected from JONSWAP experiment [20]. Shows overshoot effect from nonlinear interactions causing spectrum to shift from high frequencies (5) to low frequencies (11) as sea state develops. Figure from [https://wikwaves.org](https://wikwaves.org). It is also important to note that within the third generation models, the reliance on a significant approximation to the nonlinear interactions, \(S_{nl}\), is necessary in order for the model to be scalable. 
In this study the Discrete Interaction Approximation (DIA) will be used which is the most commonly used in operational models [25]. There has been recent work on improving this approximation to nonlinear wave-wave interactions with other methods like Gauss quadrature method (GQM), the two scale approximation (TSA) [48], Generalized Multiple DIA [56], and the RIAM method [33]. 3rd generation state of the art source terms, are expensive computationally and there is still an ongoing debate on their physical accuracy namely of the source term packages relying on the DIA [2]. As a result there is work on finding parametric relations for waves generated by hurricanes in order to reduce model cost while still incorporating effects from waves [64]. Furthermore, there has been investigation into using reduced order or parametric models for the spectral waves to force circulation models [7]. In this study, ADCIRC+SWAN, will be used in order to investigate the trade off between computational performance and accuracy of 1st, 2nd, and 3rd generation source term parametrizations within SWAN. In particular, this work will focus on the impacts of source term choices within the context of a coupled spectral wave and shallow water model setup. Following this introductory section, in Section 2, ADCIRC+SWAN will briefly be described along with the source term packages used within SWAN. Next, in Section 3 the specific storms used in this study, Hurricanes Ike and Ida, will be discussed and presented in detail. Then in Section 4, the specific configuration for each ADCIRC+SWAN will be outlined in detail. The results of all of the simulations are shown and analyzed against field data in Section 5. Finally, conclusions and comments on potential future directions of work are drawn in Section 6. Figure 2: A typical wave spectrum which shows the wind sea and the swell. Figure from noaa.gov. ## 2 Model Details ### The ADCIRC+SWAN Model The shallow water model which is coupled to SWAN is called the Advanced Circulation Model (ADCIRC) [40]. ADCIRC is a finite element model which solves the shallow water equations (SWE) on unstructured triangular meshes and is used often to predict tides and storm surges [61]. Both standalone ADCIRC and the coupled model ADCIRC+SWAN have been validated in many realistic scenarios such as Hurricane Katrina, Hurricane Rita, Hurricane Ike, and is currently used operationally for forecasting and hindcasting [10, 12, 17, 29]. The coupled model (ADCIRC+SWAN) works through a tight, sequential coupling set up where ADCIRC is run for some time and the resulting water depths and mean water velocities are passed as inputs to SWAN. SWAN then runs given this information and then outputs wave radiation stress which is added as forcing into the momentum equations of ADCIRC [10]. Both ADCIRC and SWAN are run on the same unstructured mesh which eliminates the need for interpolation of water elevations, water velocities, wave radiation stresses, bathymetry, or wind inputs. ADCIRC obtains the water elevations by solving the Generalized Wave Continuity Equation (GWCE) version of the SWE, which is obtained by differentiation of the continuity equation of the SWE in time and addition of the momentum equations of the SWE [40]. ADCIRC subsequently solves the weak form of the GWCE using the standard Bubnov-Galerkin method using linear polynomial basis functions which is reproduced below. 
Find \(\zeta_{i}\in U_{h}(\Omega)\): \[(\frac{\partial^{2}\zeta_{i}}{\partial t^{2}},\phi_{j})_{\Omega}+(\tau_{o}\frac{\partial\zeta_{i}}{\partial t},\phi_{j})_{\Omega}+(gh_{i}\nabla\zeta_{i},\nabla\phi_{j})_{\Omega}= \tag{1}\] \[(\mathbf{J}_{i},\nabla\phi_{j})_{\Omega}+(Q_{x}\frac{\partial\tau_{o}}{\partial x},\phi_{j})_{\Omega}+(Q_{y}\frac{\partial\tau_{o}}{\partial y},\phi_{j})_{\Omega}-(\frac{\partial\mathbf{Q}}{\partial t}\cdot\mathbf{n}+\tau_{o}\mathbf{Q}\cdot\mathbf{n},\phi_{j})_{\Gamma}\quad\forall\phi_{j}\in V_{h}(\Omega).\] In this case, \(\zeta_{i}\) is the water elevation relative to a geoid, \(\tau_{o}\) is a tunable function for numerical stability, \(\mathbf{n}\) is the outward pointing normal vector on the domain boundary \(\Gamma\), \(h_{i}\) is the bathymetric depth to the bottom, and \(\mathbf{J}_{i}=(J_{x},J_{y})\) is a vector quantity representing momentum flux whose components are defined as: \[J_{x}=-Q_{x}\frac{\partial U}{\partial x}-Q_{y}\frac{\partial U}{\partial y}+fQ_{y}-\frac{g}{2}\frac{\partial\zeta^{2}}{\partial x}-gd\frac{\partial}{\partial x}\left[\frac{P_{s}}{g\rho}-\alpha\gamma\right]+\frac{\tau_{sx,wind}+\tau_{sx,waves}-\tau_{bx}}{\rho}+M_{x}-D_{x}+U\frac{\partial\zeta}{\partial t}+\tau_{o}Q_{x}-gd\frac{\partial\zeta}{\partial x}, \tag{2}\] \[J_{y}=-Q_{x}\frac{\partial V}{\partial x}-Q_{y}\frac{\partial V}{\partial y}-fQ_{x}-\frac{g}{2}\frac{\partial\zeta^{2}}{\partial y}-gd\frac{\partial}{\partial y}\left[\frac{P_{s}}{g\rho}-\alpha\gamma\right]+\frac{\tau_{sy,wind}+\tau_{sy,waves}-\tau_{by}}{\rho}+M_{y}-D_{y}+V\frac{\partial\zeta}{\partial t}+\tau_{o}Q_{y}-gd\frac{\partial\zeta}{\partial y}.\] In the above equations we have the depth-averaged velocities in the x and y direction as \(U,V\) respectively; the vector \(\mathbf{Q}=(Q_{x},Q_{y})\) represents momentum and is equal to \((Ud,Vd)\); \(d=\zeta+h\) is the total water depth; \(f\) is the Coriolis force parameter; \(g\) is the constant of gravitational acceleration; \(P_{s}\) is the atmospheric pressure at the surface; \(\rho\) is the density of water; \(\alpha\) is the effective earth elasticity factor; \(\gamma\) is the tidal potential factor; \(\tau_{sx,wind},\tau_{sy,wind}\) are the wind stresses in the x and y direction; \(\tau_{sx,waves},\tau_{sy,waves}\) are the stresses in the x and y directions from wind waves, which we discuss later; \(\tau_{bx},\tau_{by}\) are the bottom stresses in both x and y; \(M_{x},M_{y}\) represent the lateral stress gradient; \(D_{x},D_{y}\) represent momentum dispersion. In addition to the GWCE, ADCIRC also solves the weak form of the two momentum conservation equations from the shallow water equations in non-conservative form in order to form a complete system of equations as in [40]. From (1), the ADCIRC model produces an approximate water depth, \(d\), and depth-averaged velocities \(U,V\) which are used as inputs into the SWAN model. Then the SWAN model produces an approximation to the WAE and outputs wave radiation stresses used to compute the aforementioned wave stresses, \(\tau_{sx,waves},\tau_{sy,waves}\).
The wave stresses are estimated as: \[\tau_{sx,waves}=-\frac{\partial S_{xx}}{\partial x}-\frac{\partial S_{xy}}{\partial y}, \tag{3}\] \[\tau_{sy,waves}=-\frac{\partial S_{xy}}{\partial x}-\frac{\partial S_{yy}}{\partial y},\] where \(S_{xx},S_{xy},S_{yy}\) are the wave radiation stresses computed with SWAN output and defined as: \[\begin{split} S_{xx}=\rho g\int_{-\pi}^{\pi}\int_{\sigma_{min}}^{\sigma_{max}}(n\cos^{2}(\theta)+n-\frac{1}{2})\sigma Nd\sigma d\theta,\\ S_{xy}=\rho g\int_{-\pi}^{\pi}\int_{\sigma_{min}}^{\sigma_{max}}n\sin(\theta)\cos(\theta)\sigma Nd\sigma d\theta,\\ S_{yy}=\rho g\int_{-\pi}^{\pi}\int_{\sigma_{min}}^{\sigma_{max}}(n\sin^{2}(\theta)+n-\frac{1}{2})\sigma Nd\sigma d\theta,\end{split} \tag{4}\] in which \(N\) is the wave action density and \(\sigma,\theta\) represent the relative radial frequency and direction of the wave spectrum, respectively. The variable \(n\) is related to the propagation velocity and is defined as: \[n=\frac{1}{2}\left(1+\frac{2kd}{\sinh\left(2kd\right)}\right), \tag{5}\] where \(k\) is the wavenumber magnitude defined through the dispersion relation (8). The SWAN model is defined through the wave action balance equation. The WAE is a linear, scalar-valued, hyperbolic equation in 4 dimensions with a varying (both in 4-D space and time), non-divergence free velocity field, \(\mathbf{c}(x,y,\sigma,\theta)\), which can be determined independently of the unknown \(N(x,y,\sigma,\theta)\). This is different from most conservation laws such as Navier-Stokes and related transport equations, since typically the propagation velocity is a function of the unknown, i.e. \(\mathbf{c}(N)\). The proper boundary and initial conditions are as follows, assuming we have a bounded domain in four-dimensional space \(\Omega\subset\mathbb{R}^{4}\) that is sufficiently regular. Similar to other advection problems, the boundary is split up into 2 segments, inflow and outflow: \[\begin{split}\Gamma_{-}=\{x\in\partial\Omega:\,\mathbf{c}\cdot\mathbf{n}<0\}\quad\text{(inflow)},\\ \Gamma_{+}=\{x\in\partial\Omega:\,\mathbf{c}\cdot\mathbf{n}\geq 0\}\quad\text{(outflow)},\end{split} \tag{6}\] where \(\mathbf{n}\) is the outwards unit normal vector to the boundary. Then it has been shown that the following problem possesses a unique solution [38; 13; 47]: \[\begin{split} N_{t}+\nabla\cdot(\mathbf{c}N)=\frac{S(N,x,y,\sigma,\theta,t)}{\sigma}\quad\text{on}\quad\Omega\times(0,T),\\ N=N_{-}\quad\text{on}\quad\Gamma_{-},\\ N=N_{0}\quad\text{on}\quad\Omega\quad\text{at}\quad t=0,\end{split} \tag{7}\] where \(N_{t}\) denotes the (partial) time derivative of \(N\), \(S\) the source/sink terms, \(\Omega\) the computational domain, and \(N_{-}\) the specified essential boundary condition on the inflow boundary. In the case when \(S\) is either 0 or is independent of \(N\), a fairly straightforward analytic solution to the above problem can be obtained via the method of characteristics [13]. However, in practice \(S\) is quite complex, often nonlinear, and realistic problems become too complex to analyze with pencil and paper, and thus the need for numerical methods to approximately solve (7) arises. The propagation velocities, \(\mathbf{c}\), are all determined using the constitutive relation called the dispersion relation, which is derived under the assumptions of Airy Wave Theory. The dispersion relation relates relative radial frequency, \(\sigma\), to wavenumber magnitude, \(k\): \[\sigma^{2}=gk\tanh(kd).
\tag{8}\] Here \(g\) is the constant of gravitational acceleration and \(d\) is total water depth. The velocity \(\mathbf{c}\) is a non-constant, non-divergence free vector quantity defined as: \[\mathbf{c}=(c_{g}cos\theta+U,c_{g}sin\theta+V,c_{\sigma},c_{\theta}), \tag{9}\] where \(U,V\) are the mean water velocities in \(x\), \(y\) directions respectively (which come from ADCIRC), \(c_{g}\) the relative group velocity (defined as \(\frac{\partial\sigma}{\partial k}\), whereas the absolute group velocity is \(\frac{\partial\omega}{\partial k}=c_{g}+\|(U,V)\|\)), and \(c_{\sigma}\), \(c_{\theta}\) define the advection of the spectra with respect to direction and relative frequency, respectively. The group velocity, \(c_{g}\), can be directly obtained by differentiating the dispersion relation in (8) for relative frequency, \(\sigma\), with respect to wavenumber magnitude \(k\). The explicit expression is: \[c_{g}=\frac{1}{2}\left(1+\frac{2kd}{\sinh 2kd}\right)\sqrt{\frac{g}{k}\tanh (kd)}. \tag{10}\] The propagation velocity \(c_{\sigma}\), represents frequency shifting due to changes in depths and currents. By applying the chain rule, we can find an expression of \(c_{\sigma}\): \[\begin{split} c_{\sigma}=\frac{d\sigma}{dt}=&\frac{ k\sigma}{\sinh(2kd)}\left(\frac{\partial d}{\partial t}+U\frac{\partial d}{ \partial x}+V\frac{\partial d}{\partial y}\right)-\\ & c_{g}k\left(\frac{\partial U}{\partial x}{\rm cos}^{2}(\theta) +\frac{\partial U}{\partial y}{\rm cos}(\theta){\rm sin}(\theta)+\frac{ \partial V}{\partial x}{\rm sin}(\theta){\rm cos}(\theta)+\frac{\partial V}{ \partial y}{\rm cos}^{2}(\theta)\right).\end{split} \tag{11}\] The complete derivation is quite lengthy and we refer interested readers to Appendix D in the text of Holthuijsen [28]. Now the propagation velocity \(c_{\theta}\), with respect to \(\theta\), represents the shift in the spectrum due to refraction and diffraction. Similarly to \(c_{\sigma}\), the derivation is quite lengthy but can be obtained by applying the chain rule: \[\begin{split} c_{\theta}=\frac{d\theta}{dt}=&\frac{ \sigma}{\sinh (2kd)}\left(\frac{\partial d}{\partial x}{\rm sin}(\theta)-\frac{\partial d}{ \partial y}{\rm cos}(\theta)\right)+\\ &\frac{\partial U}{\partial x}{\rm cos}(\theta){\rm sin}(\theta)- \frac{\partial U}{\partial y}{\rm cos}^{2}(\theta)+\frac{\partial V}{\partial x }{\rm sin}^{2}(\theta)-\frac{\partial V}{\partial y}{\rm cos}(\theta){\rm sin}( \theta).\end{split} \tag{12}\] ### SWAN Source Term Packages In the definition of the Wave Action Balance Equation (7), the source term \(S(x,y,\sigma,\theta,t)\) was arbitrary. In practice, the source term \(S\) can take many forms but in general it can be thought of as a sum of three key sources/sinks as noted in the introduction above: \[S=S_{in}+S_{diss}+S_{nl}, \tag{13}\] where \(S_{in}\) represents any contribution to the spectrum due to wind input and typically takes the form as a sum of linear and exponential wave growth: \[S_{in}=\alpha+\beta E(x,y,\sigma,\theta,t), \tag{14}\] where \(\alpha,\beta\) are parameters that will be discussed later. \(S_{diss}\) represents any change in the energy spectrum due to dissipation which can include things like whitecapping and surf zone breaking. For the purposes of this dissertation, only the most commonly used dissipation source terms will be included which are whitecapping, bottom friction, and depth-induced breaking: \[S_{diss}=S_{ucc}+S_{bf}+S_{br}. \tag{15}\] \(S_{nl}\) represents changes in the spectrum due to nonlinear wave interactions. 
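The kinematic quantities defined in (8) and (10) are needed repeatedly in what follows, for instance in the radiation stresses of (4) and in the source terms described next. As a minimal illustration, the following Python sketch solves the dispersion relation (8) for the wavenumber \(k\) by Newton iteration and then evaluates the relative group velocity of (10); the function names, the deep-water initial guess, and the convergence tolerance are our own illustrative choices and are not taken from the SWAN or ADCIRC source code.

```python
import numpy as np

def wavenumber(sigma, d, g=9.81, tol=1e-10, maxit=50):
    """Solve the dispersion relation sigma^2 = g*k*tanh(k*d), eq. (8), for k,
    given the relative radian frequency sigma and total depth d (Newton iteration)."""
    k = sigma**2 / g  # deep-water guess, where tanh(kd) ~ 1
    for _ in range(maxit):
        f = g * k * np.tanh(k * d) - sigma**2
        dfdk = g * np.tanh(k * d) + g * k * d / np.cosh(k * d) ** 2
        k_next = k - f / dfdk
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

def group_velocity(k, d, g=9.81):
    """Relative group velocity c_g of eq. (10)."""
    return 0.5 * (1.0 + 2.0 * k * d / np.sinh(2.0 * k * d)) * np.sqrt(g / k * np.tanh(k * d))

# Example: a 10 s wave (sigma = 2*pi/10 rad/s) in 20 m of water
sigma, d = 2.0 * np.pi / 10.0, 20.0
k = wavenumber(sigma, d)
print(k, 2.0 * np.pi / k, group_velocity(k, d))  # wavenumber, wavelength, c_g
```

The same pair of quantities also underlies the ratio \(n=c_{g}/c\) of (5) and (21), since the celerity is simply \(c=\sigma/k\) in linear theory.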
In the following subsections we summarize the forms of the source term in the 3rd generation ST6 package as well as 1st and 2nd generation source terms that are implemented in SWAN and will be used in this study. A more general discussion of some of the available source terms in SWAN can be found in Appendix A. #### 2.2.1 Full Source Terms (ST6) In the last decade or so beginning with the work of Ardhuin, Babanin and many others [3], the latest third generation source term package has changed to become semi-empirical in an aim to agree better with observed data and specifically to be more robust to situations with extreme weather. This work culminated into new source terms for \(S_{in}\) and \(S_{ucc}\), and is often referred to as ST6 which was first fully defined in a paper by Rogers _et al._ in 2012 [49]. This source term package is available and widely used in both SWAN and WAVEWATCH III and will be used in this study. The wind input term for the ST6 package is defined as follows, \(\alpha=0\) and \(\beta\) from (14) is given by: \[\beta=\gamma\sigma\frac{\rho_{air}}{\rho_{water}}, \tag{16}\] where \(\gamma\) is a parameter that depends on location, direction, and frequency as: \[\gamma(x,y,f,\theta)=G(x,y,f,\theta)\sqrt{B_{n}(x,y,f)}W(x,y,f,\theta). \tag{17}\] In this context \(f=\frac{\sigma}{2\pi}\) and \(G\), \(B_{n}\), and \(W\) are defined as: \[G=2.8-(1+\tanh{(10\sqrt{B_{n}(x,y,f)}W(x,y,f,\theta)-11)}), \tag{18}\] \[B_{n}(x,y,f)=\frac{A(x,y,f)}{2\pi}\,E(x,y,f)k^{3}c_{g}, \tag{19}\] \[W(x,y,f,\theta)=(\max{(0,\frac{U_{10}}{c}cos(\theta-\theta_{viand})-1)})^{2}. \tag{20}\] Here, \(E(x,y,f)\) is the integrated spectrum \(E(x,y,f)=\int_{\theta}E(x,y,f,\theta)d\theta\), where \(E(x,y,f,\theta)=E(x,y,\sigma,\theta)2\pi\). \(U_{1}0\) is the wind speed at 10 m above the surface, \(\theta_{viand}\) is the direction of the wind, and \(c\) is wave celerity which is related by group velocity as defined in (10) through the relation: \[c_{g}=\frac{1}{2}(1+\frac{2kd}{\sinh{(2kd)}})c. \tag{21}\] \(A(x,y,f)\) is the wave steepness defined by: \[\frac{1}{A(x,y,f)}=\int_{\theta}E_{n}(x,y,f,\theta)d\theta, \tag{22}\] where \(E_{n}(x,y,f,\theta)\) is \(\frac{E(x,y,f,\theta)}{E^{\prime}(x,y,f)}\) and \(E^{\prime}(x,y,f)\) is: \[E^{\prime}(x,y,f)=\max_{\theta}{(E(x,y,f,\theta))}. \tag{23}\] For the ST6 source term package, the whitecapping term is defined in the following way: \[S_{ucc}(x,y,f,\theta)=\left\{\begin{array}{ll}0,&\mbox{if }E(x,y,f,\theta)<E_{T}(x,y,f, \theta)\\ T_{1}(x,y,f,\theta)+T_{2}(x,y,f,\theta)&\mbox{if }E(x,y,f,\theta)\geq E_{T}(x,y,f, \theta),\end{array}\right. \tag{24}\] Where the threshold spectral density, \(E_{T}\), is defined as: \[E_{T}(x,y,f,\theta)=\frac{2\pi B_{nt}}{A(x,y,f)c_{g}k^{3}}, \tag{25}\] with \(B_{nt}\) is a constant which by default is 1.225e-3 and \(A(x,y,f)\) as in (22). The two contributions of whitecapping are defined as follows: \[T_{1}(x,y,f,\theta)=a_{1}A(x,y,f)f\,\left[\frac{\Delta(x,y,f)}{E(x,y,f)} \right]^{L}E(x,y,f,\theta), \tag{26}\] Constants \(a_{1},a_{2}\), \(L\), \(M\) are calibrated based on empirical data and the function \(\Delta=E-E_{T}\) is the difference between variance density, \(E\), and threshold variance density, \(E_{T}\). In addition to the whitecapping and wind input prescribed by the ST6 formulation, bottom friction, depth-induced breaking, three-wave interactions and four-wave interactions are also included in the SWAN runs when 3rd generation sources are activated and the specific forms of the terms can be found in Appendix A. 
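Before moving on to the remaining dissipation terms and the reduced order packages, the structure of the ST6 wind input can be made more concrete. The following Python sketch evaluates the exponential growth coefficient \(\beta\) of (16) from a discrete variance density \(E(f,\theta)\) using (17)-(23); it is a simplified illustration only, assuming the wavenumber and group velocity are supplied per frequency (e.g. from the dispersion relation sketch above). The function name, the assumed air and water densities, and the small floors used to avoid division by zero are our own choices, and none of the limiters or numerical safeguards of the actual SWAN implementation are included.

```python
import numpy as np

RHO_AIR, RHO_WATER = 1.225, 1025.0  # assumed densities (kg/m^3)

def wind_input_st6(E, freqs, dirs, k, cg, u10, theta_wind):
    """Sketch of the ST6 exponential wind input S_in = beta*E, eqs (14), (16)-(23).

    E      : variance density E(f, theta), shape (nf, nd)
    freqs  : frequencies f in Hz, shape (nf,)
    dirs   : directions theta in radians, shape (nd,)
    k, cg  : wavenumber and relative group velocity per frequency, shape (nf,)
    u10    : wind speed at 10 m; theta_wind : wind direction in radians
    """
    sigma = 2.0 * np.pi * freqs                            # relative radian frequency
    c = sigma / k                                          # wave celerity, consistent with (21)
    E_f = np.trapz(E, dirs, axis=1)                        # E(f) = int E(f, theta) dtheta
    E_peak = np.maximum(E.max(axis=1), 1e-12)              # E'(f) of eq. (23), floored
    A = 1.0 / np.maximum(np.trapz(E / E_peak[:, None], dirs, axis=1), 1e-12)  # eq. (22)
    B_n = (A / (2.0 * np.pi)) * E_f * k**3 * cg            # spectral saturation, eq. (19)

    # directional wind forcing W(f, theta), eq. (20)
    W = np.maximum(0.0, (u10 / c)[:, None] * np.cos(dirs[None, :] - theta_wind) - 1.0) ** 2
    sqrtBn_W = np.sqrt(B_n)[:, None] * W                   # the product sqrt(B_n) * W
    G = 2.8 - (1.0 + np.tanh(10.0 * sqrtBn_W - 11.0))      # eq. (18)
    gamma = G * sqrtBn_W                                   # eq. (17)
    beta = gamma * sigma[:, None] * RHO_AIR / RHO_WATER    # eq. (16)
    return beta * E                                        # S_in(f, theta), with alpha = 0
```

Even in this stripped-down form, the term requires a directional integral, a directional maximum, and several transcendental evaluations per spectral bin, which hints at why the full 3rd generation packages carry a noticeable computational cost.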
The default options in SWAN for bottom friction and depth-induced breaking are used [54]. For nonlinear source terms, the Lumped Triad Approximation (LTA) is used for the three-wave interactions while the DIA is used for the four-wave interactions. #### 2.2.2 Reduced Order Source Terms In addition to the ST6 source term package, which represents a 3rd generation type, SWAN contains a 1st generation and a 2nd generation source package as formulated in a study by Holthuijsen et al. [27]. The 1st and 2nd generation source packages in SWAN, as defined in the technical manual, are reproduced here [54]: \[S=\begin{cases}A+BE&E<E_{lim}&\&\ |\theta-\theta_{wind}|<\frac{\pi}{2}\\ \frac{E_{lim}-E}{\tau}&E>E_{lim}&\&\ |\theta-\theta_{wind}|<\frac{\pi}{2}\\ 0&E>E_{lim}&\&\ |\theta-\theta_{wind}|\geq\frac{\pi}{2}.\end{cases} \tag{27}\] The difference between the source term packages of the 1st and 2nd generation is the definition of the saturated spectrum \(E_{lim}\). In (27), \(\theta_{wind}\) is the direction of incoming winds, and \(\tau\) is a time scale set to: \[\tau=\beta_{4}(\frac{2\pi}{\sigma})^{2}\frac{g}{U_{10}\cos{(\theta-\theta_{wind})}}, \tag{28}\] with \(\beta_{4}=250\). The linear growth term, \(A\), is defined as: \[A=\begin{cases}\frac{\beta_{1}C_{drag}^{2}\rho_{air}^{2}(U_{10}\max{[0,\cos{(\theta-\theta_{wind})}]})^{4}}{2\pi g^{2}\rho_{water}^{2}}&\ \ \sigma\geq 0.7\sigma_{PM,d}\\ 0&\ \ \sigma<0.7\sigma_{PM,d}.\end{cases} \tag{29}\] \(\beta_{1}\) is a proportionality constant that is empirically tuned with default set to 188 and the drag coefficient is set to \(C_{drag}=0.0012\). The variable \(\sigma_{PM,d}\) represents the peak relative radian frequency of a Pierson-Moskowitz spectrum corrected for variable depths: \[\sigma_{PM,d}=\frac{\sigma_{PM}}{\tanh{(0.833\overline{d}^{0.375})}}. \tag{30}\] The variable \(\overline{d}\) is the dimensionless depth set to \(gd/(U_{10})^{2}\) and the Pierson-Moskowitz peak frequency is \(\sigma_{PM}=0.13g2\pi/U_{10}\). The exponential growth term, \(B\), in (27) is defined as: \[B=\max{[0,\beta_{2}\frac{5\rho_{air}}{2\pi\rho_{water}}(\frac{U_{10}k\cos{(\theta-\theta_{wind})}}{\sigma}-\beta_{3})]\sigma}, \tag{31}\] where the empirical constants are by default set to \(\beta_{2}=0.59\), \(\beta_{3}=0.12\). The saturated spectrum, \(E_{lim}\), is a modification of the Pierson-Moskowitz spectrum and is defined as: \[E_{lim}=\begin{cases}\frac{\alpha k^{-3}}{2c_{g}}e^{-\frac{5}{4}(\frac{\sigma}{\sigma_{PM,d}})^{-4}}\frac{2}{\pi}\cos^{2}{(\theta-\theta_{wind})}&\ \ |\theta-\theta_{wind}|<\frac{\pi}{2}\\ 0&\ \ |\theta-\theta_{wind}|\geq\frac{\pi}{2}.\end{cases} \tag{32}\] The parameter \(\alpha\) is what differentiates the 1st generation from the 2nd generation source package. In the case of the 1st generation, \(\alpha=0.0081\), while in the 2nd generation setting it is: \[\alpha=\max{[0.0081+(0.013-0.0081)e^{-\overline{d}},0.0023\tilde{E}_{tot,sea}^{-0.223}]}. \tag{33}\] \(\tilde{E}_{tot,sea}\) is the total dimensionless wind sea wave energy defined as \(\tilde{E}_{tot,sea}=g^{2}E_{tot,sea}/U_{10}^{4}\) with \(E_{tot,sea}\) defined as: \[E_{tot,sea}=\int_{\theta_{wind}-\frac{\pi}{2}}^{\theta_{wind}+\frac{\pi}{2}}\int_{0.7\sigma_{PM,d}}^{\sigma_{max}}Ed\sigma d\theta. \tag{34}\] Now that we have established how exactly ADCIRC and SWAN influence one another as well as the source term configurations that will be used, we will describe the specific test cases used in this study.
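Before doing so, note that the piecewise growth/relaxation structure of (27) is simple enough to spell out directly. The Python sketch below evaluates it on a discrete \((\sigma,\theta)\) grid using the linear and exponential growth terms of (28)-(31); the saturated spectrum \(E_{lim}\) of (32)-(33) is assumed to be supplied by the caller, the frequency cut-off \(0.7\sigma_{PM,d}\) on the linear term is omitted for brevity, and the function name and the treatment of the unlisted case (\(E<E_{lim}\) outside the downwind half-plane, set to zero here) are our own assumptions rather than statements about the SWAN code.

```python
import numpy as np

G_GRAV, RHO_AIR, RHO_WATER = 9.81, 1.225, 1025.0
BETA1, BETA2, BETA3, BETA4 = 188.0, 0.59, 0.12, 250.0  # defaults quoted above
C_DRAG = 0.0012

def source_gen12(E, E_lim, sigma, theta, k, u10, theta_wind):
    """Sketch of the 1st/2nd generation source term of eq. (27).

    E, E_lim : variance density and saturated spectrum on a (sigma, theta) grid
    sigma    : relative radian frequencies, shape (nf,)
    theta    : directions in radians, shape (nd,)
    k        : wavenumber per frequency, shape (nf,)
    """
    dth = np.angle(np.exp(1j * (theta[None, :] - theta_wind)))  # wrap to (-pi, pi]
    cosd = np.cos(dth)
    downwind = np.abs(dth) < np.pi / 2.0

    # linear growth A, eq. (29), without the 0.7*sigma_PM,d frequency cut-off
    A = (BETA1 * C_DRAG**2 * RHO_AIR**2 * (u10 * np.maximum(0.0, cosd)) ** 4
         / (2.0 * np.pi * G_GRAV**2 * RHO_WATER**2))
    A = np.broadcast_to(A, E.shape)

    # exponential growth B, eq. (31)
    B = np.maximum(0.0, BETA2 * 5.0 * RHO_AIR / (2.0 * np.pi * RHO_WATER)
                   * (u10 * k[:, None] * cosd / sigma[:, None] - BETA3)) * sigma[:, None]

    # relaxation time scale tau, eq. (28); only meaningful on the downwind half-plane
    cosd_safe = np.where(cosd > 0.0, cosd, 1.0)
    tau = BETA4 * (2.0 * np.pi / sigma[:, None]) ** 2 * G_GRAV / (u10 * cosd_safe)

    S = np.zeros_like(E)
    grow = (E < E_lim) & downwind
    relax = (E >= E_lim) & downwind
    S[grow] = A[grow] + B[grow] * E[grow]
    S[relax] = (E_lim[relax] - E[relax]) / tau[relax]
    return S  # zero elsewhere, matching the last case of (27)
```

Because each spectral bin only requires a handful of algebraic operations (there are no quadruplet interaction sums as in the DIA), the reduced order packages are much cheaper per SWAN time step, which is consistent with the run times reported in Section 5.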
## 3 Scenario Descriptions ### Hurricane Ike Hurricane Ike is selected as a test case since there is plenty of data available for comparisons and because this specific scenario has already been validated with ADCIRC+SWAN in [29]. Hurricane Ike made landfall on September 13th, 2008 near the City of Galveston, Texas. Hurricane Ike was a category 4 hurricane with maximum sustained winds of 110 mph at the time of landfall in Galveston [9]. Hurricane Ike produced billions of dollars in damages and induced deadly storm surge across the upper Texas and southwest Louisiana coasts which claimed dozens of lives [66]. Hurricane Ike in particular was chosen as a test case for this study since high levels of storm surge were generated and the fact that wave interaction in some areas resulted in up to 50% changes in modeled storm surge [29]. Thus, we know a-priori that the results from the SWAN model will have significant impacts on the accuracy of the resulting ADCIRC water levels and water velocities. ### Hurricane Ida Hurricane Ida was a category 4 storm which made landfall on August 29, 2021 near Port Fourchon, Louisiana [19]. The storm had an intense wind field with sustained winds of over 150 mph causing billions of dollars in damage and several lives to be lost. A main reason that Hurricane Ida was chosen as a second test case is because the storm produced large amounts of surge, recorded as high as 14 feet in some areas [5]. Additionally, the hurricane made landfall in a separate part of the Gulf of Mexico than Hurricane Ike which improves the robustness of our study than of just a single case. Additionally, since this storm is more recent there is ample data available to compare simulation results to. ## 4 Implementation Details ### Grids The computational grid used in the Hurricane Ike scenario is derived from the work of Kennedy et. al. which consists of 6,675,517 elements and 3,352,598 nodes with resolution ranging from 20 km in the deep ocean up to 20 m or less on the Texas coast [31]. For the Hurricane Ida scenario, a different mesh is used with high refinement specifically along the Louisiana coast. The mesh consists of 3,102,441 elements and 1,593,485 nodes with resolution in a similar range to that of the mesh used in Hurricane Ike. In this case, resolution around the Louisiana coast is as fine as 20 m while the remainder of the Gulf Coast has refinement of anywhere from the order of 100 m to 1000 meters and deep ocean is as sparse as 20 km. In both scenarios, the spectral domain for SWAN is 36 cells in direction, \(\theta\), and 40 cells in frequency, \(\sigma\). The directions have uniform spacing of \(10^{\circ}\) and range across the full circle, \(-180^{\circ}\) to \(180^{\circ}\) and the frequency goes from \(0.031384H\,z-2.55H\,z\) with logarithmic spacing such that \(f_{i+1}=\gamma f_{i}\) where \(\gamma=1.1\) is a constant. ### Forcing and Boundary Conditions The input winds and pressure for the Hurrricane Ike scenario are validated meteorology hindcasts from Ocean Weather Inc. (OWI). Since the OWI wind fields are not publicly available, the best track winds from the NHC's HURDAT2 dataset are used to force pressure and winds for the Hurricane Ida scenario [36]. This is also a similar wind format used during forecasting and thus provides an opportunity to assess a wind product used in such applications. For both scenarios, conditions for the open ocean boundary at the \(60^{\circ}\) meridian for ADCIRC are generated from the TPXO global tide model [14]. 
To avoid inducing artificial oscillations into the simulation a 30 day tidal spin up of ADCIRC without winds and hence, no waves, is supplied as initial conditions to the ADCIRC+SWAN model in both scenarios. For Hurricane Ike, total simulation time is 8 days and 18 hours beginning on September 5, 2008 at 12:00 GMT and ending September 14, 2008 at 06:00 GMT. For Hurricane Ida the total run time is 9 days and 6 hours beginning at August 26, 2021 at 12:00 GMT and ending at September 4, 2021 at 18:00 GMT. For the SWAN wave model, the initial conditions are set to 0 and the open boundary conditions for SWAN are 0. ### Other Properties This study is conducted by running simulations of both Hurricane Ike and Hurricane Ida, each with identical forcing inputs while changing only the source term implementations in SWAN. ADCIRC is run without SWAN as a control and then 1st Generation (Gen1), 2nd Generation (Gen2), and the latest ST6 (Gen3) packages, as described in Section 4, are run. The coupling interval between ADCIRC and SWAN happens every 600 seconds for both scenarios. Each simulation is run in parallel with 1064 of the Intel Xeon Platinum 8280 ("Cascade Lake") processors distributed among 19 computational nodes on the Frontera supercomputer from the Texas Advanced Computing Center (TACC). ### Post Processing The ensuing ADCIRC+SWAN water levels are compared to available measurements from NOAA gauges and significant wave height, mean wave period, and mean wave direction are compared to available NOAA buoys. A summary of the gauge locations for Hurricane Ike can be found in **Table 1** and can be seen on the maps in **Figures 3, 4**. The gauge locations for Hurricane Ida can be found in **Table 2** and the locations can be seen on the map in **Figure 5**. A summary of the buoy locations used in both scenarios can be found in **Table 3** and their locations are shown on a map in **Figure 6**. The output from the ADCIRC+SWAN model records every 30 minutes at the locations of the gauges/buoys. Errors for each station are computed relative to the gauge/buoy data. The error of water surface elevation from ADCIRC is measured relative to the relevant gauges and the wave characteristics from SWAN are compared to the relevant buoys. \begin{table} \begin{tabular}{|l|l|l|l|} \hline Gauge no. & Gauge Name & Latitude & Longitude \\ \hline 1 & Bob Hall Pier Corpus Christi & 27.5800 & -97.2167 \\ \hline 2 & Eagle Point & 29.4813 & -94.9172 \\ \hline 3 & Freshwater Canal Locks & 29.5341 & -92.3082 \\ \hline 4 & Galveston Bay Entrance North Jetty & 29.3576 & -94.7260 \\ \hline 5 & Galveston Pier 21 & 29.3100 & -94.7933 \\ \hline 6 & Galveston Pleasure Pier & 29.2849 & -94.7894 \\ \hline 7 & Manchester Houston & 29.7247 & -95.2656 \\ \hline 8 & Morgans Point & 29.6817 & -94.9850 \\ \hline 9 & New Canal Station & 30.0272 & -90.1134 \\ \hline 10 & Port Aransas & 27.8404 & -97.0730 \\ \hline 11 & Port Fourchon & 29.0848 & -90.1985 \\ \hline 12 & Rockport & 28.0217 & -97.0467 \\ \hline 13 & Shell Beach & 29.8681 & -89.6733 \\ \hline 14 & USCG Freeport & 28.9428 & -95.3025 \\ \hline \end{tabular} \end{table} Table 1: Elevation stations used to compare ADCIRC+SWAN results for Hurricane Ike. Figure 3: Surface elevation gauges used near Galveston Bay for the Hurricane Ike test case. 
Two types of error are computed, root mean square error (RMSE) and the percent error of the peak value which are defined as: \[e_{RMSE}=\sqrt{\frac{\sum_{i}(u_{i,sim}-u_{i,meas})^{2}}{N}} \tag{35}\] \[e_{peak}=\frac{|\max{(u_{sim})}-\max{(u_{meas})}|}{\max{(u_{meas}) }}\times 100.\] The wave statistics that are recorded at the buoys are significant wave height, peak period, and mean wave direction. Significant wave height is defined as the highest \(1/3\) of recorded waves at a point, the peak period is the reciprocal of the frequency where the highest action density is recorded, and mean wave direction is as it sounds. In SWAN, the aforementioned statistical values must be estimated. The significant wave height is estimated by first calculating the zeroth moment of the spectrum, \(m_{0}\). In general the \(i^{th}\) moment of the spectrum is defined as: \[m_{i}(x,y)=\int_{-\pi}^{\pi}\int_{\sigma_{\min}}^{\sigma_{\max}}\sigma^{i}E(x, y,\sigma,\theta)d\sigma d\theta, \tag{36}\] \begin{table} \begin{tabular}{|l|l|l|l|} \hline Gauge no. & Gauge Name & Latitude & Longitude \\ \hline 1 & Pillotown, LA & 29.1793 & -89.2588 \\ \hline 2 & Port Fourchon & 29.1142 & -90.1993 \\ \hline 3 & Shell Beach & 30.1267 & -89.2217 \\ \hline 4 & Grand Isle & 29.2633 & -89.9567 \\ \hline 5 & New Canal Station & 30.0272 & -90.1133 \\ \hline 6 & Freshwater Canal Locks & 29.5341 & -92.3082 \\ \hline 7 & LAWMA Amerada Pass & 29.4500 & -91.3383 \\ \hline 8 & Pilots Station East & 28.9316 & -89.4067 \\ \hline 9 & West Bank 1 & 29.7838 & -90.4200 \\ \hline 10 & Eugene Island & 29.3667 & -91.3833 \\ \hline 11 & Bulk Terminal & 30.1900 & -93.300 \\ \hline 12 & Carrollton & 29.9333 & -90.1350 \\ \hline 13 & Calcasieu Pass & 29.7683 & -93.3433 \\ \hline \end{tabular} \end{table} Table 2: Elevation stations used to compare ADCIRC+SWAN results for Hurricane Ida. Figure 4: All surface elevation gauges used during the Hurricane Ike test case. recall \(E=N\sigma\). The significant wave height is estimated as: \[H_{s}(x,y)\approx 4\sqrt{(m_{0})}. \tag{37}\] The reasoning behind this approximation to the highest one third of waves is not trivial but the derivation can be found in several texts, e.g., [28, 37]. The peak period is obtained just by finding the period in the spectrum that has the highest action density, and the mean wave direction in degrees is computed as: \[\theta_{mean}(x,y)=\frac{180}{\pi}\frac{\int_{-\pi}^{\pi}\int_{\sigma_{min}}^{ \sigma_{max}}\cos{(\theta)}E(x,y,\sigma,\theta)d\sigma d\theta}{\int_{-\pi}^{ \pi}\int_{\sigma_{min}}^{\sigma_{max}}sin{(\theta)}E(x,y,\sigma,\theta)d\sigma d \theta}. \tag{38}\] ## 5 Results ### Computational Cost Total wall clock times rounded to the nearest second are tabulated in **Table**4. For the Hurricane Ike scenario ADCIRC without SWAN coupling takes 1 hour and 5 minutes to run while the fastest ADCIRC+SWAN run is Gen2 with over double the run time at 2 hours and 41 minutes. Similarly, for Hurricane Ida the ADCIRC without SWAN Figure 5: Locations of NOAA elevation gauges used to compare for Hurricane Ida. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Buoy no. 
& Buoy Name & Latitude & Longitude \\ \hline 1 & NBDC 42035 & 29.2320 & -94.4130 \\ \hline 2 & NBDC 42019 & 27.9100 & -95.3450 \\ \hline 3 & NBDC 42001 & 25.9190 & -89.6740 \\ \hline 4 & NBDC 42020 & 26.9680 & -96.6930 \\ \hline 5 & NBDC 42002 & 26.0550 & -93.6460 \\ \hline 6 & NBDC 42055 & 22.1240 & -93.9410 \\ \hline 7 & NBDC 42040 & 29.2070 & -88.2370 \\ \hline 8 & NBDC 42007 & 30.0900 & -88.7690 \\ \hline 9 & NBDC 42039 & 28.7870 & -86.0070 \\ \hline 10 & NBDC 42036 & 28.5010 & -84.5080 \\ \hline \end{tabular} \end{table} Table 3: Wave buoy info. took 38 minutes and 48 seconds while the fastest ADCIRC+SWAN run was with Gen2 at 1 hour 49 minutes and 3 seconds. Thus, we observe in both cases SWAN more than doubles the run time. This overhead is unsurprising since the number of parameters SWAN is solving for at each time step is 505 times larger than ADCIRC. This is because while ADCIRC is solving for 3 quantities (surface elevation and two velocity components) at each node on the mesh, SWAN is solving for the Action Balance on the whole spectral mesh (\(41\times 37\) points) at each node in the finite element mesh. However, SWAN can take much longer time steps (\(\sim\)10 minutes) compared to ADCIRC (\(\sim\)1 second). Taking this into consideration, for each SWAN time step of 10 minutes we would expect ADCIRC to have 600 forward solves. Therefore, over the course of a given simulation we would expect SWAN to solve for about \(505/600\approx.85\) times the variables ADCIRC must solve for. In other words, if SWAN were able to solve for each of its unknowns as quick as ADCIRC could, we would expect ADCIRC+SWAN to take 1.85 times the time of ADCIRC by itself. From **Table 4** we can see that for any source term package in SWAN it takes considerably longer than 1.85 times just ADCIRC. Thus we can conclude that either SWAN takes longer to solve per unknown than ADCIRC (which would be unsurprising since SWAN solves implicitly while ADCIRC is explicit), there is other associated overhead with running SWAN, or a combination of both. Another interesting part of the results in **Table 4** is the large difference in run times due to the choice of source term package. It was observed that running ADCIRC+SWAN with Gen3 source terms took around 1.5 times longer than either Gen1 or Gen2. A plausible explanation for the increased run time is the computational complexity of the Gen3 source terms due to the inclusion of the DIA, which is an approximation for the four wave nonlinear interactions, as well as the LTA for three wave interactions. Additionally, it isn't surprising to see that Gen2 and Gen1 didn't have significantly different run times (only a 5% difference) since the only difference between the two is the definition of a single parameter as defined in (33). \begin{table} \begin{tabular}{|p{108.4pt}|p{108.4pt}|p{108.4pt}|} \hline Model Configuration & Hurricane Ike Run Time (hr,min,sec) & Hurricane Ida Run Time (hr,min,sec) \\ \hline No SWAN & 1hr, 5min, 40sec & 0hr, 38min, 48sec \\ \hline Gen1 & 2hr, 50min, 14sec & 1hr, 52min, 52sec \\ \hline Gen2 & 2hr, 41min, 17sec & 1hr, 49min, 03sec \\ \hline Gen3 & 4hr, 5min, 10sec & 3hr, 12min, 20sec \\ \hline \end{tabular} \end{table} Table 4: Run times for ADCIRC+SWAN. Figure 6: Wave buoy locations which are used in both of the scenarios, Hurricanes Ike and Ida. 
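Before examining the station comparisons, it may help to spell out how the error metrics of (35) and the spectral statistics of (36)-(38) used in the following subsections can be evaluated on a discrete spectrum. The Python sketch below is purely illustrative: the helper names and the synthetic single-peaked spectrum are our own, the spectral grid follows the 10 degree directional spacing and logarithmic frequency spacing \(f_{i+1}=1.1f_{i}\) described above, and the mean direction is evaluated with the two-argument arctangent of the sine- and cosine-weighted integrals.

```python
import numpy as np

def rmse_and_peak_error(u_sim, u_meas):
    """Error metrics of eq. (35): RMSE and percent error of the peak value."""
    e_rmse = np.sqrt(np.mean((np.asarray(u_sim) - np.asarray(u_meas)) ** 2))
    e_peak = abs(np.max(u_sim) - np.max(u_meas)) / np.max(u_meas) * 100.0
    return e_rmse, e_peak

def wave_statistics(E, sigma, theta):
    """Significant wave height, peak period and mean direction from a discrete
    variance density E(sigma, theta), following eqs (36)-(38)."""
    E_sigma = np.trapz(E, theta, axis=1)              # integrate out direction
    m0 = np.trapz(E_sigma, sigma)                     # zeroth moment, eq. (36)
    hs = 4.0 * np.sqrt(m0)                            # eq. (37)
    t_peak = 2.0 * np.pi / sigma[np.argmax(E_sigma)]  # reciprocal of the peak frequency
    s = np.trapz(np.trapz(np.sin(theta)[None, :] * E, theta, axis=1), sigma)
    c = np.trapz(np.trapz(np.cos(theta)[None, :] * E, theta, axis=1), sigma)
    theta_mean = np.degrees(np.arctan2(s, c))         # mean direction, cf. eq. (38)
    return hs, t_peak, theta_mean

# Spectral grid as described above: 36 directions at 10 degree spacing and
# 40 frequencies with logarithmic spacing f_{i+1} = 1.1 f_i from 0.031384 Hz
freqs = 0.031384 * 1.1 ** np.arange(40)
sigma = 2.0 * np.pi * freqs
theta = np.radians(np.arange(-180.0, 180.0, 10.0))

# synthetic single-peaked spectrum, purely for illustration
E = (np.exp(-((sigma - 0.7) / 0.2) ** 2)[:, None]
     * np.maximum(np.cos(theta - 0.5), 0.0)[None, :] ** 2)
print(wave_statistics(E, sigma, theta))
```

Comparisons of the kind reported in the tables below then amount to applying these two helpers to each station time series and each buoy location.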
### ADCIRC Output During the Hurricane Ike simulation, the data from the 14 NOAA gauges along the Texas coast shown on the maps of **Figures 3**, 4, and the output surface elevation from ADCIRC+SWAN are plotted for all of the 4 configurations in **Figures 7**- 9. During the Hurricane Ida simulation, the 13 NOAA gauges along the Louisiana coast shown on the map of **Figure 5**, and the output surface elevation from ADCIRC+SWAN are plotted for all of the 4 configurations in **Figures 11**- 14. For each storm, the errors at all gauges are tabulated in **Table 5** for Hurricane Ike and **Table 6** for Hurricane Ida. During Hurricane Ike, the average RMSE across all gauges when excluding waves altogether is 0.210 meters while for Gen1 it is.190 m, Gen2 is.196 m, and Gen3 is.197 m. During Hurricane Ida, the average RMSE across all gauges when excluding waves altogether is 0.285 meters while for Gen1 it is.282 m, Gen2 is.281 m, and Gen3 is.282 m. Including waves resulted in less than a 5 percent reduction in RMSE during Hurricane Ike and less than 1.5 percent during Hurricane Ida at the gauge locations. The choice of source terms really did not impact RMSE with only a maximum difference of.007 m between Gen1 and Gen3 during Hurricane Ike and a maximum difference of.001 m between Gen2 and Gen3. The absolute average relative error percent to the peak value without accounting for waves is about 13 percent while Gen1 is 12 percent, Gen2 is 10 percent, and Gen3 is 11 percent during Hurricane Ike. For Hurricane Ida, the same quantity is 20 percent without waves while Gen1 is 20 percent, Gen2 is 21 percent, and Gen3 is 22 percent. Qualitatively examining the outputs in the graphs of **Figures 7** - 10 and **Figures 11**- 14 does not show any drastic improvements in accuracy between one source term package over the others at the gauge locations and during Hurricane Ida it doesn't appear that including waves reduces the error compared to the NOAA elevation gauge data overall. This is not to say including waves does not improve the overall quality of the simulation, for instance we did not include measuring the total inundated area in this study. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \multicolumn{4}{|c|}{RMSE (m)} & \multicolumn{4}{|c|}{Relative Err. \% of Peak} \\ \hline Gauge no. & No SWAN & Gen1 & Gen2 & Gen3 & No SWAN & Gen1 & Gen2 & Gen3 \\ \hline 1 & 0.177 & 0.144 & 0.133 & 0.145 & -27 & -14 & -10 & -19 \\ \hline 2 & 0.177 & 0.187 & 0.189 & 0.187 & -2 & +3 & +5 & +4 \\ \hline 3 & 0.268 & 0.235 & 0.230 & 0.237 & -10 & -4 & -3 & -4 \\ \hline 4 & 0.151 & 0.136 & 0.129 & 0.133 & -11 & -7 & -4 & -5 \\ \hline 5 & 0.645 & 0.632 & 0.638 & 0.632 & +8 & +14 & +15 & +13 \\ \hline 6 & 0.259 & 0.269 & 0.269 & 0.266 & +27 & +30 & +15 & +29 \\ \hline 7 & 0.243 & 0.212 & 0.205 & 0.207 & +7 & +13 & +15 & +14 \\ \hline 8 & 0.160 & 0.146 & 0.139 & 0.141 & NA & NA & NA & NA \\ \hline 9 & 0.101 & 0.101 & 0.100 & 0.101 & -10 & -10 & -10 & -10 \\ \hline 10 & 0.108 & 0.102 & 0.098 & 0.102 & -17 & -14 & -12 & -14 \\ \hline 11 & 0.150 & 0.110 & 0.108 & 0.112 & -25 & -10 & -8 & -14 \\ \hline 12 & 0.165 & 0.158 & 0.154 & 0.158 & -23 & -20 & -19 & -21 \\ \hline 13 & 0.116 & 0.115 & 0.114 & 0.115 & -13 & -11 & -11 & -12 \\ \hline 14 & 0.218 & 0.237 & 0.242 & 0.220 & +7 & +14 & +17 & +11 \\ \hline \end{tabular} \end{table} Table 5: Error Analysis for Water Surface elevations during Hurricane Ike. 
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline & \multicolumn{4}{|c|}{RMSE (m)} & \multicolumn{4}{|c|}{Relative Err. \% of Peak} \\ \hline Gauge no. & No SWAN & Gen1 & Gen2 & Gen3 & No SWAN & Gen1 & Gen2 & Gen3 \\ \hline 1 & 0.1602 & 0.135 & 0.131 & 0.128 & -29 & -16 & -12 & -4 \\ \hline 2 & 0.344 & 0.374 & 0.379 & 0.402 & NA & NA & NA & NA \\ \hline 3 & 0.2908 & 0.277 & 0.275 & 0.277 & -33 & -31 & -29 & -31 \\ \hline 4 & 0.833 & 0.833 & 0.828 & 0.802 & NA & NA & NA & NA \\ \hline 5 & 0.313 & 0.303 & 0.300 & 0.300 & -36 & -34 & -34 & -34 \\ \hline 6 & 0.176 & 0.169 & 0.164 & 0.165 & -40 & -40 & -40 & -40 \\ \hline 7 & 0.191 & 0.193 & 0.189 & 0.181 & +21 & +23 & +23 & +24 \\ \hline 8 & 0.154 & 0.133 & 0.141 & 0.164 & -2 & +22 & +29 & +45 \\ \hline 9 & 0.227 & 0.240 & 0.243 & 0.253 & +22 & +26 & +27 & +32 \\ \hline 10 & 0.152 & 0.146 & 0.141 & 0.140 & -7 & -7 & -7 & -6 \\ \hline 11 & 0.152 & 0.151 & 0.152 & 0.151 & +3 & +3 & +3 & +3 \\ \hline 12 & 0.563 & 0.565 & 0.562 & 0.556 & +10 & +10 & +12 & +11 \\ \hline 13 & 0.149 & 0.148 & 0.145 & 0.145 & +12 & +12 & +13 & +13 \\ \hline \end{tabular} \end{table} Table 6: Error Analysis for Water Surface elevations during Hurricane Ida. Figure 7: Surface elevations of NOAA gauges 1.4 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ike. Figure 8: Surface elevations of NOAA gauges 5-8 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ike. Figure 9: Surface elevations of NOAA gauges 9-12 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 10: Surface elevations of NOAA gauges 13-14 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 11: Surface elevations of NOAA gauges 1-4 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 12: Surface elevations of NOAA gauges 5-8 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 13: Surface elevations of NOAA gauges 9-12 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 14: Surface elevations of NOAA gauges 13-14 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. ### SWAN Output The results of SWAN are plotted against the 10 available NOAA wave buoys in **Figures 15-23** for Hurricane Ike and **Figures 27-32** for Hurricane Ida. At each buoy location, the RMSE and relative error % of the peak value are computed from the ADCIRC+SWAN output for significant wave height, peak period, and mean wave direction. The error analysis for significant wave height from the data in plots of **Figures 15-17** is displayed in **Table 7** for Hurricane Ike while the data from plots of **Figures 27-28** is displayed in **Table 8** for Hurricane Ida. The average RMSE for significant wave height across all stations for Gen1 is 0.900 m, Gen2 is 1.033 m, and Gen3 is 0.802 meters during Hurricane Ike and.813 m for Gen1, 0.883 for Gen2, and 0.834 for Gen3 during Hurricane Ida. During the Hurricane Ike case we do see a 10 percent improvement in RMSE from Gen1 to Gen3 and a roughly 20 percent improvement in error from Gen2 to Gen3 at the buoy locations while during Hurricane Ida we see that Gen1 has the lowest average RMSE but all source term packages are within 0.07 m in average RMSE. 
The absolute average error percentage relative to the peak value of the buoy was 27 percent for Gen1, 30 percent for Gen2, and Gen3 was 22 percent for Hurricane Ike and for Hurricane Ida was 40 percent for Gen1, 81 for Gen2, and 63 for Gen3. During Hurricane Ike, much of the difference in the errors between the 3 source term packages was due to buoy number 6 (NBDC 42055) where both Gen1 and Gen2 nearly doubled the observed peak significant wave height while Gen3 only underpredicted by 2 percent. During Hurricane Ida, it appears that the significant wave height significantly over-predict the observed data except in case of buoys 7,9, and 10 regardless of source term package. The errors in peak period relative to the buoy data are displayed in **Table 9** for Hurricane Ike and **Table 10** for Hurricane Ida. The average RMSE across all buoys for Gen1 was 2.822 seconds, Gen2 was 2.541 seconds, and Gen3 was 2.413 seconds for Hurricane Ike and 4.865 seconds for Gen1, 5.068 second for Gen2, and 9.960 seconds for Gen3 during Hurricane Ida. During Hurricane Ike we do see improved accuracy relative to these 10 buoys as we upgrade in complexity of source terms from Gen1 to Gen2 to Gen3, with a maximal improvement of about 15 percent in error reduction. The absolute average across all buoys of the relative error percentage of the peak is 10 % for Gen1, 8 % for Gen2, and 14 % for Gen3. For Hurricane Ida we find no such improvement with more complex source terms and in fact Gen3 performs significantly worse when compared to buoys. The mean wave direction errors at each buoy are shown in **Table 11** for Hurricane Ike and **Table 12** for Hurricane Ida. The average RMSE across all buoys is 43.238 degrees for Gen1, 40.392 degrees for Gen2, and 37.076 degrees for Gen3 during Hurricane Ike and 62.380 degrees for Gen1, 60.229 degrees for Gen2, and 67.73 degrees for Gen3 during Hurricane Ida. During Hurricane Ike, the more complex source terms in Gen3 do improve the RMSE over the buoys and in this case by roughly 14 percent relative to Gen1. The absolute average across all buoys of the relative error percentage to the peak buoy value is 23 % for Gen1, 30 % for Gen2, and 17 % for Gen3. However, no such improvements are observed during Hurricane Ida. The larger errors in some of the SWAN output with respect to buoys are most likely related to the inaccuracy of the wind fields, particularly during Hurricane Ida. Since the best track winds were used for Hurricane Ida, this results in a highly simplified wind field that doesn't capture the highly irregular oscillations that are observed by the buoys as shown in **Figures 33- 35**. The average RMSE in wind magnitude across all buoys is roughly 5 m/s during Hurricane Ida while it is only around 2 m/s during Hurricane Ike. The comparison between wind velocity magnitude of the ADCIRC simulation and the buoy data during Hurricane Ike is shown in **Figures 24- 26**. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (m)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. 
& Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 0.840 & 0.889 & 0.705 & +3 & +24 & +11 \\ \hline 2 & 1.088 & 0.968 & 0.924 & -8 & +1 & +19 \\ \hline 3 & 0.799 & 1.352 & 0.976 & +9 & +27 & -10 \\ \hline 4 & 1.202 & 1.316 & 0.987 & -14 & 12 & -29 \\ \hline 5 & 0.968 & 1.305 & 0.792 & +32 & +64 & +24 \\ \hline 6 & 1.049 & 1.305 & 0.490 & +71 & +90 & -2 \\ \hline 7 & 0.904 & 0.652 & 1.113 & -25 & -11 & -33 \\ \hline 8 & 0.823 & 0.653 & 0.731 & -36 & -33 & -38 \\ \hline 9 & 0.807 & 0.572 & 0.915 & -35 & -23 & -35 \\ \hline 10 & 0.519 & 1.316 & 0.389 & -32 & -17 & -15 \\ \hline \end{tabular} \end{table} Table 7: Error Analysis for Significant Wave Height for Hurricane lke. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (m)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. & Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 0.560 & 0.670 & 0.561 & +45 & +133 & +77 \\ \hline 2 & 0.597 & 0.720 & 0.651 & +54 & +138 & +116 \\ \hline 3 & NA & NA & NA & NA & NA & NA \\ \hline 4 & 0.680 & 0.800 & 0.782 & +51 & +126 & +123 \\ \hline 5 & 0.796 & 0.967 & 0.852 & +62 & +121 & +95 \\ \hline 6 & 0.745 & 0.839 & 0.794 & +16 & +51 & +33 \\ \hline 7 & 1.001 & 1.083 & 1.243 & -5 & +10 & +4 \\ \hline 8 & NA & NA & NA & NA & NA & NA \\ \hline 9 & 1.140 & 1.064 & 0.970 & -43 & -32 & -22 \\ \hline 10 & 0.986 & 0.927 & 0.816 & -51 & -41 & -31 \\ \hline \end{tabular} \end{table} Table 8: Error Analysis for Significant Wave Height for Hurricane lda. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (sec.)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. & Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 4.697 & 4.413 & 4.167 & +20 & +21 & -21 \\ \hline 2 & 4.223 & 3.742 & 2.258 & +20 & +21 & +11 \\ \hline 3 & 2.062 & 1.953 & 2.529 & -1 & -3 & -18 \\ \hline 4 & 2.670 & 2.355 & 2.447 & +8 & +2 & -10 \\ \hline 5 & 2.940 & 2.201 & 2.354 & +8 & -2 & -9 \\ \hline 6 & 1.755 & 1.798 & 2.336 & -15 & -10 & -22 \\ \hline 7 & 2.807 & 2.510 & 2.258 & -14 & -4 & -15 \\ \hline 8 & 4.009 & 3.370 & 2.653 & -2 & -4 & -15 \\ \hline 9 & 1.507 & 1.292 & 1.364 & -7 & -9 & -16 \\ \hline 10 & 1.565 & 1.780 & 1.763 & +6 & +3 & -1 \\ \hline \end{tabular} \end{table} Table 9: Error Analysis for Peak Period for Hurricane lke. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (seg.)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. & Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 7.145 & 7.241 & 11.189 & +99 & +99 & +99 \\ \hline 2 & 5.450 & 5.960 & 11.065 & +131 & +131 & +131 \\ \hline 3 & NA & NA & NA & NA & NA & NA \\ \hline 4 & 6.642 & 6.982 & 11.079 & +99 & +99 & +99 \\ \hline 5 & 4.141 & 4.750 & 9.362 & +14 & +74 & +99 \\ \hline 6 & 3.651 & 3.828 & 9.513 & +14 & +22 & +99 \\ \hline 7 & 5.408 & 5.303 & 9.592 & +115 & +114 & +115 \\ \hline 8 & NA & NA & NA & NA & NA & NA \\ \hline 9 & 3.425 & 3.363 & 9.162 & +29 & +79 & +139 \\ \hline 10 & 3.060 & 3.115 & 8.721 & +19 & +10 & +178 \\ \hline \end{tabular} \end{table} Table 10: Error Analysis for Peak Period for Hurricane Ida. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (deg.)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. 
& Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 41.930 & 40.440 & 34.653 & -12 & -12 & -3 \\ \hline 2 & 41.923 & 37.049 & 41.180 & -26 & -41 & +4 \\ \hline 3 & NA & NA & NA & NA & NA & NA \\ \hline 3 & NA & NA & NA & NA & NA & NA \\ \hline 4 & 61.389 & 60.815 & 71.455 & +5 & +5 & -3 \\ \hline 5 & 76.126 & 67.976 & 93.281 & +106 & +109 & +103 \\ \hline 6 & 66.706 & 66.365 & 112.883 & +151 & +151 & +147 \\ \hline 7 & 50.849 & 50.443 & 40.596 & +6 & +6 & +6 \\ \hline 8 & NA & NA & NA & NA & NA & NA \\ \hline 9 & 51.998 & 52.000 & 41.213 & -28 & -28 & -28 \\ \hline 10 & 51.899 & 52.170 & 53.782 & -4 & +27 & -4 \\ \hline \end{tabular} \end{table} Table 11: Error Analysis for Mean Wave Direction for Hurricane Ike. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{|c|}{RMSE (deg.)} & \multicolumn{3}{|c|}{Relative Err. \% of Peak} \\ \hline Buoy no. & Gen1 & Gen2 & Gen3 & Gen1 & Gen2 & Gen3 \\ \hline 1 & 61.184 & 60.309 & 55.761 & -7 & -6 & -7 \\ \hline 2 & 74.884 & 71.757 & 72.863 & +4 & +4 & -13 \\ \hline 3 & NA & NA & NA & NA & NA & NA \\ \hline 4 & 61.389 & 60.815 & 71.455 & +5 & +5 & -3 \\ \hline 5 & 76.126 & 67.976 & 93.281 & +106 & +109 & +103 \\ \hline 6 & 66.706 & 66.365 & 112.883 & +151 & +151 & +147 \\ \hline 7 & 50.849 & 50.443 & 40.596 & +6 & +6 & +6 \\ \hline 8 & NA & NA & NA & NA & NA & NA \\ \hline 9 & 51.998 & 52.000 & 41.213 & -28 & -28 & -28 \\ \hline 10 & 51.899 & 52.170 & 53.782 & -4 & +27 & -4 \\ \hline \end{tabular} \end{table} Table 12: Error Analysis for Mean Wave Direction from Hurricane Ida. Figure 15: Significant wave height of NOAA buoys 1-4 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 16: Significant wave height of NOAA buoys 5-8 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 17: Significant wave height of NOAA buoys 9-10 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 18: Peak period of NOAA buoys 1-4 along with ADGIRC+SWAN outputs for all configurations during Hurricane like. Figure 19: Peak period of NOAA buoys 5-8 along with ADGIRC+SWAN outputs for all configurations during Hurricane like. Figure 20: Peak period of NOAA buoys 9-10 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 21: Mean wave direction of NOAA buoys 1-4 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 22: Mean wave direction of NOAA buoys 5-8 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 23: Mean wave direction of NOAA buoys 9-10 along with ADCIRC+SWAN outputs for all configurations during Hurricane like. Figure 24: Wind velocity magnitude of NOAA buoys 1-4 along with ADCIRC+SWAN wind forcing during Hurricane like. Figure 25: Wind velocity magnitude of NOAA buoys 5-8 along with ADCIRC+SWAN wind forcing during Hurricane like. Figure 26: Wind velocity magnitude of NOAA buoys 9-10 along with ADCIRC+SWAN wind forcing during Hurricane Ike. Figure 27: Significant wave height of NOAA buoys 1.2,4.5 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 28: Significant wave height of NOAA buoys 6,7,9,10 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 29: Peak period of NOAA buoys 1,2,4,5 along with ADICRC+SWAN outputs for all configurations during Hurricane Ida. Figure 30: Peak period of NOAA buoys 6.7,9,10 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. 
Figure 31: Mean wave direction of NOAA buoys 1,2,4,5 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 32: Mean wave direction of NOAA buoys 6,7,9,10 along with ADCIRC+SWAN outputs for all configurations during Hurricane Ida. Figure 33: Wind velocity magnitude of NOAA buoys 1-4 along with ADCIRC+SWAN wind forcing during Hurricane Ida. Figure 34: Wind velocity magnitude of NOAA buoys 5-8 along with ADCIRC+SWAN wind forcing during Hurricane Ida. Figure 35: Wind velocity magnitude of NOAA buoys 9-10 along with ADCIRC+SWAN wind forcing during Hurricane Ida.

## 6 Conclusions

From this study, it was observed that varying the source term packages in SWAN within a validated ADCIRC+SWAN model significantly impacted both run times and the resulting wave outputs. Upgrading from Gen1 or Gen2 source terms to the ST6 Gen3 source terms resulted in about a 40 percent increase in run time. The choice of source terms (Gen1, Gen2, or Gen3) changed the RMSE of average water surface elevations relative to the NOAA gauges by only about 0.007 m. However, more significant differences in wave statistics were observed at the NOAA buoy locations based on source term choice. These differences depended greatly on the scenario: during Hurricane Ike, Gen3 source terms showed roughly a 20 percent improvement in the accuracy of significant wave height, a 15 percent improvement in the accuracy of peak period, and a 14 percent improvement in the accuracy of mean wave direction relative to Gen1 or Gen2 source terms, while during Hurricane Ida no such improvements in accuracy were observed. This study shows the possible trade-off between accuracy and run time in the choice of source term complexity in SWAN in the coupled ADCIRC+SWAN setting. The small differences between the source term configurations in accuracy with respect to observed water levels may warrant more investigation into the use of reduced-order source term packages such as Gen1 or Gen2. For instance, the savings in computation may be worthwhile if only water surface elevations, as opposed to wave statistics, are of primary interest. Furthermore, this study also showed that the highly detailed Gen3 source terms may not improve the accuracy of the resulting wave statistics, especially when only parametric winds are used as opposed to higher-fidelity hindcasted wind fields. Some limitations of this study are that it covered a single geographic region, only two storms were run, and the source terms were not specifically tuned for this region; the results may therefore vary significantly for a different domain, and the accuracy of the wave model could be improved if the source terms were tuned specifically for this region.

## Acknowledgements

This work has been supported by the United States Department of Homeland Security Coastal Resilience Center research project "Accurate and Fast Wave Modeling and Coupling with ADCIRC". The authors also would like to gratefully acknowledge the use of the "DMS23001" and "DMS21031" allocations on the Frontera supercomputer at the Texas Advanced Computing Center at the University of Texas at Austin.

## Appendix A Source Terms Overview

### A.1 Wind Input \(S_{in}\)

In the 1950s, two popular theories on wave generation by wind emerged, one by Phillips and one by Miles [46, 43]. These general theories were adapted and put into a form that is compatible with the Wave Action Balance Equations.
The theory of Phillips describes monochromatic waves generated from resonance caused by fluctuations in pressure over the ocean surface due to wind, which results in a linear growth term [45]. The theory of Miles considers the change in air flow due to the presence of existing waves, resulting in a feedback mechanism and an exponential growth term [45]. Most operational models to this day use a sum of both the linear growth from the theory of Phillips and the exponential growth from the theory of Miles in order to define the source term \(S_{in}\), which is of the abstract form: \[S_{in}=\alpha+\beta E(x,y,\sigma,\theta,t). \tag{39}\] Here \(\alpha\) is a coefficient representing the linear wave growth due to the theory of Phillips, and \(\beta E(x,y,\sigma,\theta,t)\) represents the exponential wave growth due to the theory of Miles. A full derivation of \(S_{in}\) in the above form can be found in many textbooks [28, 34, 63]. These two theories can be thought of as constitutive relations that allow the complex phenomenon of turbulent air flow at the ocean surface generating waves to be written in closed form. There has been documented disagreement between the theories of Phillips and Miles and laboratory and field experiments, in addition to doubts on the validity of drag coefficients during extreme weather events [1]. However, these theories can produce reasonable results in operational wind wave models, and variations of them are still used in popular wind wave models such as SWAN [6], WAVEWATCH III [57], MIKE21 [60], and WAM [34]. The model in this study uses the default 3rd generation source term package of SWAN cycle III Version 41.41 [54]. The default wind input term is that from WAM Cycle III, first outlined in [35]. The input term is defined as in (39) with \(\alpha=0\) and \(\beta\) defined as: \[\beta=\max\left(0,0.25\,\frac{\rho_{air}}{\rho_{water}}\left(28\frac{u_{s}}{c}\cos(\theta-\theta_{wind})-1\right)\right)\sigma, \tag{40}\] \(\rho_{air}\) is the density of the air, which is by default set to \(1.225\frac{kg}{m^{3}}\). \(\rho_{water}\) is the density of water, which by default is set to \(997\frac{kg}{m^{3}}\). \(c\) is the phase velocity, which is a consequence of the dispersion relation from (8). \(\theta\) is the direction, which is an independent variable. \(\theta_{wind}\) is the mean wind direction. \(\sigma\) is the relative radian frequency, which is another one of the independent variables. \(u_{s}\) is known as the friction velocity and is defined as follows: \[u_{s}^{2}=U_{10}^{2}C_{D}. \tag{41}\] Here \(U_{10}\) is the relative wind speed at 10 meters above the surface of the ocean, which is commonly used in operational ocean models. It is important to note that in the presence of ambient currents, \(U_{10}\) is defined as the wind speed minus the current vector. \(C_{D}\) is the drag coefficient and is taken from a study by Wu (1982) [62]: \[C_{D}=\left\{\begin{array}{ll}1.2875\times 10^{-3}&U_{10}<7.5m/s\\ (0.8+0.065U_{10})\times 10^{-3}&U_{10}\geq 7.5m/s.\end{array}\right. \tag{42}\] The default SWAN configuration actually uses a second-order polynomial fit to better account for the empirically observed drop-off of the drag coefficient at higher wind speeds, which is not accounted for in the model from Wu [54]. In this case \(C_{D}\) is set as: \[C_{D}=(0.55+2.97\frac{U_{10}}{U_{ref}}-1.49(\frac{U_{10}}{U_{ref}})^{2})\times 10^{-3}. \tag{43}\] \(U_{ref}\) is the reference wind speed, which is by default set to \(31.5\frac{m}{s}\).
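As a concrete illustration of the drag-law and wind-growth expressions (40)-(43), a minimal Python sketch is given below, assuming the default constants quoted above. This is only an illustrative transcription, not SWAN source code; the function names and the example call are assumptions made here.

```python
import numpy as np

RHO_AIR, RHO_WATER = 1.225, 997.0   # kg/m^3, default densities quoted in the text
U_REF = 31.5                        # m/s, default reference wind speed

def drag_wu(u10):
    """Piecewise drag coefficient of Wu (1982), Eq. (42)."""
    return 1.2875e-3 if u10 < 7.5 else (0.8 + 0.065 * u10) * 1e-3

def drag_swan_default(u10):
    """Second-order polynomial fit used by default in SWAN, Eq. (43)."""
    r = u10 / U_REF
    return (0.55 + 2.97 * r - 1.49 * r ** 2) * 1e-3

def beta_wam3(u10, c_phase, theta, theta_wind, sigma):
    """Exponential growth coefficient of the WAM Cycle III wind input, Eqs. (40)-(41)."""
    u_star = np.sqrt(drag_swan_default(u10)) * u10          # friction velocity, Eq. (41)
    growth = 28.0 * u_star / c_phase * np.cos(theta - theta_wind) - 1.0
    return max(0.0, 0.25 * RHO_AIR / RHO_WATER * growth) * sigma

# Hypothetical example: 20 m/s wind aligned with a wave component of 10 m/s phase speed.
print(drag_wu(20.0), drag_swan_default(20.0))
print(beta_wam3(20.0, c_phase=10.0, theta=0.0, theta_wind=0.0, sigma=0.8))
```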
In the last decade or so, beginning with the work of Ardhuin, Babanin, and many others, the state-of-the-art source term package has become semi-empirical, with the aim of agreeing better with observed data and, specifically, of being more robust in situations with extreme weather [3]. This work culminated in new source terms for \(S_{in}\) and \(S_{wc}\), often referred to as ST6, which was first fully defined in a paper by Rogers et al. (2012) [49]. This source term package is available and widely used in both SWAN and WAVEWATCH III. The wind input term for the ST6 package is defined as follows: \(\alpha=0\) and \(\beta\) from (39) is \[\beta=\gamma\sigma\frac{\rho_{air}}{\rho_{water}}, \tag{44}\] where \(\gamma\) is a parameter that depends on location, direction, and frequency as: \[\gamma(x,y,f,\theta)=G(x,y,f,\theta)\sqrt{B_{n}(x,y,f)}W(x,y,f,\theta). \tag{45}\] In this context \(f=\frac{\sigma}{2\pi}\), and \(G\), \(B_{n}\), \(W\) are defined as follows: \[G=2.8-(1+\tanh{(10\sqrt{B_{n}(x,y,f)}W(x,y,f,\theta)-11)}), \tag{46}\] \[B_{n}(x,y,f)=\frac{A(x,y,f)}{2\pi}E(x,y,f)k^{3}c_{g}, \tag{47}\] \[W(x,y,f,\theta)=(\max{(0,\frac{U_{10}}{c}\cos(\theta-\theta_{wind})-1)})^{2}. \tag{48}\] In this case \(E(x,y,f)\) is the integrated spectrum \(E(x,y,f)=\int_{\theta}E(x,y,f,\theta)d\theta\), where \(E(x,y,f,\theta)=E(x,y,\sigma,\theta)2\pi\). \(c_{g}\) is the group velocity as previously defined in (10). \(A(x,y,f)\) is the wave steepness defined by: \[\frac{1}{A(x,y,f)}=\int_{\theta}E_{n}(x,y,f,\theta)d\theta. \tag{49}\] Here \(E_{n}(x,y,f,\theta)=\frac{E(x,y,f,\theta)}{E^{\prime}(x,y,f)}\) and \(E^{\prime}(x,y,f)\) is: \[E^{\prime}(x,y,f)=\max_{\theta}{(E(x,y,f,\theta))}. \tag{50}\] To be clear, the SWAN default \(S_{in}\) is a modification of the WAM Cycle 3 terms as defined in (40)-(43), while the ST6 \(S_{in}\) described in (44)-(50) is considered the state of the art and is used in this study.

### A.2 Dissipation \(S_{diss}\)

The change in variance density due to dissipation, \(S_{diss}\), can take many forms depending on which wind wave model is being discussed. Changes in the spectrum due to dissipation are still not well understood; in fact, according to the proceedings of a recent conference on wind waves, "Theoretical and experimental knowledge of the spectral wave dissipation is so insufficient that, to fill the gap, spectral models have been used to guess the spectral dissipation function as a residual term of tuning the balance of better known source functions to fit known wave spectrum features" [1]. In general, \(S_{diss}\) is separated into a sum of several terms, which can include wave breaking in deep water (whitecapping), bottom friction, and depth-induced breaking. There are many other dissipation source terms available in operational models like SWAN and WAVEWATCH III that will not be included in this section. For the purposes of this study, only the most commonly used dissipation source terms are included, namely whitecapping, bottom friction, and depth-induced breaking: \[S_{diss}=S_{wc}+S_{bf}+S_{br}. \tag{51}\]

#### A.2.1 Whitecapping \(S_{wc}\)

The term for whitecapping used in many wind wave models was first developed by Hasselmann in 1974 [24]. The default source term for whitecapping used in SWAN is based on the theory of Hasselmann and is taken from Komen et al. (1984) and the WAMDI group (1988) [35, 18]. The form is: \[S_{wc}=-\Gamma\tilde{\sigma}\frac{k}{\tilde{k}}E(x,y,\sigma,\theta,t). \tag{52}\]
In this case: \[\Gamma=C_{ds}((1-\delta)+\delta\frac{k}{\tilde{k}})(\frac{\tilde{s}}{\tilde{s}_{PM}})^{p}. \tag{53}\] Here \(\tilde{s}\) represents wave steepness and is defined as \(\tilde{s}=\tilde{k}\sqrt{E_{tot}}\), and \(\tilde{s}_{PM}\) is simply \(\tilde{s}\) of the Pierson-Moskowitz spectrum, which is by default \(\tilde{s}_{PM}=\sqrt{3.02\times 10^{-3}}\). \(C_{ds}\), \(\delta\), and \(p\) are empirically tuned coefficients, while \(\tilde{\sigma}\) and \(\tilde{k}\) are the mean frequency and mean wavenumber defined in [18]: \[E_{tot}(x,y)=\int_{\theta}\int_{\sigma}E(x,y,\sigma,\theta)d\theta d\sigma, \tag{54}\] \[\tilde{\sigma}(x,y)=[E_{tot}^{-1}\int_{\theta}\int_{\sigma}\sigma^{-1}E(x,y,\sigma,\theta)d\theta d\sigma]^{-1}, \tag{55}\] \[\tilde{k}(x,y)=[E_{tot}^{-1}\int_{\theta}\int_{\sigma}k^{-1/2}E(x,y,\sigma,\theta)d\theta d\sigma]^{-2}. \tag{56}\] \(C_{ds}\) is by default set to \(2.36\times 10^{-5}\), \(p\) is set to \(2\), and \(\delta\) is \(1\). For the ST6 source term package, the whitecapping term is defined in the following way: \[S_{wc}(x,y,f,\theta)=\begin{cases}0,&\text{if }E(x,y,f,\theta)<E_{T}(x,y,f,\theta)\\ T_{1}(x,y,f,\theta)+T_{2}(x,y,f,\theta)&\text{if }E(x,y,f,\theta)\geq E_{T}(x,y,f,\theta),\end{cases} \tag{57}\] where the threshold spectral density, \(E_{T}\), is defined as: \[E_{T}(x,y,f,\theta)=\frac{2\pi B_{nt}}{A(x,y,f)c_{g}k^{3}}, \tag{58}\] with \(B_{nt}\) a constant equal to \(1.225\times 10^{-3}\) and \(A(x,y,f)\) as in (49). The two contributions of whitecapping are defined as follows: \[\begin{split}& T_{1}(x,y,f,\theta)=a_{1}A(x,y,f)f\left[\frac{\Delta(x,y,f)}{E(x,y,f)}\right]^{L}E(x,y,f,\theta),\\ & T_{2}(x,y,f,\theta)=a_{2}\left[\int_{f_{low}}^{f}\left[\frac{\Delta(x,y,f^{\prime})}{E(x,y,f^{\prime})}\right]^{M}df^{\prime}\right]E(x,y,f,\theta).\end{split} \tag{59}\] The constants \(a_{1},a_{2},L,M\) are calibrated against empirical data, and the function \(\Delta=E-E_{T}\) is the difference between the variance density, \(E\), and the threshold variance density, \(E_{T}\).

#### A.2.2 Bottom Friction \(S_{bf}\)

Dissipation due to bottom friction plays an important role in shallow regions. There are many different formulations used in wind wave models, including drag law theories and eddy-viscosity models [28]. The default implementation in SWAN is from the theory of Hasselmann in 1973 [23, 34]. The bottom friction term is defined as: \[S_{bf}(x,y,\sigma,\theta,t)=-C_{b}\frac{\sigma^{2}}{\sinh^{2}(kd)}E(x,y,\sigma,\theta,t). \tag{60}\] \(C_{b}\) is a proportionality constant determined empirically and by default set to 0.038. It can be seen from the definition that bottom friction becomes more relevant at lower wavenumber magnitude \(k\) and lower water depth \(d\).

#### A.2.3 Depth-Induced Breaking \(S_{br}\)

Lastly, we have the source term due to depth-induced breaking, also known as surf-zone breaking. The most widely used form of \(S_{br}\) is derived from the theory of Battjes and Janssen in 1978 [4]. Their theory approximates the energy lost due to breaking in shallow water as that of a bore. The theory was extended by Eldeberky and Battjes in 1995 to include multi-directional spectra [16] and takes the following form: \[S_{br}=-\frac{a_{bj}Q_{b}(x,y)\tilde{\sigma}(x,y)}{\beta^{2}(x,y)\pi}E(x,y,\sigma,\theta,t). \tag{61}\] \(a_{bj}\) is a constant which is by default set to 1, \(Q_{b}\) is the fraction of breaking waves, \(\tilde{\sigma}\) is the mean relative radian frequency, and \(\beta=\frac{H_{rms}}{H_{max}}\) is a parameter that varies over geographic space.
\(Q_{b}\) is defined as: \[Q_{b}=\begin{cases}0&\beta\leq 0.2\\ Q_{0}-\beta^{2}\,\frac{Q_{0}-e^{(Q_{0}-1)/\beta^{2}}}{\beta^{2}-e^{(Q_{0}-1)/\beta^{2}}}&0.2<\beta\leq 1\\ 1&\beta>1.\end{cases} \tag{62}\] \(Q_{0}\) in this case is calculated as: \[Q_{0}=\begin{cases}0&\beta\leq 0.5\\ (2\beta-1)^{2}&0.5<\beta\leq 1.\end{cases} \tag{63}\] The maximum expected wave height, \(H_{max}\), is defined as a constant times depth, \(H_{max}=\gamma d\). By default, \(\gamma=0.73\). The root mean square wave height is calculated as \(H_{rms}=\sqrt{8m_{0}}\), with \(m_{0}\) being the zeroth moment as in (36). Lastly, the mean relative radian frequency is calculated as: \[\tilde{\sigma}(x,y)=\frac{1}{E_{tot}}\int_{\sigma}\int_{\theta}\sigma E(x,y,\sigma,\theta)d\theta d\sigma. \tag{64}\]

### A.3 Summary of \(S_{nl}\)

Nonlinear interactions as represented by \(S_{nl}\) are conservative, in the sense that the integral of \(S_{nl}\) over the spectrum is always zero. This source term is responsible for redistributing action density across the spectrum due to wave-wave interactions. The interactions can be thought of as energy exchanged due to resonance amongst wave components or, if the waves are thought of as particles, as collisions and exchanges of momentum between particles. The source term due to nonlinear interactions is usually split into a sum of two terms, nonlinear four wave interactions \(S_{nl4}\) and nonlinear three wave interactions \(S_{nl3}\), also known as quadruplet wave interactions and triad wave interactions, respectively.

#### A.3.1 Quadruplets \(S_{nl4}\)

Four wave interactions can be thought of as occurring when two pairs of harmonic waves resonate: \[\begin{split}\sigma_{1}+\sigma_{2}&=\sigma_{3}+\sigma_{4},\\ \mathbf{k}_{1}+\mathbf{k}_{2}&=\mathbf{k}_{3}+\mathbf{k}_{4}.\end{split} \tag{65}\] This results in an exchange of energy between the four harmonic waves. Similarly, three wave interactions occur when a pair of harmonic waves resonates with a single harmonic wave. Due to the dispersion relation, it can be shown that the resonance conditions for triads can only occur in shallow water if Airy Wave Theory is assumed to hold true. The theory describing the physics of the nonlinear four wave interactions was originally developed by Hasselmann [21, 22]. A similar form for the four-wave interactions was also found independently a couple of years later by Zakharov using the kinetic equation [65]. In the derivation by Hasselmann, the form of the quadruplet wave interactions was derived starting with an assumption of Airy Wave Theory and then conducting a perturbation analysis. This leads to the energy transfer within the spectrum taking the form of a Boltzmann integral: \[\begin{split} S_{nl4}(\mathbf{k}_{4})=\iiint T_{1}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k}_{4})E(\mathbf{k}_{1})E(\mathbf{k}_{2})E(\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k}_{4})d\mathbf{k}_{1}d\mathbf{k}_{2}-\\ E(\mathbf{k}_{4})\iiint T_{2}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{4})E(\mathbf{k}_{1})E(\mathbf{k}_{2})d\mathbf{k}_{1}d\mathbf{k}_{2}.\end{split} \tag{66}\] The functions \(T_{1}\) and \(T_{2}\) are known as transfer coefficients and represent the amount of energy exchanged between the wavenumbers [28]. The first integral represents the passive part of the interaction with respect to wavenumber \(\mathbf{k}_{4}\), since it does not depend directly on \(E(\mathbf{k}_{4})\).
The second integral represents the active part of the interaction, that is, it does depend on \(E(\mathbf{k}_{4})\), and it represents the amount of energy that wavenumber \(\mathbf{k}_{4}\) gives to the rest of the wavenumbers. In practice, the functions \(T_{1}\) and \(T_{2}\) are difficult to compute and, more importantly, the Boltzmann integral form is computationally very expensive. There are some implementations that approximate the full Boltzmann integrals, such as the Webb-Resio-Tracy (WRT) method [58]. However, calculating the full Boltzmann integrals is prohibitively expensive and thus limits wave models to simple test cases. For operational settings, a significant reduction in computation is necessary in order to implement the four wave interactions. A popular method to reduce the cost of computing \(S_{nl4}\) is the discrete interaction approximation (DIA) [25, 26]. The DIA simplifies the full Boltzmann integral by considering resonance conditions in only two quadruplet configurations. That is to say, instead of considering all possible four wave resonance conditions as in (65), the following reduced set is considered: \[\begin{split}\sigma_{1}=\sigma_{2}=\sigma,\\ \sigma_{3}=\sigma(1+\lambda),\\ \sigma_{4}=\sigma(1-\lambda).\end{split} \tag{67}\] Here \(\lambda\) is an empirically determined constant, set to \(\lambda=0.25\). Additionally, if the spectrum is in terms of relative radian frequency and direction (\(\sigma\), \(\theta\)), the resonance conditions are restricted to the following two sets of angles: first (*) \(\theta_{1}=\theta_{2}=\theta\), \(\theta_{3}=\theta-11.48^{\circ}\), \(\theta_{4}=\theta+33.56^{\circ}\), and then second (**) \(\theta_{1}=\theta_{2}=\theta\), \(\theta_{3}=\theta+11.48^{\circ}\), \(\theta_{4}=\theta-33.56^{\circ}\). The source term \(S_{nl4}\) as prescribed by the DIA can then be defined as a sum of the contributions from the first set of quadruplet angles (*) and the second (**): \[S_{nl4}(\sigma,\theta)=S_{nl4}^{*}(\sigma,\theta)+S_{nl4}^{**}(\sigma,\theta). \tag{68}\] Each contribution is itself defined as a sum of approximations to the Boltzmann integrals: \[S_{nl4}^{*}(\sigma,\theta)=2\delta S_{nl4}(\alpha_{1}\sigma,\theta)-\delta S_{nl4}(\alpha_{2}\sigma,\theta)-\delta S_{nl4}(\alpha_{3}\sigma,\theta), \tag{69}\] where \(\alpha_{1}=1\), \(\alpha_{2}=1+\lambda\), \(\alpha_{3}=1-\lambda\). Each contributor \(\delta S_{nl4}\) is defined for \(i=1,2,3\): \[\begin{split}\delta S_{nl4}(\alpha_{i}\sigma,\theta)=C\left(\frac{\sigma}{2\pi}\right)^{11}\left(E^{2}(\alpha_{i}\sigma,\theta)\left[\frac{E(\alpha_{i}\sigma_{3},\theta_{3})}{(1+\lambda)^{4}}+\frac{E(\alpha_{i}\sigma_{4},\theta_{4})}{(1-\lambda)^{4}}\right]\right.\\ \left.-2\frac{E(\alpha_{i}\sigma,\theta)E(\alpha_{i}\sigma_{3},\theta_{3})E(\alpha_{i}\sigma_{4},\theta_{4})}{(1-\lambda^{2})^{4}}\right). \end{split} \tag{70}\] \(C\) is an empirically tuned constant. Note that \(S_{nl4}^{**}(\sigma,\theta)\) is exactly the same as (69) but using the second set of angles (**). From (70), we can see that this source term is indeed nonlinear, and in fact cubic with respect to the spectrum \(E(\sigma,\theta)\) at a given location. The DIA as shown above, or a variation of it, is used in the operational wind wave models SWAN, WAM, Mike21, and WAVEWATCH III. It is important to also note that there are other approximations to four-wave interactions than just the DIA and that this is still an active area of research; a schematic evaluation of the DIA contributor (70) is sketched below.
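The following minimal Python sketch shows how the contributor (70) and the first-quadruplet sum (69) could be evaluated for a spectrum supplied as a callable \(E(\sigma,\theta)\). It is illustrative only: the value of the proportionality constant, the continuous-spectrum interface, and the example spectrum are assumptions made here and do not reproduce the discrete-grid DIA implementation used in SWAN.

```python
import numpy as np

LAMBDA = 0.25                          # quadruplet shape constant from Eq. (67)
D3, D4 = np.deg2rad(11.48), np.deg2rad(33.56)
C_NL4 = 3.0e7                          # proportionality constant; order of magnitude assumed here for illustration

def delta_snl4(E, sigma, theta, c=C_NL4, lam=LAMBDA):
    """One DIA contributor, following one reading of Eq. (70); E is a callable E(sigma, theta)."""
    e0 = E(sigma, theta)
    e3 = E(sigma * (1.0 + lam), theta - D3)   # first (*) set of angles
    e4 = E(sigma * (1.0 - lam), theta + D4)
    return c * (sigma / (2.0 * np.pi)) ** 11 * (
        e0 ** 2 * (e3 / (1.0 + lam) ** 4 + e4 / (1.0 - lam) ** 4)
        - 2.0 * e0 * e3 * e4 / (1.0 - lam ** 2) ** 4)

def snl4_star(E, sigma, theta):
    """First-quadruplet contribution, Eq. (69): contributors at alpha_i * sigma with weights (2, -1, -1)."""
    alphas, weights = (1.0, 1.0 + LAMBDA, 1.0 - LAMBDA), (2.0, -1.0, -1.0)
    return sum(w * delta_snl4(E, a * sigma, theta) for w, a in zip(weights, alphas))

# Hypothetical smooth analytic spectrum used only to exercise the functions.
E = lambda sig, th: 1e-3 * np.exp(-((sig - 1.0) ** 2) / 0.1) * np.cos(th / 2.0) ** 2
print(snl4_star(E, 1.0, 0.0))
```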
Some alternatives to the DIA include the two scale approximation (TSA), the RIAM method, and the Generalized Multiple DIA (MDIA) [48, 33, 56].

#### A.3.2 Triads \(S_{nl3}\)

Three wave interactions (or triads) are similar to four wave interactions, except that, whereas four wave interactions occur in both deep and shallow water, triads are typically confined to shallow water [63]. This is because the resonance conditions for triads cannot be met in deep water if the dispersion relation is assumed to hold. The resonance conditions for three wave interactions are: \[\begin{split}\sigma_{1}\pm\sigma_{2}=\sigma_{3},\\ \mathbf{k}_{1}\pm\mathbf{k}_{2}=\mathbf{k}_{3}.\end{split} \tag{71}\] Theoretical modelling of three wave interactions in the literature is often based on Boussinesq-type equations [28]. Boussinesq-type models are phase-resolving, which means that on a large domain they are computationally expensive and highly nonlinear, and the triad wave interactions occur implicitly within them. In order to include the effects of three wave interactions in a spectral wind wave model, significant simplifications are needed. A common approach in modern wind wave models is the lumped-triad approximation (LTA) of Eldeberky [15], which is of the following form: \[S_{nl3}(\sigma,\theta)=S_{nl3}^{+}(\sigma,\theta)+S_{nl3}^{-}(\sigma,\theta). \tag{72}\] The first term on the right-hand side is: \[S_{nl3}^{+}(\sigma,\theta)=\max\left[0,\alpha_{EB}2\pi cc_{g}J^{2}|\sin{(\beta)}|\left(E^{2}(\sigma/2,\theta)-2E(\sigma/2,\theta)E(\sigma,\theta)\right)\right]. \tag{73}\] \(\alpha_{EB}\) is an empirically tuned coefficient, \(c\) is the phase speed (determined by the dispersion relation), \(c_{g}\) is the group velocity as defined in (10), \(J\) is called the interaction coefficient and is a function of water depth, phase speed, and wavenumber, and \(\beta\) is called the biphase. The second term on the right-hand side is defined as: \[S_{nl3}^{-}(\sigma,\theta)=-2S_{nl3}^{+}(2\sigma,\theta). \tag{74}\] The original derivation can be found in [15]; a more accessible summary can be found in [28]. The LTA is implemented in the operational wind wave models SWAN, WAVEWATCH III, and Mike21.

## CRediT authorship contribution statement

**Mark Loveland:** Methodology, Software, Validation, Writing - Original Draft. **Jessica Meixner:** Methodology, Writing - Review & Editing. **Eirik Vasheth:** Methodology, Writing - Review & Editing. **Clint Dawson:** Resources, Supervision, Project administration.
2305.15435
Possible connections between relativity theory and a version of quantum theory based upon theoretical variables
An alternative approach towards quantum theory is described, and tentative attempts to connect this approach to special and general relativity are discussed. Important concepts are gauge groups and information/entropy connected to some physical systems. Some recent results on information in connection to black holes are touched upon, and it is indicated how expected information can be argued to be conserved. This argument only depends on what happens outside the black hole. Everything connected to the interior of the black hole is inaccessible.
Inge S. Helland
2023-05-23T09:40:34Z
http://arxiv.org/abs/2305.15435v4
Possible connections between relativity theory and a version of quantum theory based on conceptual variables

###### Abstract

An alternative approach towards quantum theory is described, and tentative attempts to connect this approach to special and general relativity are discussed. Important concepts are gauge groups and information/entropy connected to some physical systems. Some recent results on information in connection to black holes are touched upon. The discussions here must be considered to be preliminary.

## 1 Introduction

To find a conceptual basis from which both quantum theory and general relativity can be understood is one of the most challenging problems in modern physics. Many researchers and several research groups have made their proposals on how to attack this problem. The most well known approaches are the following three: 1) Quantum loop theory. (For a popular partial account, see Rovelli [1]). 2) String theory. (For a brief introduction, see Susskind and Lindsay [2]). 3) The pure mathematical modelling approach. (See for instance Laudal [3]). From my point of view, the operational approach by Hardy [4] may be particularly enlightening. Several relevant references can be found in the latter paper. In contrast to these references, I will rely on a new and different approach towards the axioms of quantum theory. The approach started with the book [5], and has now been further developed in a series of articles. A summary of the theory is now given in [6]. Central to the theory is a simple model of the mind of an observer, a model which may be generalized to the mind of any person. It relies on what I call conceptual variables, which may be physical variables, but in the process of planning, doing, or interpreting experiments, the variables are also assumed to exist in the mind of a relevant actor. From a mathematical point of view, my only requirement on the conceptual variables is the following: If \(\lambda\) is a conceptual variable and \(\theta=f(\lambda)\) for some fixed function \(f\), then \(\theta\) is also a conceptual variable. Some conceptual variables may be accessible, that is, by experiment or measurement it is possible in some future to obtain as good information about them as we want. This definition may be unclear to some readers, but again, from a mathematical point of view, I must stress: The only property that I require of my accessible variables is: If \(\lambda\) is accessible, and \(\theta=f(\lambda)\) for some fixed function \(f\), then \(\theta\) is accessible. From a physical point of view, two examples of accessible variables are the theoretical position and the theoretical momentum of a particle. I say theoretical, since I model measurement as a theoretical value plus random error. This is in the tradition of statisticians, who will regard my theoretical values as parameters. I have deliberately avoided the word parameter in my theory, since this word also has a different meaning for a physicist. Another, and perhaps simpler, physical example of an accessible variable is the spin component in a fixed direction \(a\) of some particle with spin. In quantum theory, this is a discrete variable, and for discrete variables, exact values can be obtained by good experiments, say, a Stern-Gerlach experiment. Now to my model of the mind of some actor: I assume that in some fixed context he has several accessible variables in his mind, say \(\theta,\eta,\xi,...\).
These may be physical variables as above, but they may also be completely different conceptual variables. _As a model assumption assume that there exists an inaccessible variable \(\phi\) such that each accessible variable is a function of \(\phi\)._ In the two physical examples above, it is easy to give concrete realisms of such a \(\phi\). In the first example it can be taken to be the vector (theoretical position, theoretical momentum), which is inaccessible by Heisenberg's inequality. In the second example we can use the abstract spin vector as \(\phi\). For an electron, say, the component in direction \(a\) can be taken as \(\theta^{a}=f^{a}(\phi)=\mbox{sign}(\cos(\phi,a))\). In general, the existence of \(\phi\) must just be seen as a model, but it turns out to be a useful model. In both the examples above, the relevant accessible variables mentioned may be seen as maximal, no 'larger' accessible variables may be found by the following partial ordering: Say that \(\theta\) is 'less than or equal to' \(\lambda\) if \(\theta=f(\lambda)\) for some function \(f\). By using Zorn's lemma on this partial ordering, it follows from the model that maximal accessible variables always exist. These turn out to be important. In Section 2 below, I show that, by adding suitable symmetry assumptions to this model, essential elements of quantum mechanics emerge. In the theory that is developed here, we may think of conceptual variables in the mind of some single person. But alternatively, we may also think of conceptual variables in the joint minds of a communicating group of persons. The only difference in the latter case, is that the variables always must be defined in words, in order that communication shall be possible. In general, one may take the standpoint that every scientific theory (including what I will present below) is coupled to the mind of at least one person or to the joint minds of a group of communicating persons. In this connection, concepts must be formed in this mind (these minds), in particular what I have called conceptual variables. In the present paper these variables will be physical variables like space, time, mass, momentum, charge, spin component etc., that in most cases may be said to have an existence related to an objective reality, at least in classical theories, but in connection to measurements, to theory building and theory assessments, the variables must also be said to exist in the mind of some person and/or in the joint minds of a group of communicating persons. An alternative option may be to just see physical variables as'stand-alone' objects connected to some established mathematical model. Regardless of how we regard these variables, we need a measurement theory. It is argued in [5] that a quantum theory of measurement is much easier to understand if one takes as a basis the version of variables that exist in our minds. In chapter 4 of [5] and in [7], essential elements of quantum mechanics are deduced from some concrete theorems regarding these conceptual variables. Our task here will be to try to connect such variables also to relativity theory, and to look at some consequences of such connections. ## 2 Quantum theory from conceptual variables This author has a background as a statistician, but from this background he has worked with the foundation of quantum mechanics for many years. The result of this work is the book [5] and several published papers in physics journals and on the arXiv. 
The work is now summarized in [6], and more recently, and more thoroughly in [33]. One main result from [5], as generalized in [7], is the following: **Theorem 1**_Consider a situation where there are two maximal accessible conceptual variables \(\theta\) and \(\xi\). Make the following assumptions:_ _(i) On one of these variables, \(\theta\), there can be defined group actions from a transitive group \(G\) with a trivial isotropy group and with left invariant measure \(\rho\) on the space \(\Omega_{\theta}\)._ _(ii) There exists a unitary multivariate representation \(U(\cdot)\) of the group \(G\) defined on \(\theta\) such that the coherent states \(U(g)|\theta_{0}\rangle\) are in one-to-one correspondence with \(g\in G\) and hence with the values of \(\theta\)._ _(iii) The two maximal accessible variables \(\theta\) and \(\xi\) can both be seen as functions of an inaccessible variable \(\phi\in\Omega_{\phi}\). There is a transformation \(k\) acting on \(\Omega_{\phi}\) such that \(\xi(\phi)=\theta(k\phi)\)._ _Then there exists a Hilbert space \(\mathcal{H}\) connected to the situation, and to every accessible conceptual variable there can be associated a unique symmetric operator on \(\mathcal{H}\)._ Of course the Hilbert space \(\mathcal{H}\) here is the one associated with the representation (ii) in the theorem. The most important result is that to every accessible conceptual variable there is associated a unique operator on this Hilbert space. Explicit formulas for the operators are given in [5], [7] and [33]. To understand this theorem, some definitions are necessary; for these, see the Introduction above. To repeat: Mathematically, to prove the theorem, we only need the following conditions: If \(\lambda\) is a conceptual variable, then \(\theta=f(\lambda)\) for some fixed function \(f\), then \(\theta\) is also a conceptual variable. And if \(\lambda\) is accessible, then also \(\theta\) is accessible. But in the interpretation of the theorem, I choose to connect the variables to the mind of an observer or to the joint minds of a group of communicating observers. The notion of a conceptual variable can then be seen as a generalization of the statistician's parameter notion. As such it is crucial, at least in the continuous case, to distinguish between data and variables. Future data can be modeled as (theoretical) variable plus random noise. It is important that there in situations related to quantum theory as approached in [5], also exist inaccessible conceptual variables, like the full spin vector of a particle or the vector (theoretical position, theoretical momentum). Thus Heisenberg's uncertainly relation is an important assumption behind the theorem, essentially the only physical assumption that is needed. The assumption (iii) can be satisfied under weak conditions, as shown in [7]. When it is satisfied, we say that the variables \(\theta\) and \(\xi\) are _related_. When no \((\phi,k)\) can be found such that \(\xi(\phi)=\theta(k\phi)\), we say that \(\theta\) and \(\xi\) are essentially different. The operators of related conceptual variables have a close relationship. To formulate this precisely, we first need a definition. **Definition 1**_The function \(\theta(\cdot)\) on a space \(\Omega_{\phi}\) upon which a group of transformations \(K\) is defined, is said to be permissible if the following holds: \(\theta(\phi_{1})=\theta(\phi_{2})\) implies \(\theta(k\phi_{1})=\theta(k\phi_{2})\) for all \(k\in K\)._ This notion is studied thoroughly in [8]. 
The main conclusion is that if \(\theta(\cdot)\) is permissible, then there is a group \(G\) acting on the image space \(\Omega_{\theta}\) such that \(g(\theta(\phi))\) is defined as \(\theta(k\phi)\); \(k\in K\). The mapping here from \(K\) to \(G\) is a homomorphism. If \(K\) is transitive on \(\Omega_{\phi}\), then \(G\) is transitive on \(\Omega_{\theta}\). (Lemma 4.3 in [5].) **Theorem 2**_Assume that the function \(\theta(\cdot)\) is permissible with respect to a group \(K\) acting on \(\Omega_{\phi}\). Assume that \(K\) is transitive and has a trivial isotropy group. Let \(T(\cdot)\) be an irreducible unitary representation of \(K\) such that the coherent states \(T(k)|\psi_{0}\rangle\) are in one-to-one correspondence with \(k\). For any transformation \(t\in K\) and any such unitary representation \(T\) of \(K\), the operator \(T(t)^{\dagger}A^{\theta}T(t)\) is the operator corresponding to \(\theta^{\prime}\) defined by \(\theta^{\prime}(\phi)=\theta(t\phi)\)._ Theorem 2 is proved in the Appendix of [9], an article on the Bell experiment. (For a shorter and more precise version of the latter aspect, see [10].) In Chapter 4 of [5] it is proved that, in the discrete case, essential elements of ordinary quantum mechanics follow from variants of Theorem 1 and Theorem 2 above. In general, the assumption that there can be defined a transitive group \(G\) acting upon \(\theta\) is crucial. It can easily be satisfied when the range of \(\theta\) is finite or is the whole line \(\mathbb{R}^{1}\), but it is also relevant when \(\theta\) is a vector. As an example, assume that \(\theta\) takes all values in some Euclidean space \(\mathbb{R}^{p}\). Then all the necessary assumptions are satisfied by the translation group: \(\theta\mapsto\theta+\alpha\), where \(\alpha\) is some arbitrary vector in \(\mathbb{R}^{p}\). In the finite case, the group \(G\) can be taken to be the cyclic group on \(\Omega_{\theta}\). When \(\theta\) and \(\theta^{\prime}\) take two values, say \(-1\) and \(+1\), they can be taken to be spin components, and the group \(K\) can be defined as the group of rotations in the plane determined by the two components. Thus Theorem 1 and Theorem 2 can be used in a new foundation of quantum theory, and this foundation is in no way limited to finite-valued or scalar variables. However, in the discrete case, more can be proved [5]: The set of eigenvalues of the operator \(A^{\theta}\) equals the set of possible values of \(\theta\). The accessible variable \(\theta\) is maximal if and only if each eigenvalue is simple, that is, each eigenspace is one-dimensional. This gives a nice interpretation of the eigenvectors of operators with a physical interpretation. In the present article I will limit the concept of state vector to vectors that can be given such an interpretation. It is shown in [7] that also certain entangled states may be interpreted in this way. All the mathematical proofs are now collected in [33]. It is also proved there that, in the finite-dimensional case, explicit constructions of the groups \(G\) and \(K\) and of the transformation \(k\) can be given. This greatly simplifies the theory. In general, the eigenspaces of the operators connected to variables \(\lambda\) are in one-to-one correspondence with questions: 'What will be the value of \(\lambda\) if I measure it?', together with a sharp answer '\(\lambda=u\)'. If and only if the accessible variable \(\lambda\) is maximal, the eigenspaces are one-dimensional.
This gives a concrete, very simple interpretation of many unit vectors in the Hilbert space. The difficult problem of determining when all relevant unit vectors in some concrete situation can have such a representation is briefly taken up in [11]. Having established this important foundation, the other main foundational result to prove is the Born formula. In [5] and in Article 5 this formula is proved under the following three assumptions: 1) The likelihood principle from statistics holds (this principle is motivated in Chapter 2 of [5]). 2) The actor performing the relevant experiment or measurement has ideals which can be modelled by a perfectly rational abstract being. 3) The state in the mind of this actor describing the physical system before the measurement or experiment is coupled to a maximal accessible variable. It can be shown (Article 5) that the Born formula can be given a form where the last assumption can be dispensed with, but then we have to assume that the relevant conceptual variable \(\theta\) is dominated by a maximal accessible variable \(\eta\) such that the conditional distribution of \(\eta\), given \(\theta\), is uniform. In [7], several so-called paradoxes of quantum mechanics are also briefly discussed. In particular, in connection to the Schrödinger cat paradox, it is argued for a version of quantum theory where the state vector concept is limited to eigenvectors of physically meaningful operators. In this version it is possible to link pure states to question-and-answer pairs as above. As said, in the present article I will limit my discussion of quantum theory to situations where the above link can be assumed. For further consequences of this theory, see [5].

## 3 Causality, inference, and reality

The book [5] concentrates on epistemic processes, processes to obtain knowledge through experiments or measurements. (Of course, there are also other ways to obtain knowledge; this is largely ignored in [5].) A very important problem that remains to be discussed is to what extent the results of such epistemic processes can be associated with some sort of reality, a 'real' world. The only statement about this given in [5] is the following: 'If all real and imagined observers can be said to agree on the result of some experiment or measurement, then this is a strong argument to the effect that this result can be coupled to some reality. This conclusion is strengthened if the experiment is done in a 'proper' scientific way.' Recently, a deeper discussion of the reality question was attempted by Schmid et al. [12]. Two weaknesses of that paper, however, are first that statistical inference is limited to Bayesian inference, and next that the paper in some sense mixes the concept of ontology with something related to cause-and-effect relations. Another answer to the question of whether classical ontology can be made compatible with quantum mechanics is given by Evans [13], a paper where also [12] is criticized. My own views on this question are now given in [14].

## 4 Conceptual variables related to relativity theory

It is sometimes said that one of the obstacles to combining quantum theory and general relativity is that in quantum field theory [15], say, time and space are independent variables, while in general relativity theory, time and space are the basic constituents.
I want to tune down this difference here: To me, here _time and space are physical variables, but also conceptual variables associated with the mind of some actor or with the joint minds of a group of actors_. In a given situation, some actors may focus on the theories where time and space are independent variables (relativity theory), while other actors may focus on theories like quantum field theories. Most people do not focus on any of these theories at all, but researchers trying to think deeply do. In the following I will not rely on any of the deep conceptual variables that recent researchers have invented in attempts to understand the general situation. In particular, I will not mention strings, loops, nor multiple universes. I will only take as my points of departure simple variables, in particular space and time, momentum and energy. It is interesting, however, to ask whether my approach towards quantum mechanics can be generalized to modern quantum field theories, in the way these theories are developed as a background for the standard model in physics.

## 5 Field theories and gauge groups

I will start with classical field theories, where I by 'classical' also include special and general relativity theory. The field theories will be seen as models in physics, and as such, they also exist in the joint minds of a communicating group of physicists. In a concrete setting, important variables are space and time. A concrete event can always be thought of as taking place at a specific time-space point \(\tau=(t,x,y,z)\), where \(t\) is the time as measured by some actor, and \((x,y,z)\) are the space variables as measured by the same actor. In general, \(\tau\) is a conceptual variable, and it varies in, say, \(\Omega_{\tau}\). A field is then defined as a function from \(\Omega_{\tau}\) to another mathematical space \(\Omega_{\psi}\): \[\tau\mapsto\psi(\tau). \tag{1}\] In agreement with my previous theory, I assume that some fields are accessible. Physical examples are electric and magnetic fields. As my basic model, I assume the existence of a large inaccessible field \(\phi=\phi(\tau)\) such that all accessible ones are functions of this field, say \(\theta(\tau)=f_{\theta}(\phi(\tau))\). I just assume that \(\phi\) takes values in some mathematical space \(\Omega_{\phi}\). But I also assume that a group \(K\) is defined acting on \(\Omega_{\phi}\). If \(\theta(\cdot)\) defined by \(\tau\mapsto\theta(\tau)\) is accessible, then \(G\) is assumed to be a group acting on \(\Omega_{\theta}\). It may or may not be that the function \(f_{\theta}\) is permissible with respect to \(K\). If it is permissible, then \(G\) may be defined by \(gf_{\theta}(\phi(\tau))=f_{\theta}(k\phi(\tau))\) for \(k\in K\). A local variant of this will appear if the group elements \(g\in G\) and \(k\in K\) depend on the time-space point \(\tau\). In any case, a quantum version of the field theory may tentatively be defined by appealing to Theorem 1 and Theorem 2 above. The basic assumption behind this version of Theorem 1 is that we have two related maximal accessible fields \(\theta\) and \(\xi\), and that the group \(G\) acting upon \(\theta\) has certain properties. Specifically, it should be transitive and have a trivial isotropy group, and it should have an irreducible representation \(U(\cdot)\) such that the coherent states \(U(g)|\theta_{0}\rangle\) for some fixed state vector \(|\theta_{0}\rangle\) are in one-to-one correspondence with \(g\).
If this is the case, quantum operators \(A^{\theta(\tau)}\)and \(A^{\xi(\tau)}\) can be defined for each \(\tau\). In the global case, these operators will be independent of \(\tau\), in the local case they will depend on \(\tau\) I will not here go into concrete applications of this other than those associated with relativity theory, but I will define in general what I mean by a gauge group. **Definition 2**_The gauge group is a subgroup \(H\) of the group \(K\), and it is defined with respect to all (maximal) accessible fields and variables \(\theta(\cdot),\xi(\cdot),\lambda(\cdot)....\) Specifically, it is defined as the maximal group such that all \(\theta(\phi),\xi(\phi),\lambda(\phi)...\) are constant: \(\theta(h\phi)=\theta(\phi)\) and so on._ _As before, we can have a local variant where the elements \(h\) depend on the time-space variable \(\tau\)._ It is enough to verify the criterion of constancy for the maximal variables. And if the maximal variables are related, it is enough to verify this criterion for one variable. Note that a change of gauge \(h\) will not affect any accessible variables, so the physics will be the same. Gauge theories are central in modern physics, in particular in connection to quantum field theory; I will not go further into any of these themes here, but refer to [15]. However, I am interested in a possible gauge theory associated with special and general relativity; this will be very briefly discussed later, but already here it is convenient to introduce a Lagrangian density and a Lagrangian. Assume that derivatives with respect to the four-vector \(\tau\) can be defined in the space \(\Omega_{\phi}\), and let the components be \(\partial_{\mu}\phi\) for \(\mu=1,...4\). Denote the space in which this four-vector varies as \(\Omega_{\pi}\), and define \(\Omega_{\psi}=\Omega_{\phi}\otimes\Omega_{\pi}\). The Lagrangian density is then defined as some function on \(\Omega_{\psi}\), and the Lagrangian as the integral of this function over four-space, which can be seen as a function on the field \(\psi(\cdot)\). In order to include the Lagrangian, we now extend Definition 2 to the field \(\psi(\cdot)\), and assume that the group \(K\) can be defined to act on the whole space \(\Omega_{\psi}\). When we will be interested in local gauge theories, we concentrate on the Lagrangian density instead of the Lagrangian. The accessible variables \(\theta(\cdot),\xi(\cdot),\lambda(\cdot)...\) may also be defined on \(\Omega_{\psi}\) in general. ## 6 Information and entropy Since Shannon [16, information has been coded in bits. However, the term information also has wide connotations, one person can have information about other persons or about phenomena in the real world. In his mind, this is coded in terms of conceptual variables. In this article, I will consider a situation where an actor or a group of communicating actors have focused on one particular maximal accessible variable \(\theta\). The information connected to this situation can be formulated in terms of the bits associated with the different values of \(\theta\). Note again that this information depends upon the particular actor/ group of actors. In a concrete relativistic setting, \(\theta\) may be the spacetime values connected to some physical system, and we may also be interested in the complementary variable \(\xi\), the energy-momentum vector connected to the physical system. The information associated with these complementary variables will in general be different. 
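As a small numerical illustration of the point that the information associated with complementary variables is in general different, the sketch below computes the Shannon information of the outcome distributions of two complementary two-valued variables from the same state, using Born probabilities \(q_{j}=\langle\phi_{j}|\rho|\phi_{j}\rangle\) (anticipating the formulas given next). The two-level example, the chosen bases, and the numerical values are assumptions made here purely for illustration.

```python
import numpy as np

def shannon(p):
    """Shannon information (base 2) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A mixed state rho = sum_i p_i |psi_i><psi_i| for a two-valued maximal accessible variable theta.
p = np.array([0.8, 0.2])
psi = np.eye(2)                                    # eigenvectors of the operator associated with theta
rho = sum(pi * np.outer(v, v) for pi, v in zip(p, psi))

# A complementary maximal accessible variable xi with eigenvectors in a rotated basis.
phi = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
q = np.array([v @ rho @ v for v in phi])           # Born probabilities q_j = <phi_j|rho|phi_j>

print(shannon(p))   # information associated with theta
print(shannon(q))   # information associated with xi -- in general different
```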
We consider first the case where \(\theta\) and \(\xi\) take a finite or countable set of values. The Shannon information associated with a variable assumes a probability distribution over this variable. We will assume that our knowledge of the maximal accessible variable \(\theta\) is given by a mixed state \[\rho=\sum_{i}p_{i}P_{i}, \tag{2}\] where \(P_{i}=|\psi_{i}\rangle\langle\psi_{i}|\) are orthogonal one-dimensional projection operators, and \(p_{i}\) are probabilities. Then the Shannon information is given by \[H^{\theta}=-\sum_{i}p_{i}\text{log}(p_{i}), \tag{3}\] where the logarithm is taken to base 2. Assume now a complementary, maximal accessible variable \(\xi\), which through Theorem 1 is associated with an operator \(A^{\xi}=\sum_{j}a_{j}Q_{j}\), where the \(Q_{j}=|\phi_{j}\rangle\langle\phi_{j}|\) constitute another orthogonal set of one-dimensional projection operators. Then through Born's rule the probabilities of the different values are \[q_{j}=\langle\phi_{j}|\rho|\phi_{j}\rangle, \tag{4}\] and the associated Shannon information is \[H^{\xi}=-\sum_{j}q_{j}\text{log}(q_{j}). \tag{5}\] Analogous formulas hold in the continuous case. For a random variable with probability density \(f(x)\), the Shannon information is \[H=-\int_{x}f(x)\text{log}(f(x))dx. \tag{6}\] From a physical point of view, it is very important that Shannon information is proportional to the thermodynamic concept of entropy. The proportionality constant is Boltzmann's constant \(k_{B}\) (when natural logarithms are used). This connection was first made by Ludwig Boltzmann, and expressed by his equation for entropy \[S=k_{B}\text{ln}(W), \tag{7}\] where \(W\) is the number of microstates that can give a given macrostate. It is assumed that each microstate is equally likely, so that the probability of a given microstate is \(p_{i}=1/W\). According to Jaynes [17], thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: The thermodynamic entropy is defined as being proportional to the amount of further information needed to define the detailed microscopic state of the system. Adding heat to a system increases the thermodynamic entropy because it increases the number of possible microstates. Maxwell's demon can hypothetically reduce the thermodynamic entropy of a system by using information about the states of individual molecules, but as shown by Landauer [18] and later coworkers, to function, the demon himself must increase the thermodynamic entropy by at least the amount of Shannon information he uses in the process. Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information. Information systems have been studied by several authors. As an example, I will mention the article [19] by Liang et al., who obtain relationships between information entropy and certain knowledge measures. It is a basic principle of physics that the entropy of a closed system can never decrease. Example Imagine two scientists \(A\) and \(B\), both busy writing articles on thermodynamics, information and entropy. Both want to illustrate their theories by an example based on a deck of 52 cards. They have certain ideas on the random process of shuffling the deck, and they want to describe these ideas in some detail. At the same time, they want to illustrate the physical law that entropy increases. 
So both say that the shuffling process starts in a state with low entropy: The deck is ordered. However, there is a difference between \(A\) and \(B\). To \(A\), an ordered deck starts with the ace of spades, then the two of spades, all the other spades, then the hearts, then the diamonds, and finally all the clubs. To \(B\), an ordered deck starts with all the aces, then all the twos, the threes and so on. My point is: The description of the state of the deck after one shuffling will not be the same for \(A\) and for \(B\). In a way, the state concept depends upon the actor. In the same way I will say that the entropy in physics in principle must be based on a concept of states which may depend on an actor, an observer. In 1927, John von Neumann proposed a formula for the entropy connected to a quantum mechanical mixed state \(\rho\): \[S=-k_{B}\mbox{trace}(\rho\mbox{ln}(\rho)). \tag{8}\] Apart from the constant \(k_{B}\) and the base of the logarithm, this equals (3) when the state is given by (2). It is connected to the distribution of the maximal accessible variable \(\theta\) that the actor in question has knowledge of. Any other, complementary variable \(\xi\) will be connected to another entropy (compare (5)). Algorithmic randomness is defined by the size in binary digits of the shortest message that can reproduce the microstate of a system uniquely in some given setting. This definition was used by Zurek [20] to measure disorder without any recourse to probabilities. Gibbs and Boltzmann's entropy, as well as Shannon's information-theoretic entropy, then provide estimates of the expected value of the algorithmic randomness. In [21] there is a thorough comparison between, on the one hand, Kolmogorov's fundamental concept of complexity, which is the length in bits of the shortest computer program that prints a given sequence of symbols and then halts, and, on the other hand, Shannon's concept of information. Although their primary aims are quite different, and they are functions defined on different spaces, there is a close relationship between the two concepts. It is also pointed out that there is a relationship to the statistical notion of a sufficient statistic. Shannon information has two interpretations, one axiomatic, connected to \(H\) as a function of probabilities, and one in terms of coding. The latter derives from entropy as the minimum average length in bits needed to encode outcomes in some sample space. There is also a connection between Shannon information and Kolmogorov complexity: Expected Kolmogorov complexity equals Shannon entropy. Both concepts lead to a notion of mutual information \(I\) between two variables \(\theta\) and \(\xi\): \[I(\theta,\xi)=H(\xi)-H(\xi|\theta). \tag{9}\] In a statistical setting one can talk about the mutual information between data \(x\) and parameter \(\theta\), related to the probabilistic model for data, given a parameter, and a possible prior for this parameter. A function of data \(S(x)\) is _sufficient_ relative to the model iff \[I(\theta,x)=I(\theta,S(x)) \tag{10}\] for all prior distributions of \(\theta\). This is equivalent to \[H(x|\theta)=H(S(x)|\theta) \tag{11}\] for all \(\theta\). ## 7 Special relativity Both special and general relativity theory discuss how variables change when the observers change. 
For space and time this is essential: Special relativity theory is concerned with observers that move with a uniform speed relative to each other; in general relativity theory relative acceleration is allowed. However, none of these theories take up the problems associated with the fact that concepts can be related to the minds of people. Here I want to discuss some aspects of this. I will first concentrate on special relativity theory. Take as a point of departure an observer \(A\) with space coordinates \((0,0,0)\), and let the time \(t\) run. Relative to this observer, a given physical system, say, a particle, may be characterized by special values of the four-vector \(\theta=(t,x,y,z)\), which for instance may give the location of the particle at time t for this observer. This may be accessible at some fixed time \(t\), but is inaccessible as a process. Alternatively, one may look upon the 'particle' as a wave, specify its frequency \(f\) and its wavevector \(k\), hence its energy \(E=hf\) and its momentum \(p=hk\), that is, values of the four-momentum \(\xi=(E,p_{x},p_{y},p_{z})\). Both \(\theta\) and \(\xi\) are maximal accessible variables, can be seen as physical values, but may also be associated with the mind of the observer \(A\). We can show that by Theorem 1, these two variables imply a Hilbert space \(\mathcal{H}\), and on this Hilbert space, \(\theta\) has an operator \(A^{\theta}\), and \(\xi\) has an operator \(A^{\xi}\). Both these operators change when the observer changes. Crucially for the proof of Theorem 1 are the definition of the group \(G\) on the \(\theta\)-space, the definition of the inaccessible variable \(\phi\), and the construction of a suitable transformation \(k\) in the \(\phi\)-space. For \(G\) we may take the group of 4-dimensional translations. We can just take \(\phi=(\theta,\xi)\) and let \(k\) be a suitable element of the Weyl-Heisenberg group acting on \(\phi\). It is also crucial here that \(\theta\) and \(\xi\) can be looked upon as conceptual variables, not necessarily data. Earlier, all conceptual variables were denoted by greek letters, it is hoped that the latin letters above do not lead to any misunderstanding. It is assumed that the _measurement_ of any function of \(\theta\), say the \(x\)-coordinate, can be modeled by the conceptual variable \(x\) plus some random noise. Note that the Poincare group \(P\) also can be seen as acting on the four-momentum \(\xi\). Let \(B\) be an observer which moves relative to \(A\) with a constant speed \(v<c\). Then both \(\theta\) and \(\xi\) change according to actions of the Poincare group \(P\). This group is transitive on the relevant spaces. Its group elements \(p\) can be seen as a combination of a translation \(g\) and a member \(l\) of the Lorentz group. The Lorentz group in turn consists of rotations and Lorentz boosts, coordinate frames moving with constant velocity along the positive \(x\)-axis. Assume that the given event is in the future light cone both for \(A\) and for \(B\) at time \(0\). The clocks are calibrated such that \(t=0\) for \(A\) coincides with \(t^{\prime}=0\) for \(B\). Unitary representations of the translation group \(G\) are discussed in textbooks. I will not go into details here. It suffices to say that irreducible representations \(U(g)\) can be found that the coherent states \(U(g)|\psi\rangle\) are in one-to-one correspondence with the group elements \(g\). 
Thus from Theorem 1 operators \(A^{\theta}\) and \(A^{\xi}\) acting on a suitable Hilbert space may be constructed. If \(p\) is the element of the Poincare group transforming \(\theta\) for observer \(A\) into the corresponding coordinate \(\theta^{\prime}\) for \(B\), then \(A^{\theta^{\prime}}=V(p)^{\dagger}A^{\theta}V(p)\), where \(V\) is a unitary irreducible representation of the Poincare group. Such representations were discussed by Wigner in 1939 [22]. To study the change of the operator \(A^{\xi}\) when \(A\) is replaced by \(B\), we first need some group element \(t\) in a larger group \(K\) acting on the vector \(\phi=(\theta,\xi)\) such that \((\xi,\theta)=t(\theta,\xi)\). This can be achieved by considering a variant of the Weyl-Heisenberg group connected to the observer \(A\). Let \(p\) in the Poincare group also be seen as a member of \(K\) by \(p(\theta,\xi)=(p\theta,\xi)\). Then \((\theta^{\prime},\xi^{\prime})=h(\theta^{\prime},\xi)\) is found from \(h=tpt\), since \(t^{2}\) is the identity. By Theorem 2, we then get \(A^{\xi^{\prime}}=T(h)^{\dagger}A^{\xi}T(h)\) for some unitary irreducible representation \(T\) of the large group \(K\). It is left to prove that the relevant functions are permissible with respect to the group \(K\), but this I will leave as an open mathematical problem. Operators associated with groups can be constructed in many ways. One well-known way is as generators of the Lie algebras connected to Lie groups. This approach is taken in [23] for the Poincare group and several related groups. An interesting feature is that all the groups are derived there through symmetries of the commutation relations associated with Heisenberg's uncertainty relations. But let us go back to the two observers and their conceptual variables as they were introduced above. We have two possibilities: The relationship between the observers may be timelike or it may be spacelike. In the first case, assume that \(B\) is in the future light cone for \(A\). Both observe the event which is given by \(A\) as happening at time \(t\) and space coordinates \((x,y,z)\). Both have in principle two possibilities: They can measure \(\theta=(t,x,y,z)\), respectively \(\theta^{\prime}\), or they can measure the complementary variables \(\xi=(E,p_{x},p_{y},p_{z})\), respectively \(\xi^{\prime}\). Look first at the timelike case. Assume an ideal situation such that \(A\) immediately after his measurement is able to send his result to \(B\) with a light signal travelling with speed \(c\). Then \(B\) knows the value of either \(\theta\) or \(\xi\), and by using his knowledge of the Poincare transformation, he can find the corresponding \(\theta^{\prime}\), respectively \(\xi^{\prime}\). By Heisenberg's uncertainty relation he is not allowed to know both these variables exactly. But that must mean that he at the same time is not able to choose to measure the other variable. Hence we seem to conclude, by using both relativity theory and quantum mechanics in our reasoning, that \(B\) in this case is limited in his choice of measurement. However, this contradicts the axiom of free choice. Hence one must modify the reasoning behind this paradox. The simplest modification is to assume that \(A\) must always have a shorter or longer delay from the moment he obtains the result of his measurement to the moment when he is able to send it away. In the other case, when \(A\) and \(B\) have a spacelike separation, they are not able to communicate, and we are not able to use the argument above. 
Hence in this case it is clear that \(B\) can choose his measurement freely. Next, let us have a brief look at a kind of gauge group connected to relativity theory. Let us take the point of view that the combined set of laws of physics constitutes our accessible 'variables'. Since the Lagrangian of some system determines the dynamics, this must mean in particular that the Lagrangian is accessible. As noted in Section 5, the gauge group is the group \(H\) where all the accessible variables are constant. The group \(K\) acting on \(\phi=(\theta,\xi)\) can be taken to be a four-dimensional version of the Weyl-Heisenberg group. This group is transitive and has a trivial isotropy group. As a starting point, I take the Lagrangian \(\lambda\) to be a function of \(\phi\), assumed to be permissible with respect to \(K\). Then this induces a group \(L\) acting on \(\lambda\). The property of permissibility implies the following: The inverse image of the function \(\lambda(\cdot)\) induces a subgroup \(K^{L}\) of \(K\), and this, following the definition in Section 5, will be the relevant gauge group. Informally we can write \(K^{L}=K/L\). However, \(\theta\) and \(\xi\) are also accessible variables, and the groups associated with these are two four-dimensional translation groups \(T\) and \(S\). So this should imply that the resulting gauge group can be taken to be \(H=K/(L\otimes T\otimes S)\). I assume here that both \(\theta(\cdot)\) and \(\xi(\cdot)\) are permissible with respect to \(K\). More realistic gauge theories assume a Lagrangian which also depends on space and time derivatives of the field \(\phi\). Then one has to introduce the larger space \(\Omega_{\psi}\) defined in Section 5, and let \(K\) be a group acting upon this space. If again the Lagrangian \(\lambda(\cdot)\) can be seen as a permissible function with respect to \(K\), the gauge group can be defined as before. This gives a global gauge theory. In the arguments above I have referred to special relativity theory. But parts of the arguments can be extended to a more general case. ## 8 General relativity; a summary The core of general relativity theory is the equivalence principle: Seen locally, an observer in a closed box is not able to distinguish between the effect of gravity and the effect of acceleration. One consequence of this is that, locally, one can always choose at least one coordinate system such that, with respect to this coordinate system, the laws of special relativity hold. But this can in principle be used to construct a local gauge theory for general relativity, also. Let again \(K\) be a group acting upon the space \(\Omega_{\psi}\), and let \(K^{L}=K/L\) now be the subgroup where the Lagrangian density \(\lambda\) is constant. Fix a time-and-space vector \(\tau=\theta\), and let \(S\) be the translation group in four-momentum \(\xi\). Assume that both \(\lambda(\cdot)\) and \(\xi(\cdot)\) are permissible with respect to \(K\). Then a local gauge group may be taken as \(H=K/(L\otimes S)\). A technical problem here might be to construct a version of general relativity based upon waves as input instead of spacetime. (Energy and momentum are determined from the wave.) To proceed with this problem, let us assume a theory based upon waves with frequency \(f\) and wave vector \(k\), equivalently on the energy \(E=hf\) and the momentum \(p=hk\), hence based on \(\xi=(E,p_{x},p_{y},p_{z})\). Assume that the Lagrangian density can be found as a function of \(\xi\) and the partial derivatives with respect to \(\xi\). 
This gives a local gauge group for general relativity as above. The gauge theory of general relativity is a continuum field theory. I will not go into details here, but refer to the literature. Central to general relativity is the metric tensor \(\mathbf{g}=\{g_{\alpha\beta}\}\). In the special local coordinate system where the laws of special relativity hold, this can be taken as \[\mathbf{g}=\mathbf{\eta}=\left(\begin{array}{cccc}-1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right). \tag{12}\] In a general coordinate system, say \(\{\theta\}\), the metric matrix \(\mathbf{g}=(g_{\alpha\beta})\) can be any symmetric matrix with trace 2 which can be changed by a local coordinate transformation to \(\eta\). For a change of coordinates from \(\{\theta\}\) to \(\{\theta^{\prime}\}\) in this space, introduce the Jacobian tensor \(\mathbf{\Lambda}=\{\Lambda^{\alpha}_{\ \beta^{\prime}}\}=\partial\theta/\partial\theta^{\prime}\). Then \[\mathbf{g^{\prime}}=\mathbf{\Lambda}^{T}\mathbf{g}\mathbf{\Lambda}, \tag{13}\] or, by using the common summation convention: \[g^{\prime}_{\alpha^{\prime}\beta^{\prime}}=\Lambda^{\mu}_{\ \alpha^{\prime}}g_{\mu\nu}\Lambda^{\nu}_{\ \beta^{\prime}}. \tag{14}\] For a moving particle with coordinates \(\theta\), it is convenient to introduce a proper time \[d\tau^{2}=-g_{\alpha\beta}d\theta^{\alpha}d\theta^{\beta}, \tag{15}\] and also the four-velocity \(U_{\alpha}=d\theta_{\alpha}/d\tau\) and the momentum \(p_{\alpha}=mU_{\alpha}\), where \(m\) is the mass. For any vector \(V\) in any coordinate system we have \[V_{\alpha}=g_{\alpha\beta}V^{\beta},\quad V^{\alpha}=g^{\alpha\beta}V_{\beta} \tag{16}\] and similarly for tensors. Furthermore, by change of coordinates \[V^{\prime}_{\alpha^{\prime}}=\Lambda_{\alpha^{\prime}}^{\ \beta}V_{\beta}. \tag{17}\] The partial derivative of a vector is denoted by a comma, and is defined by the partial derivative of each component: \[V_{\alpha,\mu}=\partial V_{\alpha}/\partial\theta^{\mu}. \tag{18}\] In a similar way one can define the derivative of any scalar or tensor. The derivative of the determinant \(g\) of the matrix \((g_{\alpha\beta})\) is \[g_{,\mu}=gg^{\alpha\beta}g_{\beta\alpha,\mu}. \tag{19}\] Basis vectors are denoted by \(e_{\alpha}\), and derivatives of these by \(e_{\alpha,\nu}=\partial e_{\alpha}/\partial\theta^{\nu}\). This leads to the important Christoffel symbol defined by \[e_{\alpha,\beta}=\Gamma^{\mu}_{\ \alpha\beta}e_{\mu}. \tag{20}\] One can show [24] that one always has \(\Gamma^{\mu}_{\ \alpha\beta}=\Gamma^{\mu}_{\ \beta\alpha}\), and \[\Gamma^{\mu}_{\ \alpha\beta}=\frac{1}{2}g^{\mu\nu}(g_{\nu\alpha,\beta}+g_{\nu\beta,\alpha}-g_{\alpha\beta,\nu}). \tag{21}\] For any vector or tensor one can define covariant differentiation taking into account the derivatives of the basis vectors. For instance \[T^{\alpha\beta}_{\ \ ;\gamma}=T^{\alpha\beta}_{\ \ ,\gamma}+\Gamma^{\alpha}_{\ \mu\gamma}T^{\mu\beta}+\Gamma^{\beta}_{\ \mu\gamma}T^{\alpha\mu}, \tag{22}\] while for a vector \[V^{\alpha}_{\ ;\mu}=V^{\alpha}_{\ ,\mu}+\Gamma^{\alpha}_{\ \mu\nu}V^{\nu}. \tag{23}\] The process of going from commas to semicolons is important in deriving equations of general relativity. For instance, if we know for a vector \(V\) that \(V^{\mu}_{\ ,\mu}=0\) holds in the special coordinate system determined locally by the metric tensor \(\eta_{\alpha\beta}\), then this is equivalent in this system to \(V^{\mu}_{\ ;\mu}=0\), which can be generalized to any coordinate system. 
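The formulas above are easy to evaluate mechanically for a concrete metric. The following is a minimal sketch, added here only as an illustration of equation (21) and not part of the original treatment, computing the Christoffel symbols symbolically with SymPy; the round 2-sphere metric is an assumed toy example rather than anything used in this paper.

```python
# Illustrative sketch of equation (21):
# Gamma^mu_{ab} = 1/2 g^{mu nu} (g_{nu a,b} + g_{nu b,a} - g_{ab,nu})
# evaluated for an assumed example metric, the round unit 2-sphere.
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]

# Metric of the unit 2-sphere: ds^2 = dtheta^2 + sin^2(theta) dphi^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
n = len(coords)

Gamma = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
for mu in range(n):
    for a in range(n):
        for b in range(n):
            Gamma[mu][a][b] = sp.simplify(
                sp.Rational(1, 2) * sum(
                    g_inv[mu, nu] * (sp.diff(g[nu, a], coords[b])
                                     + sp.diff(g[nu, b], coords[a])
                                     - sp.diff(g[a, b], coords[nu]))
                    for nu in range(n)))

# Print the non-zero symbols
for mu in range(n):
    for a in range(n):
        for b in range(n):
            if Gamma[mu][a][b] != 0:
                print("Gamma[%s][%s][%s] =" % (coords[mu], coords[a], coords[b]),
                      Gamma[mu][a][b])
```

For the 2-sphere this reproduces the familiar non-zero symbols \(\Gamma^{\theta}_{\ \phi\phi}=-\sin\theta\cos\theta\) and \(\Gamma^{\phi}_{\ \theta\phi}=\Gamma^{\phi}_{\ \phi\theta}=\cot\theta\).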
An important special tensor is the Riemann curvature tensor \(R\), which describes how a vector changes under parallel transport around a loop. It can be defined [24] as \[R^{\alpha}_{\ \beta\mu\nu}=-\Gamma^{\alpha}_{\ \beta\mu,\nu}+\Gamma^{\alpha}_{\ \beta\nu,\mu}+\Gamma^{\alpha}_{\ \sigma\mu}\Gamma^{\sigma}_{\ \beta\nu}-\Gamma^{\alpha}_{\ \sigma\nu}\Gamma^{\sigma}_{\ \beta\mu}. \tag{24}\] Alternatively it can be defined in terms of second derivatives of the metric matrix \(g\). The tensor \(R\) is zero for a flat manifold. Contraction of indices in \(R\) can be defined by using the summation convention. The Ricci tensor and the Ricci scalar are defined by \[R_{\alpha\beta}=R^{\mu}_{\ \alpha\mu\beta},\quad R=g^{\mu\nu}R_{\mu\nu}. \tag{25}\] The Riemann curvature tensor satisfies some simple identities and also the Bianchi identities: \[R_{\alpha\beta\mu\nu;\lambda}+R_{\alpha\beta\lambda\mu;\nu}+R_{\alpha\beta\nu\lambda;\mu}=0. \tag{26}\] The Einstein tensor is defined by \[G^{\alpha\beta}=R^{\alpha\beta}-\frac{1}{2}g^{\alpha\beta}R, \tag{27}\] and the Einstein field equations (with a vanishing cosmological constant) are now simply \[G^{\alpha\beta}=8\pi T^{\alpha\beta}, \tag{28}\] where \(\mathbf{T}=(T^{\alpha\beta})\) is the so-called stress-energy tensor. In the frame with metric \(\eta\), \(T^{\alpha\beta}\) is defined [24] as the flux of component \(\alpha\) of the four-momentum \(\xi=(E,p_{x},p_{y},p_{z})\) across a surface of constant component \(\beta\) of \(\theta=(t,x,y,z)\). In particular, \(T^{00}\) can be interpreted as the energy density. Note that this definition assumes that both \(\theta\) and \(\xi\) are accurately known, something that is in contradiction to quantum theory. The definition can be extended to all coordinate systems by using a generalization of equations (14) and (17): \[T^{\prime\alpha^{\prime}\beta^{\prime}}=\Lambda^{\alpha^{\prime}}_{\ \alpha}T^{\alpha\beta}\Lambda_{\beta}^{\ \beta^{\prime}}, \tag{29}\] where again \(\mathbf{\Lambda}=\partial\theta/\partial\theta^{\prime}\). From the Bianchi identities one can show \[T^{\alpha\beta}_{\ \ ;\beta}=G^{\alpha\beta}_{\ \ ;\beta}=0, \tag{30}\] which is the equation of local conservation of energy and momentum. ## 9 General relativity; two different observers Both \(\theta\) and \(\xi\) are four-vectors, and by a change of coordinate system, their components change according to equation (17). A particular case of this is the change of observer. One observer may use the coordinates \(\theta\), the other observer the coordinates \(\theta^{\prime}\). The crucial tensor is then given by \(\mathbf{\Lambda}=\{\Lambda^{\alpha}_{\ \beta^{\prime}}\}=\partial\theta/\partial\theta^{\prime}\). But in addition, each observer must make a choice of what variable to focus on in his experiments. By Heisenberg's uncertainty relation he cannot choose both \(\theta\) and \(\xi\), but must concentrate on one of them. This choice is made independently for each observer. Let observer Alice have the choice between \(\theta\) and \(\xi\), while observer Bob has the choice between \(\theta^{\prime}\) and \(\xi^{\prime}\). Let a new observer Charlie observe both Alice and Bob. We can always arrange it in such a way that Alice and Bob are in the past 'light cone' of Charlie. So Charlie has all the data of all experiments made by Alice and Bob. He can try to make up a joint model describing all these experiments. According to the analysis made in [9], the actor Charlie will be limited in his attempts to model the situation. 
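Before continuing, the change-of-observer transformations just discussed (equations (13), (17) and (29)) can be made concrete numerically. The sketch below is an illustration added for this discussion, not taken from the references: it uses a Lorentz boost along the \(x\)-axis as the transformation between two observers, with units where \(c=1\), and the particular four-vectors and stress-energy tensor are assumed example values.

```python
# Numerical sketch of how four-vectors and the stress-energy tensor change
# between two observers related by a Lorentz boost (units with c = 1).
import numpy as np

v = 0.6                                   # assumed relative velocity
gamma = 1.0 / np.sqrt(1.0 - v**2)

# Boost matrix relating the components measured by the two observers
Lam = np.array([[gamma, gamma * v, 0, 0],
                [gamma * v, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # the metric of equation (12)

theta = np.array([1.0, 0.5, 0.0, 0.0])    # an assumed event (t, x, y, z)
xi = np.array([2.0, 1.0, 0.0, 0.0])       # an assumed four-momentum (E, px, py, pz)

theta_prime = Lam @ theta                 # components for the second observer
xi_prime = Lam @ xi

# The interval -g_{ab} theta^a theta^b is the same for both observers
print(-theta @ eta @ theta, -theta_prime @ eta @ theta_prime)

# A dust-like stress-energy tensor (energy density only) transforms with two
# factors of the boost matrix, in the spirit of equation (29)
T = np.diag([1.0, 0.0, 0.0, 0.0])
T_prime = Lam @ T @ Lam.T
print(T_prime)
```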
In agreement with the simple quantum model of [6] assume that all accessible variables are functions of some underlying inaccessible variable \(\phi\). An accessible variable \(\eta\) is called maximal if it cannot be extended to a wider accessible variable. Two maximal accessible variables \(\eta\) and \(\zeta\) are said to be related if \(\eta=f(\phi)\) and \(\zeta=f(k\phi)\) for a fixed function \(f\) and some transformation \(k\) in \(\phi\)-space. Two variables that can not be related in this way, are said to be essentially different. In [7], important elements of quantum mechanics are derived from such a situation assuming two related maximal accessible variables. The full derivation here relies on a group \(K\) acting on \(\phi\)-space, and the concept of permissibility (; see Definition 1). In [9, 10] the following theorem is proved: **Theorem 3**_Assume that an observer Charlie has two maximal accessible related variables \(\eta=f(\phi)\) and \(\zeta=f(k\phi)\) in his mind. Assume that \(k\in K\) for some group \(K\), and that \(\eta\) and \(\zeta\) are permissible with respect to \(K\). Then Charlie can not have in his mind another maximal accesible variable which is related to \(\eta\) but essentially different from \(\zeta\)._ Going back to the situation above, we then have: Look at particle pairs emerging in the vicinity of a black hole. Alice is all the time able to observe the particles that are escaping and leaving the region (the sources of Hawking radiation; see below), while Bob is only able to study the particles absorbed by the black hole. This is of course an ideal thought experiments, but much of the literature in this area is based on thought experiments. Assume further that the particle pairs are entangled with respect to the two properties \(\eta\) and \(\zeta\), where \(\eta\) is a fixed function of (ideal) position \(\theta\), while \(\zeta\) is fixed function of (ideal) momentum \(\xi\). Both Alice and Bob are interested in finding some measure of entropy related to their observations. As discussed in Section 6, entropy is closely related to Shannon information, and Shannon information may depend upon which variable we have a probability distribution over. So, by observing many particles, Alice can have two measures of entropy, one based upon the observations \(\eta\), and another based on the complementary observations \(\zeta\). Similarly, Bob has two measures of entropy, one based on his observations \(\eta^{\prime}=\eta^{\prime}(\theta^{\prime})\) on the absorbed particles, and one based upon his complementary observations \(\zeta^{\prime}=\zeta^{\prime}(\xi^{\prime})\). Assume now that Alice and Bob are space like separated, so that they cannot communicate, but both are observed by Charlie. Then Charlie, in his modelling attempts can include both the variables \(\theta\) and \(\xi^{\prime}\), but he is then not able to include \(\theta^{\prime}\), and he is not able to include \(\xi\). At the same time, from results of [7], the observer Alice can make a quantum model based on her two maximal variables \(\theta\) and \(\xi\), while Bob can make a quantum model based on \(\theta^{\prime}\) and \(\xi^{\prime}\). The predictions from these models may be sent to Charlie. 
From this, he has probability statements, based on quantum theory, both for \(\theta^{\prime}\) and for \(\xi\), in fact for all the maximal variables involved. From this he can make a joint quantum model for all observations \(\eta,\zeta,\eta^{\prime}\) and \(\zeta^{\prime}\), and from this calculate the von Neumann entropy \[S=-k_{B}\mbox{trace}(\rho\mbox{ln}(\rho)). \tag{31}\] This formula depends in a crucial way on the density matrix as perceived by Charlie. It can be written as \(\rho=\rho_{A}\otimes\rho_{B}\), where \(\rho_{A}\) is the density matrix as perceived by Alice, and \(\rho_{B}\) is the density matrix as perceived by Bob. ## 10 General relativity; Schwarzschild geometry For a spherically symmetric system, like a star or a black hole, it is convenient to change from coordinates \((t,x,y,z)\) to spherical coordinates \((t,r,\theta,\phi)\), where \(x=r\mbox{sin}(\theta)\mbox{cos}(\phi)\), \(y=r\mbox{sin}(\theta)\mbox{sin}(\phi)\) and \(z=r\mbox{cos}(\theta)\). In the region outside the star/black hole, one can then show [23] that the most general metric tensor depends on a mass \(M\) and is given by \[g_{tt}=-(1-\frac{2GM}{c^{2}r}),\] \[g_{rr}=(1-\frac{2GM}{c^{2}r})^{-1},\] \[g_{\theta\theta}=r^{2},\] \[g_{\phi\phi}=r^{2}\mbox{sin}^{2}(\theta),\] and with all cross-terms vanishing. Here, \(G\) is Newton's gravity constant, and \(c\) is the velocity of light. This metric is called the Schwarzschild metric. I will concentrate on the black hole case, where the metric has a singularity (the horizon) for \(r=2GM/c^{2}\), and where \(g_{tt}\) and \(g_{rr}\) change sign in the interior \(r<2GM/c^{2}\). In my terminology, the coordinates \((t,r,\theta,\phi)\) are inaccessible variables in the interior of a black hole. There is no mechanism by which these variables can be measured by an external observer. So I will concentrate on the outside region \(r>2GM/c^{2}\), where the coordinates are accessible. ## 11 On the theories of black holes An important new insight into a possible theory combining quantum mechanics and general relativity came when Hawking [25] argued that black holes create and emit particles as if they were hot bodies; see also the historical overview by Hawking and Israel [26]. In [25] it is proposed that quantum mechanical effects cause black holes to create and emit particles as if they were hot bodies with temperature \(\hbar\kappa/2\pi k_{B}\), where \(\kappa\) is the surface gravity of the black hole and \(k_{B}\) is Boltzmann's constant. The generalized entropy of the universe can, according to Hawking, be taken as \(S+\frac{k_{B}c^{3}}{4G\hbar}A\), where \(S\) is the entropy outside black holes, \(A\) is the sum of the surface areas of all black holes, and \(G\) is Newton's gravity constant. The same formula can be used for the generalized entropy associated with a particular black hole, where \(A\) now is the area of the horizon of that particular black hole, and \(S\) is the entropy outside this black hole. This generalized entropy never decreases. As a side remark, the fact that the entropy of a black hole is proportional to the area of its horizon is related to the holographic principle proposed by 't Hooft and Susskind [27, 28]: To describe particle states in the vicinity of black holes, a two-dimensional function is required, the distribution over a two-dimensional coordinate on the horizon. I will not go into the details of black hole thermodynamics here; these are reviewed in [29]. The entropy of Hawking radiation is discussed in detail in the recent article [30]. 
In this article, the Central Dogma of black holes is emphasized: _As seen from the outside, a black hole can be described in terms of a quantum system with entropy \(\frac{k_{B}c^{3}}{4G\hbar}A\) that evolves unitarily under time evolution._ I will base my treatment of black holes partly on the latest theoretical developments as they are discussed in [29] and in a recent article [30] in Scientific American. According to [30, 31], both the so-called firewall paradox and the information paradox (black holes had seemed to contradict the basic physical principle that information is never lost) can be solved by considering a theory of wormholes: As a consequence of general relativity there is a non-vanishing probability that different black holes may be connected, a mechanism that was proposed already by Einstein and Rosen [32] in 1935. I will not speak against these theoretical results, but I will take a closer look at the information paradox. As described in Section 6, information and entropy are two sides of the same coin, and the basic physical principle is that entropy never decreases. And, as I see it, the amount of (Shannon) information in a physical system depends in a crucial way upon the observer(s) of the system. ## 12 Discussion The purpose of this paper has been to sketch a new, and in my opinion quite promising, attempt to understand parts of quantum theory and general relativity theory from a common basis. One important background for us is that relativity theory was developed by the mind of a single person, while quantum theory, in the way that it has existed up to now, is a patchwork of contributions from many different persons. Empirically, both theories have been verified to an impressive degree, but the foundation and interpretation of quantum theory have been the source of much confusion. The book [5] is an attempt to develop the epistemic side of a new foundation, and from this propose a new interpretation: In every application, but also more generally, it is connected to the mind of a single actor or to the joint mind of a group of communicating actors. One important background for the development of [5] has been that, in my opinion, there has been too little communication between researchers working on the foundations of quantum theory and researchers from other communities, say the statistics community. My book has been an attempt to develop elements of a future common culture. What is culture? According to the author and philosopher Ralph D. Stacey, it is a set of attitudes, opinions and convictions that a group of people share, about how one should act towards each other, how things should be evaluated and done, which questions are important, and which answers may be accepted. The most important elements in a culture are unconscious, and cannot be forced upon one from the outside. One hope now is that results like Theorem 1 and Theorem 2 above, or similar approaches, may at some point in the future become part of a common culture among researchers in quantum foundations and in theoretical statistics. If this happens, I feel that it should also be easier to arrive at some joint understanding of physical theories describing the microscopic world and physical theories describing the macroscopic world. The foundation of quantum theory described here seems to be particularly relevant to such an understanding. This is discussed elsewhere [5, 6].
2306.02419
Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL
Reinforcement learning agents tend to develop habits that are effective only under specific policies. Following an initial exploration phase where agents try out different actions, they eventually converge onto a particular policy. As this occurs, the distribution over state-action trajectories becomes narrower, leading agents to repeatedly experience the same transitions. This repetitive exposure fosters spurious correlations between certain observations and rewards. Agents may then pick up on these correlations and develop simplistic habits tailored to the specific set of trajectories dictated by their policy. The problem is that these habits may yield incorrect outcomes when agents are forced to deviate from their typical trajectories, prompted by changes in the environment. This paper presents a mathematical characterization of this phenomenon, termed policy confounding, and illustrates, through a series of examples, the circumstances under which it occurs.
Miguel Suau, Matthijs T. J. Spaan, Frans A. Oliehoek
2023-06-04T17:51:37Z
http://arxiv.org/abs/2306.02419v2
# Bad Habits: Policy Confounding and ###### Abstract Reinforcement learning agents may sometimes develop habits that are effective only when specific policies are followed. After an initial exploration phase in which agents try out different actions, they eventually converge toward a particular policy. When this occurs, the distribution of state-action trajectories becomes narrower, and agents start experiencing the same transitions again and again. At this point, spurious correlations may arise. Agents may then pick up on these correlations and learn state representations that do not generalize beyond the agent's trajectory distribution. In this paper, we provide a mathematical characterization of this phenomenon, which we refer to as policy confounding, and show, through a series of examples, when and how it occurs in practice. ## 1 Introduction _This morning, I went to the kitchen for a coffee. When I arrived,_ _I forgot why I was there, so I got myself a coffee--_ How often do you do something without paying close attention to your actions? Have you ever caught yourself thinking about something else while washing the dishes, making coffee, or cycling? Acting out of habit is a vital human skill as it allows us to concentrate on more important matters while carrying out routine tasks. You can commute to work while thinking about how to persuade your boss to give you a salary raise or prepare dinner while imagining your next holidays in the Alps. However, unlike in the above example, habits can also lead to undesired outcomes when we fail to recognize that the context has changed. You may hop in your car and start driving towards work even though it is a Sunday and you actually want to go to the grocery store, or you may flip the light switch when leaving a room even though the lights are already off. Here we show how reinforcement learning (RL) agents may also suffer from this phenomenon. Agents can exploit spurious correlations (Pearl et al., 2016) between observed variables and rewards to build simple habits that require little effort to carry out. Such correlations are induced by the agent's policy and hence can be relied upon so long as said policy is followed consistently. However, as we shall see, even minor trajectory deviations can result in catastrophic outcomes. Ideally, the agent should only pick up on correlations that are stable across policies. That is, independently of the trajectories being followed. We refer to this objective as _out-of-trajectory_ (OOT) generalization. ContributionsThis paper characterizes _policy confounding_, a term we use to name the above-described phenomenon. To do so, we introduce a mathematical framework that helps us investigate different types of state representations. Moreover, we provide a series of clarifying examples that illustrate how, as a result of policy confounding, the agent may learn representations based on spurious correlations that do not guarantee OOT generalization. Unfortunately, we do not have a complete answer for how to prevent policy confounding. However, we suggest a few off-the-shelf solutions that may help mitigate its effects. We hope this paper will create awareness among the RL community about the risks of policy confounding and inspire further research on this topic. ## 2 Example: Frozen T-Maze We now provide an example to illustrate the phenomenon of policy confounding and motivate the need for careful analysis. The environment shown in Figure 1 is a variant of the popular T-Maze environment (Bakker, 2001). 
The agent receives a binary signal, green or purple, at the start location. Then, it needs to move to the right and reach the correct goal at the end of the maze (ignore the blue cells and the black vertical arrow in the middle of the maze for now). The agent obtains a reward of \(+1\) for moving to the green (purple) goal when having received the green (purple) signal and a reward of \(-1\) otherwise. At first sight, one may think that the only way the agent can solve the task is if, at every cell along its trajectory, it can recall the initial signal. However, once the agent figures out the shortest path to each of the two goals (depicted by the green and purple arrows), the agent may safely forget the initial signal. The agent knows that whenever it is at any of the cells along the green (purple) path, it must have received the green (purple) signal. Hence, it can simply move toward the right goal on the basis of its own location. Sticking to this habit is optimal so long as the agent commits to always taking these two paths.1 It is also essential that the environment's dynamics remain the same since even the slightest change in the agent's trajectories may erase the spurious correlation induced by the agent's policy between the agent's location and the correct goal. To show that this actually occurs in practice, we train agents in the original environment (train env) and evaluate them on a variant of the same (eval env), where some ice (blue) has appeared in the middle of the maze. The ice makes the agent slip from the upper cell to the bottom cell and vice versa. The plot on the right of Figure 1 shows the return averaged over 10 trials. The performance drop in the evaluation environment (blue curve) suggests that the agents' policies do not generalize. The ice confuses the agents, who, after being pushed away from their preferred trajectories, can no longer select the right goal. More details about this experiment are provided in Section 7. Footnote 1: Note that the two paths highlighted in Figure 1 are not the only optimal paths. However, for the agent to be able to ignore the initial signal, it is important that the paths do not overlap. ## 3 Related Work The presence of spurious correlations in the training data is a well-studied problem in machine learning. These correlations often provide convenient shortcuts that a model can exploit to make predictions (Beery et al., 2018). However, the performance of a model that relies on them may significantly deteriorate under different data distributions (Quionero-Candela et al., 2009; Arjovsky, 2021). Langosco et al. (2022) show that RL agents may use certain environment features as proxies for choosing their actions. These features, which show only in the training environments, happen to be spuriously correlated with the agent's objectives. In contrast, we demonstrate that, as a result of policy confounding, agents may directly take part in the formation of spurious correlations. A few prior works have already reported empirical evidence of particular forms of policy confounding, showing that in deterministic environments, agents can rely on information that correlates with the agent's progress in an episode to determine the optimal actions. This strategy is effective because under fixed policies, features such as timers (Song et al., 2020), agent's postures (Lan et al., 2023), or previous action sequences (Machado et al., 2018) can be directly mapped to the agent's state. 
These works provide various hypotheses to justify their experimental observations. Here, we contribute an overarching theory that explains the underlying causes and mechanisms behind these results, along with a series of examples illustrating other types of policy confounding. Please refer to Appendix C for more details on related work. Figure 1: Left: An illustration of the Frozen T-Maze environment. Right: Learning curves when evaluated in the Frozen T-Maze environment with (blue curve) and without (red curve) ice. Preliminaries Although, as we shall see in the experiments, policy confounding can occur even when states are fully observable, in order to understand the idea, it is useful to formulate the setting as partially observable (Kaelbling et al., 1996). Moreover, since we model values and policies using (parametric) functions rather than tables, we use state variables or state factors to represent the different states of the environment (Boutilier et al., 1999). **Definition 1** (FPOMDP).: A factored partially observable Markov decision process (FPOMDP) is a tuple \(\langle S,F,A,T,R,O,X,Y\rangle\) where \(S\) is the set of states, \(F\) is the set of state variables (or state factors) \(F=\{f^{1},...,f^{l}\}\) so that every state \(s_{t}\in S=\times_{i=1}^{l}f^{i}\) is represented as a vector \(s=\langle f^{1},...,f^{l}\rangle\), \(A\) is the set of actions \(a_{t}\), \(T(s_{t+1}\mid s_{t},a_{t})\) is the transition probability function, \(R(s_{t},a_{t})\) is the reward function which determines the immediate reward \(r_{t}\), and \(O(s_{t})\) is the observation or emission function, which selects a subset of observed variables \(X_{t}\subseteq F\) (which may be different depending on the state \(s_{t}\)), and discards the hidden variables \(Y_{t}=F\setminus X_{t}\), such that the agent's observations \(o_{t}\in\times_{i=1}^{m_{t}}X_{t}\) are represented as vectors \(o_{t}=\langle x_{t}^{1},...,x_{t}^{m_{t}}\rangle\) with \(m_{t}\leq l\). In this setting, the agent must keep track of past actions and observations to make the right action choices (Singh et al., 1994). The optimal policy is a mapping from the past action-observation history, \(h_{t}=\langle o_{1},a_{1},...,a_{t-1},o_{t}\rangle\), to a probability distribution \(\Delta(A)\) over actions \(A\), \(\pi:H\rightarrow\Delta(A)\), where \(H\) is the set of all possible histories of any length. We use the random variable \(\tau=\langle o_{1},a_{1},...,a_{T-1},o_{T}\rangle\) to denote the agent's trajectory in an episode, with \(T\) being the episode's horizon. Knowing that the full history constitutes a Markov representation, we can reformulate the FPOMDP into a factored history MDP (FHMDP). 
**Definition 2** (FHMDP).: A factored history Markov decision process (FHMDP) is a tuple \(\langle H,\Theta,A,T_{h},R_{h}\rangle\), where \(H\) is the set of all possible histories of any length, \(\Theta\) denotes the set of variables in the history, with \(\Theta_{t}\) denoting the set of actions \(A\) and observation variables \(X\) in a history of length \(t\), \(\Theta_{t}=\{x_{1}^{1},...,x_{1}^{m_{1}},a_{1},...,x_{t}^{1},...,x_{t}^{m_{t}},a_{t}\}\), such that we write their Cartesian product, \(H_{t}=\{x_{1}^{1}\times...\times x_{1}^{m_{1}}\times a_{1}\times...\times x_{t}^{1}\times...\times x_{t}^{m_{t}}\times a_{t}\}\), simply as \(H_{t}=\times\Theta_{t}\), \[T_{h}(h_{t+1}=\langle h_{t},a_{t},o_{t+1}\rangle\mid h_{t},a_{t})\triangleq \sum_{s_{t+1},s_{t}\in S}O(s_{t+1})T(s_{t+1}\mid s_{t},a_{t})\Pr(s_{t}\mid h_ {t})\] is the history transition function,2 and Footnote 2: Note that we sum over \(s_{t+1}\) because multiple states may emit the same observation \(o_{t+1}\). \[R_{h}(h_{t},a_{t})\triangleq\sum_{s_{t}\in S}R(s_{t},a_{t})\Pr(s_{t}\mid h_{t})\] is the history reward function. This formulation is convenient because it allows solving the POMDP using MDP methods. Yet, due to combinatorial explosion, learning a policy that conditions on the full history is generally infeasible. Fortunately, in many problems, not all the information is strictly relevant; the agent can usually find compact representations of the history that are sufficient for solving the task (McCallum, 1995). ## 5 History representations Factored representations are useful because they readily define relationships between histories (states). Histories can be compared to one another by looking at the individual values the different variables take. Removing some of the variables in \(\Theta_{t}\) has the effect of grouping together those histories that share the same values for the remaining ones. Thus, in contrast with most of the theoretical work in RL, which treats histories (states) as independent entities, we can define history (state) abstractions at the variable level instead of doing so at the history (state) level (Li et al., 2006). **Definition 3** (History representation).: A history representation is a function \(\Phi:H_{t}\rightarrow\bar{H}_{t}\), with \(H_{t}=\times\Theta_{t}\), \(\bar{H}_{t}=\times\bar{\Theta}_{t}\), and \(\bar{\Theta}_{t}\subseteq\Theta_{t}\). Intuitively, a history representation \(\Phi(h_{t})\) is a context-specific projection of a history \(h_{t}\in H_{t}=\times\Theta_{t}\) onto a lower dimensional space \(\bar{H}_{t}=\times\bar{\Theta}_{t}\) defined by a subset of its variables, \(\bar{\Theta}_{t}\subseteq\Theta_{t}\). We use \(\{h_{t}\}^{\Phi}=\{h_{t}^{\prime}\in H_{t}:\Phi(h_{t}^{\prime})=\Phi(h_{t})\}\) to denote \(h_{t}\)'s equivalence class under \(\Phi\). ### Markov history representations As noted in Section 4, the agent should strive for history representations with few variables. Yet, not all history representations will be sufficient to learn the optimal policy; some may exclude variables that contain useful information for the task at hand. 
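Before turning to which representations are sufficient, Definition 3 can be illustrated with a short sketch. The code below is only an illustration we add here (the variable names and the example history are hypothetical, not from the paper): a history representation is simply a projection that keeps a chosen subset of the history variables, and histories that agree on the kept variables fall into the same equivalence class \(\{h_t\}^{\Phi}\).

```python
# Illustrative sketch of Definition 3: a history representation as a projection
# onto a subset of the history variables. Names below are assumptions.
from typing import Dict, Hashable, Tuple

History = Dict[str, Hashable]   # a flat history: variable name -> value

def make_representation(kept_variables):
    """Return Phi, which keeps only the chosen variables of a history."""
    def phi(history: History) -> Tuple:
        return tuple((v, history[v]) for v in kept_variables if v in history)
    return phi

# Example: a short history with an observed signal x^1, the agent's location x^2,
# and the first action a_0.
h = {"x1_0": "green", "x2_0": (0, 1), "a_0": "up", "x1_1": None, "x2_1": (1, 2)}

phi_full = make_representation(["x1_0", "x2_0", "a_0", "x1_1", "x2_1"])
phi_loc = make_representation(["x2_1"])   # keeps only the current location

print(phi_full(h))
print(phi_loc(h))  # all histories with the same current location map to this value
```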
**Definition 4** (Markov history representation).: A history representation \(\Phi(h_{t})\) is said to be Markov if, for all \(h_{t},h_{t+1}\in H\), \(a_{t}\in A\), \[R_{h}(h_{t},a_{t})=R_{h}(\Phi(h_{t}),a_{t})\quad\text{and}\quad\sum_{h_{t+1}^{ \prime}\in\{h_{t+1}\}^{\Phi}}T_{h}(h_{t+1}^{\prime}\mid h_{t},a_{t})=\Pr(\Phi(h _{t+1})\mid\Phi(h_{t}),a_{t}),\] where \(R_{h}(\Phi(h_{t}),a_{t})=\{R(h_{t}^{\prime},a_{t})\}_{h_{t}^{\prime}\in\{h_{t} \}^{\Phi}}\) is the reward at any \(h_{t}^{\prime}\in\{h_{t}\}^{\Phi}\). The above definition is equivalent to the notion of bisimulation (Dean and Givan, 1997; Givan et al., 2003) or model-irrelevance state abstraction (Li et al., 2006). Representations satisfying these conditions are guaranteed to be equivalent to the original representation. That is, for any given policy and initial history, the expected return (i.e., cumulative reward; Sutton and Barto, 2018) is the same when conditioning on the full history or on the Markov history representation. Note that a history representation \(\Phi\) such that \(\Phi(h_{t})=h_{t}\), for all \(h_{t}\in H\), is, in itself, Markov. **Definition 5** (Minimal history representation).: A history representation \(\Phi^{*}:H_{t}\to\bar{H}_{t}^{*}\) with \(\bar{H}_{t}^{*}=\times\bar{\Theta}_{t}^{*}\) is said to be _minimal_, if all other history representations \(\Phi:H_{t}\to\bar{H}_{t}\) with \(\bar{H}_{t}=\times\bar{\Theta}_{t}\) and \(|\bar{\Theta}_{t}|\subset|\bar{\Theta}_{t}^{*}|\), for at least one \(h_{t}\in H\), are not Markov. In other words, \(\Phi_{t}^{*}(h_{t})\) is _minimal_ when none of the remaining variables can be removed while the representation remains Markov. Hence, we say that a minimal history representation \(\Phi_{t}^{*}(h_{t})\) is a sufficient statistic of the full history. **Definition 6** (Superfluous variable).: Let \(\{\bar{\Theta}_{t}^{*}\}_{\cup\Phi^{*}}\) be the union of variables in all possible minimal history representations. A variable \(\Theta_{t}^{i}\in\Theta_{t}\) is said to be superfluous, if \(\Theta_{t}^{i}\notin\{\bar{\Theta}_{t}^{*}\}_{\cup\Phi^{*}}\). ### \(\pi\)-Markov history representations Considering that the agent's policy will rarely visit all possible histories, the notion of Markov history representation seems excessively strict. We now define a relaxed version that guarantees the representation to be Markov when a specific policy \(\pi\) is followed. **Definition 7** (\(\pi\)-Markov history representation).: A history representation \(\Phi^{\pi}(h_{t})\) is said to be \(\pi\)-Markov if, for all \(h_{t},h_{t+1}\in H^{\pi}\), \(a_{t}\in\mathrm{supp}(\pi(\cdot\mid h_{t}))\), \[R_{h}(h_{t},a_{t})=R_{h}^{\pi}(\Phi^{\pi}(h_{t}),a_{t})\quad\text{and}\quad \sum_{h_{t+1}^{\prime}\in\{h_{t+1}\}^{\Phi}}T_{h}(h_{t+1}^{\prime}\mid h_{t}, a_{t})=\Pr^{\pi}(\Phi^{\pi}(h_{t+1})\mid\Phi^{\pi}(h_{t}),a_{t}),\] where \(H^{\pi}\subseteq H\) denotes the histories visited under \(\pi\), \(R_{h}^{\pi}(\Phi^{\pi}(h_{t}),a_{t})=\{R_{h}(h_{t}^{\prime},a_{t})\}_{h_{t}^{ \prime}\in\{h_{t}\}^{*}}\), \(\{h_{t}\}_{\pi}^{\Phi}=\{h_{t}^{\prime}\in H_{t}^{\pi}:\Phi^{\pi}(h_{t}^{\prime })=\Phi^{\pi}(h_{t})\}\), and \(\Pr^{\pi}\) is probability under \(\pi\). 
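The conditions in Definitions 4 and 7 can in principle be probed empirically from data gathered under a fixed policy: group transitions by \((\Phi(h_t),a_t)\) and check whether the observed rewards, and with enough samples the distribution over next representations, are the same for every history in the class. The rough sketch below is only illustrative (the function and variable names are our assumptions); it checks the reward condition and merely collects the data one would need for a proper statistical test of the transition condition.

```python
# Rough, illustrative check of the reward part of the (pi-)Markov conditions
# from sampled transitions. Histories are assumed hashable (e.g., tuples).
from collections import defaultdict

def check_reward_condition(transitions, phi):
    """transitions: iterable of (h, a, r, h_next) collected under a fixed policy.
    Returns whether every class (phi(h), a) was observed with a single reward value,
    plus the collected rewards and next-representation sets for inspection."""
    rewards = defaultdict(set)      # (phi(h), a) -> set of observed rewards
    next_reps = defaultdict(set)    # (phi(h), a) -> set of observed phi(h')
    for h, a, r, h_next in transitions:
        key = (phi(h), a)
        rewards[key].add(round(r, 8))
        next_reps[key].add(phi(h_next))
    reward_ok = all(len(r_set) == 1 for r_set in rewards.values())
    return reward_ok, rewards, next_reps
```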
**Definition 8** (\(\pi\)-minimal history representation).: A history representation \(\Phi^{\pi*}:H_{t}^{\pi}\to\bar{H}_{t}^{\pi*}\) with \(\bar{H}_{t}^{\pi*}=\times\bar{\Theta}_{t}^{\pi*}\) is said to be \(\pi\)_-minimal_, if all other history representations \(\Phi:H_{t}^{\pi}\to\bar{H}_{t}^{\pi}\) with \(\bar{H}_{t}^{\pi}=\times\bar{\Theta}_{t}\) and \(|\bar{\Theta}_{t}|\subset|\bar{\Theta}_{t}^{\pi*}|\), for at least one \(h_{t}\in H^{\pi}\), are not \(\pi\)-Markov. ## 6 Policy Confounding We are now ready to describe how and when policy confounding occurs, as well as why we should care, and how we should go about preventing it. The proofs for all theoretical results are deferred to Appendix A. Policy confounding arises naturally as the agent improves its policy. Normally, at the beginning of training, the agent takes exploratory actions to determine which ones yield high rewards. It is only after the agent has committed to a particular policy that we start seeing how some of the variables in its history become irrelevant for predicting future states and rewards. The agent may then choose to ignore these variables and exclude them from its representation if keeping them takes extra 'effort'. The next result demonstrates that a \(\pi\)-Markov history representation \(\Phi^{\pi}\) requires at most the same variables, and in some cases fewer, than a minimal history representation \(\Phi^{*}\), while still satisfying the Markov conditions for those histories visited under \(\pi\), \(h_{t}\in H^{\pi}\). **Proposition 1**.: _Let \(\mathbf{\Phi}^{*}\) be the set of all possible minimal history representations, where every \(\Phi^{*}\in\mathbf{\Phi}^{*}\) is defined as \(\Phi^{*}:H_{t}\rightarrow\bar{H}_{t}^{*}\) with \(\bar{H}_{t}^{*}=\times\bar{\Theta}_{t}^{*}\). For all \(\pi\) and all \(\Phi^{*}\in\mathbf{\Phi}^{*}\), there exists a \(\pi\)-Markov history representation \(\Phi^{\pi}:H_{t}^{\pi}\rightarrow\bar{H}_{t}^{*}\) with \(\bar{H}_{t}^{\pi}=\times\bar{\Theta}_{t}^{*}\) such that for all \(h_{t}\in H^{\pi}\), \(\bar{\Theta}_{t}^{\pi}\subseteq\bar{\Theta}_{t}^{*}\). Moreover, there exist cases for which \(\bar{\Theta}_{t}^{\pi}\) is a proper subset, \(\bar{\Theta}_{t}^{\pi}\neq\bar{\Theta}_{t}^{*}\)._ Although the result above seems intuitive, its truth may appear incidental. While it is clear that \(\Phi^{\pi}\) will never require more variables than the corresponding minimal history representation \(\Phi^{*}\), whether or not \(\Phi^{\pi}\) will require fewer, seems just an arbitrary consequence of the policy being followed. Moreover, since the variables in \(\bar{\Theta}_{t}^{*}\) are all strictly relevant for predicting transitions and rewards, one may think that a policy \(\pi\) inducing representations such that \(\bar{\Theta}_{t}^{\pi}\subset\bar{\Theta}_{t}^{*}\) can never be optimal. However, as shown by the following example, it turns out that the histories visited by a particular policy, especially if it is the optimal policy, tend to contain a lot of redundant information. This is particularly true in environments where future observations are heavily influenced by past actions and observations. In such cases, the current observation often reveals a lot about the agent's trajectory. **Example 1**.: **(Frozen T-Maze)** Let us consider the Frozen T-Maze again (Section 2). Figure 3 shows a dynamic Bayesian network (DBN; Murphy, 2002) describing the dynamics of the environment. Observation variables are denoted by \(x\), while hidden variables are denoted by \(y\). 
The nodes labeled as \(x^{2}\) represent the agent's location from \(t=0\) to \(t=8\). All intermediate nodes between \(t=0\) and \(t=7\) are omitted for simplicity. The nodes labeled as \(y\) indicate whether the goal is to go to the green or the purple cell (see Figure 1). Note that \(y\) always takes the same value at all timesteps within an episode (either green or purple). The information in \(y\) is hidden and only passed to the agent at the start location through the node \(x_{0}^{1}\). On the one hand, if actions are not specified by any particular policy, but simply sampled at random (left diagram), to determine the reward \(r_{8}\) at \(t=8\), one needs to know the signal \(x_{0}^{1}\) received at \(t=0\) and the agent's current location \(x_{8}^{2}\). These are highlighted by the green circles in the left DBN. This is because the actions \(\langle a_{0},...,a_{7}\rangle\) appear as exogenous variables and can take any possible value. Hence, the reward could be either \(-0.1\), (per timestep penalty), \(-1\) (wrong goal), or \(+1\) (correct goal) depending on the actual values of \(x_{1}^{1}\) and \(x_{8}^{2}\). On the other hand, when actions are sampled from the optimal policy \(\pi^{*}\) (right DBN), knowing \(x_{8}^{2}\) (green circle) is sufficient to determine \(r_{8}\). In this second case, \(\pi^{*}\) makes the action \(a_{0}\), and thus all future agent locations, dependent on the initial signal \(x_{0}^{1}\). This occurs because, under the optimal policy (green and purple paths in Figure 1), the agent always takes the action'move up' when receiving the green signal or'move down' when receiving the purple signal, and then follows the shortest path towards each of the goals. As such, we have that, from \(t=1\) onward, \(\Phi^{\pi^{*}}(h_{t})=x_{t}^{2}\) is a \(\pi\)-Markov history representation since it constitutes a sufficient statistic of the history \(h_{t}\) under \(\pi^{*}\). Finally, note that, for the same reason, from \(t=1\), actions may also condition only on \(x^{2}\). The phenomenon highlighted by the previous example is the result of a spurious correlation induced by the optimal policy between the agent's locations \(\langle x_{0}^{2},...,x_{8}^{2}\rangle\) and the reward \(r_{8}\). Generally speaking, this occurs because policies act as confounders, opening backdoor paths between future histories/rewards and the variables in the current history \(h_{t}\)(Pearl, 2000). This is shown by the DBN depicted in Figure 9, where we see that the policy influences both the current history and also future histories/rewards, hence potentially affecting the conditional relationships between some of their variables. For instance, in the above example, \(R^{\pi^{*}}(x_{8}^{2}=\) 'agent at green goal'\()=+1\) when following \(\pi^{*}\), while for an arbitrary \(\pi\), \(R(x_{8}^{2}=\) 'agent at green goal'\()=\pm 1\). Figure 3: A DBN illustrating the phenomenon of policy confounding. The policy opens backdoor path that can affect conditional relations between the variables in \(h_{t}\) and \(h_{t+1}\) Figure 2: Two DBNs representing the dynamics of the Frozen T-Maze environment, when actions are sampled at random (left), and when they are determined by the optimal policy (right). 
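Example 1 is easy to reproduce in a few lines. The following self-contained sketch is a simplification we add for illustration; it is not the authors' implementation, and the corridor length is an assumed parameter. It shows that once the agent commits to the two optimal paths, its row in the corridor alone determines the correct goal, and that a single "ice" perturbation breaks this shortcut.

```python
# Simplified Frozen T-Maze sketch (illustrative only, not the authors' code).
import random

LENGTH = 5  # assumed corridor length

def rollout(ice=False):
    signal = random.choice(["green", "purple"])
    row = 0 if signal == "green" else 1          # optimal habit: up on green, down on purple
    for col in range(1, LENGTH):
        if ice and col == LENGTH // 2:
            row = 1 - row                        # the ice flips the agent to the other corridor
    # The learned habit picks the goal from the current row only, ignoring the signal.
    chosen = "green" if row == 0 else "purple"
    return 1 if chosen == signal else -1

for ice in (False, True):
    avg = sum(rollout(ice=ice) for _ in range(1000)) / 1000
    print("with ice" if ice else "on-policy", avg)
# On-policy the row-only habit always scores +1; with the ice it is systematically wrong.
```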
**Definition 9** (Policy Confounding).: A history representation \(\Phi:H_{t}\to\bar{H}_{t}\) is said to be confounded by a policy \(\pi\) if, for some \(h_{t},h_{t+1}\in H\), \(a_{t}\in A\), \[R^{\pi}(\Phi(h_{t}),a_{t})\neq R^{\pi}(\mathrm{do}(\Phi(h_{t})),a_{t})\quad \text{or}\quad\Pr^{\pi}(\Phi(h_{t+1})\mid\Phi(h_{t}),a_{t})\neq\Pr^{\pi}(\Phi( h_{t+1})\mid\mathrm{do}(\Phi(h_{t})),a_{t})\] The operator \(\mathrm{do}(\cdot)\) is known as the do-operator, and it is used to represent physical interventions in a system (Pearl, 2000). These interventions are meant to distinguish cause-effect relations from mere statistical associations. In our case, \(\mathrm{do}(\Phi(h_{t}))\) means setting the variables forming the history representation \(\Phi(h_{t})\) to a particular value and considering all possible histories in the equivalence class, \(h^{\prime}_{t}\in\{h_{t}\}^{\Phi}\). That is, independently of what policy is being followed. It is easy to show that the underlying reason why a \(\pi\)-Markov history representation may require fewer variables than the minimal history representation (as in Example 1) is indeed policy confounding. **Theorem 1**.: _Let \(\Phi^{*}:H_{t}\to\bar{H}_{t}^{*}\) with \(\bar{H}_{t}^{*}=\times\bar{\Theta}_{t}^{*}\) be a minimal history representation. If, for some \(\pi\), there is a \(\pi\)-Markov history representation \(\Phi^{\pi}:H_{t}^{\pi}\to\bar{H}_{t}^{\pi}\) with \(\bar{H}_{t}^{\pi}=\times\bar{\Theta}_{t}^{\pi}\), such that \(\bar{\Theta}_{t}^{\pi}\subset\bar{\Theta}_{t}^{*}\) for some \(h_{t}\in H\), then \(\Phi^{\pi}\) is confounded by policy \(\pi\)._ Finally, to conclude this section, we demonstrate that even though, in Example 1, the variables included in the \(\pi\)-minimal history representation are a subset of the variables in the minimal history representation, \(\bar{\Theta}_{t}^{\pi*}\subset\bar{\Theta}_{t}^{*}\), this is not always the case, as \(\bar{\Theta}_{t}^{\pi*}\) may contain superfluous variables (Definition 6). An example illustrating this situation is provided in Appendix B (Example 4). **Proposition 2**.: _Let \(\{\bar{\Theta}_{t}^{*}\}_{\cup\Phi^{*}}\) be the union of variables in all possible minimal history representations. There exist cases where, for some \(\pi\), there is a \(\pi\)-minimal history representation \(\Phi^{\pi*}:H_{t}^{\pi}\to\bar{H}_{t}^{\pi*}\) with \(\bar{H}_{t}^{\pi*}=\times\bar{\Theta}_{t}^{\pi*}\) such that \(\bar{\Theta}_{t}^{\pi*}\setminus\{\bar{\Theta}_{t}^{*}\}_{\cup\Phi^{*}}\neq\emptyset\)._ ### Why should we care about policy confounding? Leveraging spurious correlations to develop simple habits can be advantageous when resources such as memory, computing power, or data are limited. Agents can disregard and exclude from their representation those variables that are redundant under their policies. However, the challenge is that some of these variables may be crucial to ensure that the agent behaves correctly when the context changes. In the Frozen T-Maze example from Section 2, we observed how the agent could no longer find the correct goal when the ice pushed it away from the optimal trajectory. This is a specific case of a well-researched issue known as out-of-distribution (OOD) generalization (Quionero-Candela et al., 2009; Arjovsky, 2021). We refer to it as _out-of-trajectory_ (OOT) generalization to highlight that the problem arises due to repeatedly sampling from the same policy and thus following the same trajectories.
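A small numerical illustration of Definition 9 in the spirit of the Frozen T-Maze: under the optimal policy, a representation that keeps only the agent's final location predicts the terminal reward perfectly, but the interventional (do-operator-style) estimate over all histories in the equivalence class tells a different story. This is a toy simulation, not the authors' experiment; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_row_and_reward(n, optimal=True):
    signal = rng.integers(0, 2, size=n)        # 0 = green, 1 = purple
    if optimal:
        row = signal.copy()                    # under pi*, the final location mirrors the signal
    else:
        row = rng.integers(0, 2, size=n)       # arbitrary trajectories: location uninformative
    reward = np.where(row == signal, 1.0, -1.0)
    return row, reward

# Conditional estimate under pi*: the location-only representation looks sufficient
row_pi, rew_pi = final_row_and_reward(10_000, optimal=True)
R_pi = {r: rew_pi[row_pi == r].mean() for r in (0, 1)}    # both close to +1

# "do"-style estimate: same representation value, all histories in the equivalence class
row_do, rew_do = final_row_and_reward(10_000, optimal=False)
R_do = {r: rew_do[row_do == r].mean() for r in (0, 1)}    # both close to 0

print(R_pi, R_do)  # the mismatch is the policy confounding of Definition 9
```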
In contrast to previous works (Kirk et al., 2023) that address generalization to environments that differ from the training environment, our objective here is to generalize to trajectories the agent never (or only rarely) takes.3 Footnote 3: Note that in the Frozen T-Maze environment, the ice does change the environment dynamics. However, its purpose is to compel the agent to take trajectories different from the optimal ones. The way we implemented it, the effect of the ice would be equivalent to forcing the agent to move down twice when in the top cell or move up twice when in the bottom cell. These trajectories are feasible in the original environment. Ideally, the agent should aim to learn representations that enable it to predict future rewards and transitions even when experiencing slight variations in its trajectory. Based on Definition 4, we know that, in general, only a Markov history representation satisfies these requirements. However, computing such representations is typically intractable (Ferns et al., 2006), and thus most standard RL methods usually learn representations by maximizing an objective function that depends on the distribution of trajectories \(P^{b}(\tau)\) visited under a behavior policy \(b\) (e.g., expected return, \(\mathbb{E}_{\tau\sim P^{b}(\tau)}\left[G(\tau)\right]\); Sutton and Barto, 2018). The problem is that \(b\) may favor certain trajectories over others, which may lead to the exploitation of spurious correlations in the learned representation. ### When should we worry about OOT generalization in practice? The previous section highlighted the generalization failures of representations that depend on spurious correlations. Now, let us delve into the circumstances in which policy confounding is most prone to cause problems. **Function approximation.** Function approximation has enabled traditional RL methods to scale to high-dimensional problems with long-term memory dependencies, where storing values in lookup tables is infeasible. Using parametric functions (e.g., neural networks) to model policies and value functions, agents can learn abstractions by grouping together histories if these yield the same transitions and rewards. As mentioned before, abstractions occur naturally when histories are represented by a set of variables, since the functions simply need to ignore some of these variables. However, this also implies that value functions and policies are exposed to spurious correlations. If a particular variable becomes irrelevant due to policy confounding, the function may learn to ignore it and remove it from its representation (Example 1). This is in contrast to tabular representations, where every history takes a separate entry. Even though there exist algorithms that perform history (state) abstractions in tabular settings (Andre and Russell, 2002; Givan et al., 2003), these abstractions are normally formed offline, before learning (computing) the policy, hence avoiding the risk of policy confounding. **Narrow trajectory distributions.** In practice, agents are less prone to policy confounding when the trajectory distribution \(P^{b}(\tau)\) is broad (i.e., when \(b\) encompasses a wide set of trajectories) than when it is narrow. This is because the spurious correlations present in certain trajectories are less likely to have an effect on the learned representations.
On-policy methods (e.g., SARSA, Actor-Critic; Sutton and Barto, 2018) are particularly troublesome for this reason, since the same policy that is being updated must also be used to collect the samples. Yet, even when the trajectory distribution is narrow, there is no reason why the agent should pick up on spurious correlations while its policy is still being updated. Only when the agent commits to a particular policy should we start worrying about policy confounding. At this point, lots of the same trajectories are being used for training, and the agent may _'forget'_ (French, 1999) that, even though certain variables may no longer be needed to represent the current policy, they were important under previous policies. This generally occurs at the end of training, when the agent has converged to a particular policy. However, if policy confounding occurs earlier during training, it may prevent the agent from further improving its policy (Nikishin et al., 2022; please refer to Appendix C for more details). ### What can we do to improve OOT generalization? As mentioned in the introduction, we do not have a complete answer to the problem of policy confounding. Yet, here we offer a few off-the-shelf solutions that, while perhaps limited in scope, can help mitigate the problem in some situations. These solutions revolve around the idea of broadening the distribution of trajectories so as to dilute the spurious correlations introduced by certain policies. **Off-policy methods.** We already explained in Section 6.2 that on-policy methods are particularly prone to policy confounding since they are restricted to using samples coming from the same policy. A rather obvious solution is to instead use off-policy methods, which allow using data generated from previous policies. Because the samples come from a mixture of policies, it is less likely that the model will pick up the spurious correlations present in specific trajectories. However, as we shall see in the experiments, this alternative works only when replay buffers are large enough. This is because standard replay buffers are implemented as queues, and hence the first experiences coming in are the first being removed. This implies that a replay buffer that is too small will contain samples coming from few and very similar policies. Since there is a limit on how large replay buffers are allowed to be, future research could explore other, more sophisticated, ways of deciding which samples to store and which ones to remove (Schaul et al., 2016). **Exploration and domain randomization.** When allowed, exploration may mitigate the effects of policy confounding and prevent agents from overfitting to their preferred trajectories. Exploration strategies have already been used for the purpose of generalization: to guarantee robustness to perturbations in the environment dynamics (Eysenbach and Levine, 2022), or to boost generalization to unseen environments (Jiang et al., 2022). The goal for us is to remove, to the extent possible, the spurious correlations introduced by the current policy. Unfortunately, though, exploration is not always without cost. Safety-critical applications require the agent to stay within certain boundaries (Altman, 1999; Garcia and Fernandez, 2015). When training on a simulator, an alternative to exploration is domain randomization (Tobin et al., 2017; Peng et al., 2018; Machado et al., 2018).
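To make the point about queue-like replay buffers concrete, the following minimal sketch (with made-up sizes roughly mirroring the 10K vs. 100K buffers used later in the experiments) shows how a small FIFO buffer ends up containing samples from only the most recent, very similar policies:

```python
from collections import deque

class ReplayBuffer:
    """Standard FIFO replay buffer: once full, the oldest transitions are dropped first."""

    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def policies_represented(self):
        # each transition is tagged with the index of the policy that generated it
        return sorted({policy_id for policy_id, _ in self.storage})

small, large = ReplayBuffer(10_000), ReplayBuffer(100_000)
for policy_id in range(100):          # 100 successive policy updates...
    for step in range(1_000):         # ...each contributing 1000 transitions
        small.add((policy_id, step))
        large.add((policy_id, step))

print(small.policies_represented())   # only the last ~10 (very similar) policies remain
print(large.policies_represented())   # a broad mixture of all 100 policies
```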
The empirical results reported in the next section suggest that agents become less susceptible to policy confounding when adding enough stochasticity to the environment or to the policy. Yet, there is a limit on how much noise can be added to the environment or the policy without altering the optimal policy (Sutton and Barto, 2018, Example 6.6: Cliff Walking). ## 7 Experiments The goal of the experiments is to: (1) demonstrate that the phenomenon of policy confounding described by the theory does occur in practice, (2) uncover the circumstances under which agents are most likely to suffer the effects of policy confounding and fail to generalize, and (3) evaluate how effective the strategies proposed in the previous section are in mitigating these effects. ### Experimental setup Agents are trained with an off-policy method, DQN (Mnih et al., 2015), and an on-policy method, PPO (Schulman et al., 2017). To be able to analyze the learned representations more easily, we represent policies and value functions as feedforward neural networks and use a stack of past observations as input in the environments that require memory. We report the mean return as a function of the number of training steps. Training is interleaved with periodic evaluations on the original environments and variants thereof used for validation. The results are averaged over 10 random seeds. Please refer to Appendix F for more details about the experimental setup. ### Environments We ran our experiments on three grid-world environments: the **Frozen T-Maze** from Section 2, and the **Key2Door** and **Diversion** environments described below. We use these as pedagogical examples to clarify the ideas introduced by the theory. Nonetheless, in Appendix C, we refer to previous works showing evidence of particular forms of policy confounding in high-dimensional domains. **Example 2**.: **Key2Door.** Here, the agent needs to collect a key placed at the beginning of the corridor in Figure 4 (left) and then open the door at the end. The observations do not show whether the key has already been collected. Thus, to solve the task in the minimum number of steps, the agent must remember that it already got the key when going to the door. Yet, since during training the agent always starts the episode at the first cell from the left, it can forget about the key once it has reached the third cell on its way to the door. As in the Frozen T-Maze example, the agent can build the habit of using its own location to tell whether it has or has not got the key yet. This can only occur when the agent consistently follows the optimal policy, depicted by the purple arrow. Otherwise, if the agent moves randomly through the corridor, it is impossible to tell whether the key has or has not been collected. In contrast, in the evaluation environment, the agent always starts at the second-to-last cell. This confuses the agent, which is used to already having the key by the time it reaches said cell. A DBN describing the dynamics of the environment is provided in Appendix D. **Example 3**.: **Diversion.** Here, the agent must move from the start state to the goal state in Figure 4 (right). The observations are length-\(8\) binary vectors. The first \(7\) elements indicate the column where the agent is located. The last element indicates the row. This environment aims to show that policy confounding can occur not only when the environment is partially observable, as was the case in the previous examples, but also in fully observable scenarios.
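For concreteness, one natural reading of the Diversion observation described above is the following encoding (we assume a one-hot column indicator; the exact encoding used in the experiments may differ slightly):

```python
import numpy as np

def diversion_observation(col, row):
    """Length-8 binary observation: elements 0-6 encode the agent's column, element 7 its row."""
    obs = np.zeros(8, dtype=np.float32)
    obs[col] = 1.0        # column indicator (one of the 7 columns)
    obs[7] = float(row)   # 0 = top row, 1 = bottom row (labels assumed)
    return obs

print(diversion_observation(col=3, row=0))  # [0. 0. 0. 1. 0. 0. 0. 0.]
```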
After the agent learns the optimal trajectory depicted by the green arrow, it can disregard the last element in the observation vector. This is because, if the agent does not deviate, the bottom row is never visited. Rather than forgetting past information, the agent ignores the last element in the current observation vector, since it is irrelevant when following the optimal trajectory. We train the agent in the original environment and evaluate it in a version with a yellow diversion sign in the middle of the maze that forces the agent to move to the bottom row. A DBN describing the dynamics of the environment is provided in Appendix D. Figure 4: Illustrations of the Key2Door (left) and Diversion (right) environments. ### Results **On-policy vs. off-policy.** The results in Figure 6 reveal the same pattern in all three environments. PPO fails to generalize outside the agent's preferred trajectories. After an initial phase where the average returns on the training and evaluation environments increase ('PPO train' and 'PPO eval'), the return on the evaluation environments ('PPO eval') starts decreasing when the agent commits to a particular trajectory, as a result of policy confounding. In contrast, since the training samples come from a mixture of policies, DQN performs optimally in both variants of the environments ('DQN train' and 'DQN eval') long after converging to the optimal policy.4 A visualization of the history representations learned with PPO, showing that the policy does ignore variables that are necessary for generalization, is provided in Appendix E.1. Footnote 4: The small gap between ‘DQN train’ and ‘DQN eval’ is due to the \(-0.1\) penalty per timestep. In all three environments, the shortest path is longer in the evaluation environment than in the training environment. **Large vs. small replay buffers.** We mentioned in Section 6.3 that the effectiveness of off-policy methods against policy confounding depends on the size of the replay buffer. The results in Figure 6 (left) confirm this claim. The plot shows the performance of DQN in the Frozen T-Maze environment when the replay buffer contains \(100\)K experiences and when it only contains the last \(10\)K experiences. We see that, in the second case, the agent's performance in the evaluation environment decreases (red curve, left plot). This is because, after the initial exploration phase, the distribution of trajectories becomes too narrow, and the spurious correlations induced by the latest policies dominate the replay buffer. Similar results for the other two environments are provided in Appendix E.2. **Exploration and domain randomization.** The last experiment shows that, if sufficient exploration is allowed, DQN may still generalize to different trajectories, even when using small replay buffers (blue curve, right plot of Figure 6). In the original configuration, the exploration rate \(\epsilon\) for DQN starts at \(\epsilon=1\) and decays linearly to \(\epsilon=0.0\) after \(20\)K steps. For this experiment, we set the final exploration rate to \(\epsilon=0.1\). In contrast, since exploration in PPO is normally controlled by the entropy bonus, which makes it hard to ensure fixed exploration rates, we add noise to the environment instead. The red curve in Figure 6 (right) shows that, when we train in an environment where the agent's actions are overridden by a random action with \(20\%\) probability, the performance of PPO in the evaluation environment does not degrade after the agent has converged to the optimal policy.
This suggests that the added noise prevents the samples containing spurious correlations from dominating the training batches. However, it may also happen that random noise is not sufficient to remove the spurious correlations. As shown in Figure 13 (Appendix E.2), in the Key2Door environment, neither forcing the agent to take random actions \(20\%\) of the time nor setting \(\epsilon=0.1\) solves the OOT generalization problem. Similar results for Diversion are provided in Appendix E.2. ## 8 Conclusion This paper described the phenomenon of policy confounding. We showed, both theoretically and empirically, how, as a result of following certain trajectories, agents may pick up on spurious correlations and build habits that are not robust to trajectory deviations. We also uncovered the circumstances under which policy confounding is most likely to occur in practice and suggested a few ad hoc solutions that may mitigate its effects. We conceive of this paper as a stepping stone to explore more sophisticated solutions. An interesting avenue for future research is the integration of tools from the field of causal inference (Pearl et al., 2016; Peters et al., 2017) to aid the agent in forming history representations that are grounded in causal relationships rather than mere statistical associations (Lu et al., 2018; Zhang et al., 2020; Sontakke et al., 2021; Saengkyongam et al., 2023). ## Acknowledgements This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 758824, INFLUENCE).
2305.03257
Data-driven and Physics Informed Modelling of Chinese Hamster Ovary Cell Bioreactors
Fed-batch culture is an established operation mode for the production of biologics using mammalian cell cultures. Quantitative modeling integrates both kinetics for some key reaction steps and optimization-driven metabolic flux allocation, using flux balance analysis; this is known to lead to certain mathematical inconsistencies. Here, we propose a physically-informed data-driven hybrid model (a "gray box") to learn models of the dynamical evolution of Chinese Hamster Ovary (CHO) cell bioreactors from process data. The approach incorporates physical laws (e.g. mass balances) as well as kinetic expressions for metabolic fluxes. Machine learning (ML) is then used to (a) directly learn evolution equations (black-box modelling); (b) recover unknown physical parameters ("white-box" parameter fitting) or -- importantly -- (c) learn partially unknown kinetic expressions (gray-box modelling). We encode the convex optimization step of the overdetermined metabolic biophysical system as a differentiable, feed-forward layer into our architectures, connecting partial physical knowledge with data-driven machine learning.
Tianqi Cui, Tom S. Bertalan, Nelson Ndahiro, Pratik Khare, Michael Betenbaugh, Costas Maranas, Ioannis G. Kevrekidis
2023-05-05T03:09:33Z
http://arxiv.org/abs/2305.03257v1
# Data-driven and Physics Informed Modelling ###### Abstract Fed-batch culture is an established operation mode for the production of biologics using mammalian cell cultures. Quantitative modeling integrates both kinetics for some key reaction steps and optimization-driven metabolic flux allocation, using flux balance analysis; this is known to lead to certain mathematical inconsistencies. Here, we propose a physically-informed data-driven hybrid model (a "gray box") to learn models of the dynamical evolution of Chinese Hamster Ovary (CHO) cell bioreactors from process data. The approach incorporates physical laws (e.g. mass balances) as well as kinetic expressions for metabolic fluxes. Machine learning (ML) is then used to (a) directly learn evolution equations (black-box modelling); (b) recover unknown physical parameters ("white-box" parameter fitting) or--importantly--(c) learn partially unknown kinetic expressions (gray-box modelling). We encode the convex optimization step of the overdetermined metabolic biophysical system as a differentiable, feed-forward layer into our architectures, connecting partial physical knowledge with data-driven machine learning. ## 1 Introduction Chinese hamster ovary (CHO) cells are broadly used in biological and medical research, acting as the most common mammalian cell line used for the production of therapeutic proteins [1]. The advantage of using CHO cells is that the correct (i.e., mammalian-specific) glycosylation patterns are achieved for the protein therapeutics (e.g., therapeutic antibodies). Compared with conventional batch culture, fed-batch fermentation is more commonly used in this type of cell line, since it allows for easier control of the concentrations of certain nutrients that can affect the yield or productivity of the desired protein therapeutic molecule by ensuring the availability of precursor amino acids [2]. However, lack of a complete, clear, quantitative model of the metabolism becomes an obstacle to achieving accurate and precise system simulation and control. In the past several decades, mathematical models that incorporate physical knowledge have been extensively applied in the analysis of cell metabolism [3, 4]. Metabolic Flux Analysis (MFA) leveraging stable carbon (i.e., 13C) labelled substrates techniques is the only technique that can provide information on internal fluxes [5, 6, 7]. Flux Balance Analysis (FBA), on the other hand provides a global inventory of carbon and energy resources throughout metabolism. By applying optimization principles, maximum theoretical yields for biomass formation or other products (e.g., metabolites or proteins) can be derived [8, 9]. Sometimes, for certain metabolic steps, detailed kinetic expressions are available that given the metabolite concentrations and enzyme levels can accurately estimate the flux through the metabolic reaction [10]. This requires the identification of the values of a number of enzymatic parameters. However, these expressions are usually available only for a subset of reactions, necessitating a hybrid modeling approach, where optimization is used to identify metabolic fluxes for the remainder of reactions that lack kinetic expressions. This gives rise to a system of ordinary differential equations (ODEs) determined by the stoichiometry of the reactions. In addition, given the fact that the metabolic reactions usually have relatively fast time constants (e.g. 
in the order of milliseconds to seconds) compared with other cellular processes like growth and death of cells, the pseudo-steady-state assumption (PSSA) suggests that the accumulation rate of any and every intracellular metabolite can be usefully approximated as zero. For some reactions, (ir)reversibility can be posited based on thermodynamics considerations; for others, reaction rates can be estimated from chemical kinetics considerations [11]. Data-driven approaches are today increasingly employed for identification of complex system dynamics, including traditional regression methods as well as neural networks and their variants [12; 13; 14; 15; 16]. It is known, since the early 1990s, that neural networks embedded within numerical integrators can fruitfully approximate differential equations, and even learn corrections to approximate physical models, supplementing/enhancing them [13; 12; 17; 18; 14]. They can also be used to directly infer the evolution of the system variables when underlying physics are unclear [19; 20; 21; 22; 23]. Physics-Informed Neural Networks (PINNs) [24], Systems-Biology-Informed Neural Networks (SBINNs) [25; 26], and similar architectures [27], can and have been used to solve supervised learning tasks while respecting known laws of physics, system biology, et al [28]. Nevertheless, as we will discuss below, the ambiguous structure of metabolic models creates nontrivial technical difficulties in exploiting partially known physical information from experimental fed-batch culture metabolic data; and can drastically affect the training process for gray box neural networks trying to infer such models from experiments. Our goal in this paper is to elucidate the nature of these modeling ambiguities, demonstrating the ways in which they necessitate modifications of the architectures -and of the training- of traditional neural networks used for the identification task; and to implement networks capable of usefully identifying metabolic kinetics/parameters exploiting a synergy between physical modeling and scientific computation in neural network training. ## 2 Methods ### Structure of the Biophysical Model In a nutshell, the hybrid Chinese hamster ovary (CHO) bioreaction model we will use below (incorporating certain modifications (see appendix D and appendix E,) to the model presented in [10], which constitutes our starting point) describes a continuous-time dynamical system (the terms are defined in table 1: \[\frac{\mathrm{d}\mathbf{C}}{\mathrm{d}t}=\mathbf{f}_{\text{eqncode}}(\mathbf{C};\mathbf{v}( \mathbf{C};\mathbf{\alpha})); \tag{1}\] These evolution equations appear at first sight as simple ordinary differential equations (see appendix B for expressions of eq. (1)); yet, since evaluating the right-hand-side involves--as we will see--solving an optimization problem, we need another temporary label for the nature of the equations. Connecting with existing literature [29; 30; 9] we will here refer to these as **D**ynamic **F**lux **B**alance **A**nalysis (DFBA) equations. 
Here, \(\mathbf{C}\in\mathbb{R}^{K}\)\((K=14)\) are variables tracked by experiments (which, though they might include concentrations of metabolites, cell densities, or other variables, we will simply refer to as "concentrations" for simplicity, see table 3); \(\mathbf{v}\in\mathbb{R}^{N}\)\((N=E+I=35)\) are all fluxes (reaction rates, see appendix C for all reaction expressions) including \(I\) intracellular fluxes (which can be precomputed from the \(\mathbf{C}\)) \(\mathbf{v}_{I}\in\mathbb{R}^{I}\)\((I=14)\) and \(E\) extracellular fluxes \(\mathbf{v}_{E}\in\mathbb{R}^{E}\)\((E=21)\). Some of extracellular fluxes are assumed to be irreversible (\(\mathbf{v}_{E,ir}\in\mathbb{R}^{E_{ir}}\)\((E_{ir}=14),\mathbf{v}_{E,ir}\geq 0\)), while others are assumed reversible (\(\mathbf{v}_{E,r}\in\mathbb{R}^{E_{r}}\)\((E_{r}=7)\)); \(v\) is a function of \(\mathbf{C}\) and \(\mathbf{\alpha}\), where \(\mathbf{\alpha}\in\mathbb{R}^{P}\)\((P=45)\) are the kinetic parameters. \begin{table} \begin{tabular}{c|c|c} \hline \hline Notation & Variable & Dimension \\ \hline \(\mathbf{C}\) & Variables tracked by experiments & \(K=14\) \\ \(\mathbf{\alpha}\) & Kinetic parameters & \(P=45\) \\ \(\mathbf{v}_{I}\) & Intracellular fluxes & \(I=14\) \\ \(\mathbf{v}_{E,r}\) & Reversible (extracellular) fluxes & \(E_{r}=7\) \\ \(\mathbf{v}_{E,ir}\) & Irreversible (extracellular) fluxes & \(E_{ir}=14\) \\ \(\mathbf{v}_{E}\) & Extracellular fluxes & \(E=E_{ir}+E_{r}=21\) \\ \(\mathbf{v}\) & All fluxes & \(N=E+I=35\) \\ \(\mathbf{S}\) & Stoichiometric matrix & \(M\times N=24\times 35\) \\ \hline \hline \end{tabular} \end{table} Table 1: Notation and dimensions for all variables used (the fact that here, \(K=14,I=14\) and \(E_{ir}=14\) is a coincidence). Given \(\mathbf{C}\) and \(\mathbf{\alpha}\), the evaluation of \(\mathbf{f}_{\mathbf{eqn:node}}\) in eq. (1) is typically done in one of two very different ways. Both involve the following steps, but differ in the particular combination of objective/constraints and the optimization approach used to enforce them. Given \(\mathbf{\alpha}\), and an initial set of values \(\mathbf{C}_{0}\) for the concentrations, the time derivatives of the concentrations (e.g. RHS of Equation eq. (1)) can be computed via the following steps. 1. Compute preliminary updates of intracellular flux rates \(\hat{\mathbf{v}}_{I}\in\mathbb{R}^{I}\)\((I=14)\) according to the concentrations \(\mathbf{C}\) and given formulas of kinetic equations \[\hat{\mathbf{v}}_{I}=\mathbf{f}_{\mathbf{kin}}(\mathbf{C};\mathbf{\alpha}),\] (2) where \(\mathbf{f}_{\mathbf{kin}}:\mathbb{R}^{K\times P}\mapsto\mathbb{R}^{I}\) (see appendix D for formulas of all kinetic expressions and appendix E for the changes of kinetic expressions we made based on the model in [10]). 2. The fluxes have to satisfy some constraints: * Known kinetic expressions, i.e. Equation eq. (2). * The pseudo steady state assumption, which requires \[\mathbf{S}\cdot\mathbf{v}=0,\] (3) involving the stoichiometric matrix \(\mathbf{S}\in\mathbb{R}^{M\times N}\) (\(M=24\) is the number of metabolites at steady state, see appendix F for all entries of \(\mathbf{S}\)). 
If we split the columns of \(\mathbf{S}\) according to the \(I\) and \(E\) components (that is, \(\mathbf{S}_{I}=\mathbf{S}\cdot\mathbf{B}_{I},\mathbf{S}_{E}=\mathbf{S}\cdot \mathbf{B}_{E}\) where \(\mathbf{B}_{I}\in\mathbb{R}^{N\times I}\) and \(\mathbf{B}_{E}\in\mathbb{R}^{N\times E}\) are two indicator matrices showing the intracellular and extracellular indices of all reactions), we have an equivalent form of eq. (3), \[\mathbf{S}_{I}\cdot\mathbf{v}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E}=0\] * Among all 21 extracellular fluxes \(\mathbf{v}_{E}\), 14 of them are known to be irreversible, which requires \[\mathbf{v}_{E,ir}=\mathbf{B}_{ir}\cdot\mathbf{v}_{E}\geq 0,\] (4) where \(\mathbf{B}_{ir}\in\mathbb{R}^{E_{ir}\times E}\) is an indicator matrix containing the indices of irreversible fluxes among all extracellular ones. Notice that the combination of Equations eq. (2) and eq. (3) consists of 38 independent linear equations, while the unknown variable \(\mathbf{v}\) is only 35-dimensional, leading to an overdetermined system. To address this issue, one can choose to satisfy some equations exactly, and others approximately (e.g. in a least squares sense). This leads to two substantially different approaches, the "kinetic-based" and the "stoichiometric-based", for computing intracellular flux rates \(\mathbf{v}_{I}\) and extracellular \(\mathbf{v}_{E}\). It is important to state that these two approaches will, in general, lead to substantially different dynamic evolution for the same initial conditions of a metabolic kinetic scheme. 1. The kinetic-based approach: we satisfy the kinetic equations eq. (2), and approximately satisfy the stoichiometric equations eq. (3), which leads to \[\begin{cases}\mathbf{v}_{I}=\hat{\mathbf{v}}_{I},\\ \mathbf{v}_{E}=\operatorname*{argmin}_{\mathbf{v}_{E}}||\mathbf{S}_{I}\cdot\hat{\mathbf{v }}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E}||_{2}^{2},\text{ s.t. }\mathbf{B}_{ir}\cdot\mathbf{v}_{E}\geq 0.\end{cases}\] (5) Here, we realize that the optimization problem is a linear least-squares problem with constraints (which implies it is actually a convex optimization problem). Moreover, if we ignored the constraints, we would be able to obtain an analytical solution for \(\mathbf{v}_{E}\) by computing the pseudo inverse of \(\mathbf{S}_{E}\): \[\mathbf{v}_{E}=-(\mathbf{S}_{E})^{+}\cdot\mathbf{S}_{I}\cdot\hat{\mathbf{v}}_{I}.\] 2. The stoichiometric-based approach [10]: we satisfy the stoichiometric equations exactly eq. (3), and then approximately satisfy the kinetic equations eq. (2), which leads to \[(\mathbf{v}_{I},\mathbf{v}_{E})=\operatorname*{argmin}_{\mathbf{v}_{I},\mathbf{v}_{E}}||\mathbf{v }_{I}-\hat{\mathbf{v}}_{I}||_{2}^{2},\text{ s.t. }\mathbf{S}_{I}\cdot\mathbf{v}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E}=0, \mathbf{B}_{ir}\cdot\mathbf{v}_{E}\geq 0.\] (6) This is a least squares optimization problem with linear constraints. If we ignored the inequality constraints, we would obtain an analytical solution of \((\mathbf{v}_{I},\mathbf{v}_{E})\) by the Lagrange multiplier approach, see appendix G for details. To help with the numerics of the two embedded optimization problems, we in fact rescale the supplied \(\hat{\mathbf{v}}_{I}\) values: divide them by \(1000\) to adjust their numerical range to \(\sim 1-10\) upon entering either optimization problem, and multiply the resulting fluxes \(\mathbf{v}_{I}\) and \(\mathbf{v}_{E}\) by the same factor before exiting. This does not change the solution, but improves the numerical conditioning. 
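As a concrete illustration, the kinetic-based problem in eq. (5) is a small constrained least-squares program that an off-the-shelf convex solver handles directly. The sketch below uses cvxpy (the same modeling layer that the differentiable formulation of section 2.3.2 builds on); the stoichiometric and indicator matrices here are random placeholders with the dimensions of table 1, not the actual model data:

```python
import cvxpy as cp
import numpy as np

M, I, E, E_ir = 24, 14, 21, 14                      # dimensions from table 1
rng = np.random.default_rng(0)
S_I = rng.standard_normal((M, I))                   # placeholder for the intracellular block of S
S_E = rng.standard_normal((M, E))                   # placeholder for the extracellular block of S
B_ir = np.eye(E)[:E_ir]                             # placeholder indicator of irreversible fluxes
v_I_hat = rng.standard_normal(I)                    # preliminary kinetic fluxes from eq. (2)

v_E = cp.Variable(E)
problem = cp.Problem(
    cp.Minimize(cp.sum_squares(S_I @ v_I_hat + S_E @ v_E)),  # least-squares PSSA residual
    [B_ir @ v_E >= 0],                                        # irreversibility, eq. (4)
)
problem.solve()
print(problem.status, float(problem.value))
```

The stoichiometric-based problem eq. (6) follows the same pattern, with \(\mathbf{v}_{I}\) added as a decision variable, the objective replaced by \(||\mathbf{v}_{I}-\hat{\mathbf{v}}_{I}||_{2}^{2}\), and the pseudo-steady-state balance imposed as an equality constraint.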
After the optimization step we have \[\mathbf{v}=\mathbf{B}_{I}\cdot\mathbf{v}_{I}+\mathbf{B}_{E}\cdot\mathbf{v}_{E}=\mathbf{f}_{ \mathrm{optim}}^{(\ell)}(\hat{\mathbf{v}}_{I}), \tag{7}\] where \(\ell\in\{k,s\}\) indicates whether we are using the kinetic or stoichiometric approach to finding fluxes. 3. Compose eq. (1), eq. (7), and eq. (2) to create \[\frac{\mathrm{d}\mathbf{C}}{\mathrm{d}t}=\mathbf{f}_{\mathbf{eqn:ode}}(\mathbf{C};\mathbf{f}_{ \mathrm{optim}}^{(\ell)}(\mathbf{f}_{\mathbf{kin}}(\mathbf{C};\mathbf{\alpha}))).\] (8) As the fluxes \(\mathbf{v}\) are the reaction rates for each of the \(E+I\) reactions, this follows directly from the stoichiometry of these reactions. Whether using the first or the second approach, the resulting set of equations can subsequently be integrated using an error-controlled integrator to obtain a full time series of all concentrations, for example, \[\mathbf{C}(t=t_{n+1})=\mathbf{f}_{\mathbf{int}}(\mathbf{C}(t=t_{n});\mathbf{f}_{\mathbf{eqn:ode }})=\mathbf{C}(t=t_{n})+\int_{t_{n}}^{t_{n+1}}\mathbf{f}_{\mathbf{eqn:ode}}(\mathbf{C};\bm {v}(\mathbf{C};\mathbf{\alpha}))\;\mathrm{d}t, \tag{9}\] where \(\{t_{i}:i=0,1,2,\cdots\}\) is the set of equal-spaced timestamps. It is important however to note that rate discontinuities potentially can (and actually do) arise at time instances when different constraints become active (see fig. 3). Note also that typical operating protocols of bioreactors often call for the addition of species (e.g. nutrients) at particular time instances, thus leading to temporal discontinuities in the system states. We will illustrate both these types of discontinuities below in section 3.1. Several important contributions on which this paper is based were established in previous work, beginning with applications to _E. coli_[9] and then proceeding to the more recent mammalian biomanufacturing targets [10]. Beyond the constraints on our inner (optimization) problem that we showed above, additional constraints were imposed in [9] on their outer (time-integration) problem, such as non-negative metabolites and limits on the rate-of-change of fluxes. In their dynamic (resp. static) optimization approach (DOA, resp. SOA) they determined fluxes over an entire trajectory (resp. one trajectory segment, with constant fluxes). Our simulations can be thought of as a form SOA, with the segment being a single integration step (as also in [10]). Before we start, a note on the computation of model gradients: many accurate integrators require the system Jacobian as well as sensitivities w.r.t. parameters. This is also important for the integration of differential-algebraic systems of equations (differential equations with equality constraints). Furthermore, these gradients (w.r.t. state variables and/or parameters) are crucial in identification tasks: training neural networks to approximate the system equations and/or estimate their parameters from data. As we described above, our evolution equations are not simple explicit ordinary differential equations, but rather, their right-hand side arises as the result of solving an optimization problem, depending on the current state. This renders the accurate evaluation of these ODEs (as well as their sensitivity and variational computations) less straightforward than the explicit right-hand-side case. ### Black-box Model Our black-box model is a multi-layer perceptron (MLP) embedded within a numerical integrator scheme (e.g. 
the forward-Euler scheme or the Runge-Kutta template), where the MLP (\(\mathrm{NN}_{\mathrm{b}}(\cdot;\mathbf{\theta})\)) is used to learn the right-hand-side (RHS) of the ODE: \[\tilde{\mathbf{C}}(t=t_{n+1})=\mathbf{f}_{\mathbf{int}}(\mathbf{C}(t=t_{n});\mathrm{NN}_{ \mathrm{b}})=\mathbf{C}(t=t_{n})+\int_{t_{n}}^{t_{n+1}}\mathrm{NN}_{\mathrm{b}}( \mathbf{C};\mathbf{\theta})\;\mathrm{d}t. \tag{10}\] The details of generating the datasets can be found in section 3.1. Note that here the right-hand-side depends only on the system state; it is also possible to make the Neural Network eq. (10) dependent on physical input parameters (such as feeding conditions or basal gene expression rates), by including these parameters as additional inputs to the NN function. This will be important if, at a later stage, one wishes to optimize operating conditions towards some additional global objective (e.g. maximal biomass production). This possibility has been demonstrated in older work [17]; it will not be repeated here. ### White-box and Gray-box Models #### 2.3.1 Model Structures In contrast with the black-box model which is purely data-driven, white-box and gray-box models benefit from existing physical knowledge, leaving only the unknown parts of the model trainable. In this paper, these two models have structure similar to that of what we deem the ground-truth biophysical model (see fig. 1), with changes limited to the computation of preliminary intracellular fluxes \(\hat{\mathbf{v}}_{I}\). While the white-box model assumes that some of the kinetic parameters \(\mathbf{\alpha}\) are unknown or need calibration, the gray-box model suggests that part of the kinetic expressions have no known functional form and therefore replaces them with neural network approximations. It is natural to also construct a mixed version of the hybrid model that contains both unknown ("white-box") kinetic parameters and unknown ("black-box") kinetic expressions, resulting in an overall gray-box model. Though the white-box model superficially resembles typical parameter-fitting problems, due to the presence of the inner optimization step in its evaluation, traditional fitting approaches like general linear least-squares cannot be well-adapted into our framework. Instead, a gradient-based fitting approach will be employed using a differentiable convex optimization layer as described in section 2.3.2. We remind the reader here that, since the original biophysical model includes two different approaches for computing fluxes (see eq. (7)), we also make our white- or gray-box frameworks in two versions: that is, we use kinetic-version models to learn on the dataset generated from the kinetic approach, and stoichiometric-version models on the dataset that came from the stoichiometric approach. #### 2.3.2 Computation of Gradients in Convex Optimization For this section in particular, we need to define some terms. The _model_ refers to the differentiable program used to make predictions: this includes RHS evaluations (white-, gray-, or black-box), perhaps necessitating an embedded convex optimization program eq. (5) or eq. (6) (_ECOP_); as well as the use of these RHS in numerical integration steps. The _inputs_ for this ECOP include both \[\begin{array}{ccc}\text{constants}&\mathbf{S}_{I},\mathbf{S}_{E},\text{ and }\mathbf{B}_{ir};&\text{and}\\ \text{outputs from upstream modules}&\hat{\mathbf{v}}_{I}&\text{(function evaluations)}. 
\end{array} \tag{11}\] All of these will be considered constant for the purpose of solving the ECOP for each call to the RHS. _Parameters_ here refer to those quantities which could be modified by our outer training loop, including both kinetic parameters \(\boldsymbol{\alpha}\) of the kinetic equations eq. (2) (when we perform white box parameter estimation or gray box parameter estimation); and neural network parameters \(\boldsymbol{\theta}\), i.e. trainable weights and biases (when we train gray box networks to recover unknown functional dependencies): \[\mathbf{\alpha}\quad\text{ and }\quad\mathbf{\theta}. \tag{12}\] Figure 1: **White, gray, and black inner architectures.** Operations are boxed, data or predictions are unboxed, and notable named intermediates are labeled on edges. Color and pattern are used to distinguish between model pathways that are distinct between the gray-, white-, and black-box approaches, or common to all three. We will divide these into trainable and untrainable (fixed) parameters depending on the particular experiment. The _outputs_ of the ECOP are the reported converged values of the _variables_ the problem solves for, including both \[\begin{array}{c}\text{the fluxes}\quad\mathbf{v}_{E}\text{ and }\mathbf{v}_{I}; \quad\text{ and, possibly}\\ \text{auxiliary variables (described below)}\quad\quad\quad\mathbf{r}.\end{array} \tag{13}\] For the purposes of training, we would like our loss function (see eq. (17), described for particular experiments in later sections) to be differentiable with respect to all of the trainable parameters eq. (12). This requires that _the model predictions_ be differentiable, and therefore each step in the model, including the ECOP, to be differentiable with respect to the same. To enable this in a gradient-based computing framework such as PyTorch, we turn to the package cvxpylayers, which was developed with such problems in mind [31]. This package itself uses cvxpy (a Python-embedded modeling language for convex optimization problems) [32, 33, 34] and diffcp (a Python package for computing the derivative of a cone program, which is a special case of convex programming) [35, 36, 37]. In order for these packages to correctly evaluate the gradient of the outputs eq. (13) of the ECOP with respect to all of its inputs eq. (11), certain structural characteristics must hold. Specifically, the problem needs to be rewritten conforming with the rules of Disciplined Convex Programming (DCP) and of Disciplined Parametrized Programming (DPP). DCP is a system for constructing convex programs that combines common convex functions (e.g. \(x^{2},|x|\)) with composition and combination rules (e.g. \(f\circ g\) is convex if \(f\) is convex and nondecreasing and \(g\) is convex; a nonnegative linear combination of convex functions is still convex). If these rules are followed, the library can automatically determine whether the full problem is indeed convex. DPP is a subset of DCP, which further requires that all expressions of the ECOP are affine with respect to the ECOP inputs eq. (11). It has been proved [34] that a DPP-supportable convex program can be invertibly transformed into a cone program (and its derivative information can be obtained from diffcp). Therefore, DPP is mainly used in input-dependent convex programming, which allows the entire program to be differentiable without actually unrolling and back-propagating through the optimization loop. Because DPP requires that the expressions in the ECOP be affine w.r.t. the problem inputs eq.
(11), the product of two inputs is not an acceptable expression. This means e.g. that \((\mathbf{S}_{I}\cdot\hat{\mathbf{v}}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E})^{T}\cdot (\mathbf{S}_{I}\cdot\hat{\mathbf{v}}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E})\) in the objective function of eq. (5) has to be reformulated. This is resolved by the addition of another variable \(\mathbf{r}\) in eq. (14) and eq. (15), and then equality constraints on this additional variable, such that the newly defined problems are equivalent to the old. Further, in the kinetic case, to avoid the direct input product \(\mathbf{S}_{I}\cdot\hat{\mathbf{v}}_{I}\), we need to include \(\mathbf{v}_{I}\) as an optimization variable, but then upgrade what was a pre-optimization expression \(\mathbf{v}_{I}=\hat{\mathbf{v}}_{I}\) from eq. (5) to an actual equality constraint in eq. (14). In summary, for the kinetic approach, we rewrite eq. (5) as \[\begin{array}{c}\min_{\mathbf{r},\mathbf{v}_{I},\mathbf{v}_{E}}||\mathbf{r}||_{2}^{2}\\ \text{s.t.}\ \mathbf{B}_{ir}\cdot\mathbf{v}_{E}\geq 0\\ \mathbf{r}=\mathbf{S}_{I}\cdot\hat{\mathbf{v}}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E}\\ \mathbf{v}_{I}-\hat{\mathbf{v}}_{I}=0\end{array} \tag{14}\] and for the stoichiometric approach, we rewrite eq. (6) as \[\begin{array}{c}\min_{\mathbf{r},\mathbf{v}_{I},\mathbf{v}_{E}}||\mathbf{r}||_{2}^{2}\\ \text{s.t.}\ \mathbf{B}_{ir}\cdot\mathbf{v}_{E}\geq 0\\ \mathbf{r}=\mathbf{v}_{I}-\hat{\mathbf{v}}_{I}\\ \mathbf{S}_{I}\cdot\mathbf{v}_{I}+\mathbf{S}_{E}\cdot\mathbf{v}_{E}=0\end{array} \tag{15}\] with inputs eq. (11). With these changes, the two problems are DPP-compliant; so, we are able to evaluate derivatives of the problem outputs eq. (13) (in particular, the argmins \(\mathbf{v}_{I}\) and \(\mathbf{v}_{E}\)) with respect to the problem inputs eq. (11) and also evaluate vector-Jacobian products as needed in a larger PyTorch backpropagation to eventually get loss gradients w.r.t. parameters eq. (12). ### Auto-regressive Loss For supervised learning of the dynamics underlying time series data, one approach is to use the ground-truth value from the prior time step as the input for the current time step, which leads to the teacher-forcing method (also known as professor-forcing in [38]). Alternatively, we could use the model prediction from the prior time step as input, which is called "autoregressive training". In fact, if contiguous data trajectories are divided into episodes of \(M\) steps each, and \(M\) reduced to \(2\), we see that the first is in fact a special case of the second. So, in general, we use an autoregressive model structure (however, see also eq. (10)), which means the forward pass of the model can be written as \[\tilde{\mathbf{C}}(t=t_{i+1})=\mathrm{Model}(\tilde{\mathbf{C}}(t=t_{i})),\tilde{\mathbf{C }}(t=t_{0})=\mathbf{C}(t=t_{0}), \tag{16}\] where \(\mathrm{Model}\) can represent the integration of the black-, white- or gray-box RHS. The mean-squared error (MSE) loss between the two time series can therefore be computed as \[\mathrm{MSE}(\{\tilde{\mathbf{C}}(t=t_{i})\},\{\mathbf{C}(t=t_{i})\})=\frac{1}{KL} \sum_{j=1}^{L}||\tilde{\mathbf{C}}(t=t_{j})-\mathbf{C}(t=t_{j})||_{2}^{2}, \tag{17}\] where \(L+1\) is the length of the dataset \(\{\mathbf{C}(t=t_{i}):i=0,1,2,\cdots,L\}\) and \(K=14\) is the dimension of the state vector, as we have shown in table 1. ## 3 Results In this section, we will describe each of the several computational experiments tabulated in table 2 which we performed in this paper.
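As a minimal illustration of the differentiable layer described in section 2.3.2, the DPP-compliant kinetic problem eq. (14) can be wrapped with cvxpylayers as sketched below, so that gradients with respect to the preliminary fluxes \(\hat{\mathbf{v}}_{I}\) (and hence the upstream kinetic or network parameters) can be back-propagated; as before, the matrices are random placeholders and the names are our own:

```python
import cvxpy as cp
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer

M, I, E, E_ir = 24, 14, 21, 14
rng = np.random.default_rng(0)
S_I, S_E = rng.standard_normal((M, I)), rng.standard_normal((M, E))  # placeholders
B_ir = np.eye(E)[:E_ir]

v_hat = cp.Parameter(I)                     # ECOP input: preliminary kinetic fluxes
v_I, v_E, r = cp.Variable(I), cp.Variable(E), cp.Variable(M)
problem = cp.Problem(
    cp.Minimize(cp.sum_squares(r)),
    [B_ir @ v_E >= 0,                       # irreversibility
     r == S_I @ v_hat + S_E @ v_E,          # auxiliary variable of eq. (14)
     v_I == v_hat],                         # kinetic expressions enforced exactly
)
layer = CvxpyLayer(problem, parameters=[v_hat], variables=[v_I, v_E])

v_hat_t = torch.randn(I, requires_grad=True)
v_I_sol, v_E_sol = layer(v_hat_t)           # differentiable argmin
v_E_sol.sum().backward()                    # gradients flow back to v_hat_t
print(v_hat_t.grad.shape)
```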
We begin by describing the data generation procedure that was used for each of the parameter-identification and neural-network training experiments that follow. We include an analysis of the impact of the constraints in the inner optimization problem, considering events when constraints switch to (resp. from) active (resp. inactive). We then begin our actual training experiments with a black-box example. All of our training experiments include both kinetic and stoichiometric variants. Subsequently, we perform white-box identification, in both two-free-parameter and five-free-parameter variants. Finally, we will perform a mixture of these two tasks with gray-box modeling: first, we will use a neural network to replace one of the kinetic expressions in \(\mathbf{f}_{\text{kin}}\); then we will repeat this, also allowing one of the physical parameters \(\mathbf{\alpha}\) to be trainable. ### Data Generation We begin by simulating short trajectories for a variety of initial conditions, and collecting these flows as a dataset for a single set of parameter values. We then implement the neural network model in PyTorch exactly as described in section 2.2 and train it to match these flows. The dataset consists of \(N_{\mathrm{run}}\) transients of the full model eq. (1), or equivalently, eq. (8), from initial conditions (ICs) taken as Gaussian perturbations around means; the means themselves are sampled uniformly (in time) at random along a central nominal trajectory (NT). The per-variable standard deviations are proportional to the extent of variation of that variable in the NT. That is, the sample of ICs is given by \[\left\{\mathbf{C}^{(i)}(t=0)=\left[\begin{array}{ccc}C_{1}^{(i)}(t=0)&\sim&\mathcal {N}(\bar{C}_{1}(t=t_{i}),\sigma_{1})\\ &\vdots\\ C_{K}^{(i)}(t=0)&\sim&\mathcal{N}(\bar{C}_{K}(t=t_{i}),\sigma_{K})\end{array} \right]\left|i=1,2,\cdots,N_{\mathrm{run}}\right\}, \tag{18}\] where the nominal trajectory \(\mathbf{\bar{C}}(t)=(\bar{C}_{1}(t),\bar{C}_{2}(t),\cdots,\bar{C}_{K}(t))^{T}\) starts from a particular set of initial conditions that were measured during a laboratory experiment. The feeding events were implemented as state discontinuities. This nominal trajectory (in both its "kinetic" and its "stoichiometric" integrations) appears in fig. 2; the "stoichiometric" NT stops just before the state goes negative (and thus before any feeding events have occurred). Figure 2: Nominal trajectories (kinetic and stoichiometric) overlaying sampled short-time flows from perturbed initial conditions. Trajectories for only a few of the \(K\) variables are shown. Trajectories for all variables are shown in fig. 15. Color is used to distinguish between different curves described in the legend. \begin{table} \begin{tabular}{c c} **Experiments** & **Section** \\ \hline Data Generation & §3.1 \\ Black-box & §3.2 \\ White-box (2 Parameters Unknown) & §3.3.1 \\ White-box (5 Parameters Unknown) & §3.3.2 \\ Gray-box (1 Expression Unknown) & §3.4.1 \\ Gray-box (1 Expression Unknown + 1 Parameter Unknown) & §3.4.2 \\ \hline \end{tabular} \end{table} Table 2: Summaries of computational experiments in this paper. Each initial condition from eq. (18) is then accurately simulated (with an order \(8(5,3)\) explicit Runge-Kutta method [39] with absolute tolerance \(10^{-8}\) and relative tolerance \(10^{-7}\)) to a time horizon generally significantly shorter than the entire nominal trajectory (circular dots in fig. 4 and fig. 5), giving us data in the form of several trajectory "windows", constituting one "episode". #### 3.1.1 Detecting transitions in constraint activity during simulation The inner optimization described in section 2.1 includes bounds on some of the fluxes computed (lower bounds indicating irreversible reactions).
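As a small piece of bookkeeping used in the constraint-activity analysis below (and in fig. 3), one can flag at every stored timestep which irreversibility bounds are (numerically) active at the solved optimum, and where that activity pattern switches; the array names and the tolerance here are our own:

```python
import numpy as np

def active_bounds(v_E_traj, B_ir, tol=1e-8):
    """Boolean array (timesteps x irreversible fluxes): True where the bound v >= 0 is active."""
    return (v_E_traj @ B_ir.T) <= tol

def activity_switches(active):
    """Timestep indices at which any bound switches between active and inactive."""
    changed = np.any(active[1:] != active[:-1], axis=1)
    return np.nonzero(changed)[0] + 1
```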
Depending on the current state of the simulated variables \(\mathbf{C}\), the optimal _unconstrained_ fluxes may not lie inside the bounded domain. The _constrained_ optimum will then instead lie on constraint boundaries (or even possibly intersections of them). At the onset, or the end, of such events, the trajectory of \(\mathbf{v}\) will develop sharp corners. To explore the impact of this phenomenon on the system dynamics we report, at each timestep of a simulation, which flux bounds are active and which are not. For an event-driven version of such simulations we refer the reader to the package in [30], in which several failure modes are considered (beyond the ones arising in our work), including both an infeasible inner optimization problem, and a problem with multiple solutions (leading to a set-valued differential equation). In general, events can be either "time events" or "state events"; the first require only accurately stopping the integration at a particular time. State events, on the other hand, occur when some condition(s) of the continuous state become satisfied. In some cases, this can be detected by locating zeros of some interpolating polynomial(s) [30, 40]; further difficulty arises when the event condition cannot be described by the root of a continuous function (e.g., the case of fig. 3, in which a Boolean quantity changes at the event). We intend to explore the proper analysis of such event detection in future work (as well as the integration between events, possibly modifying the RHS evaluation between each pair of events to satisfy the active bounds by construction). We employ instead a less sophisticated, visualization-based method: we depict, in fig. 3, such changes in constraint activity status alongside the first (3c, 3a) and second (3d, 3b) time derivatives of two key simulated variables, in a temporally aligned fashion. This is shown along a short run of the stoichiometric model; a constraint turning active is marked by a short green tick, and its turning inactive by a brief red tick. Notice the jumps in the second derivative, and the sharp corners in the first derivative, of the concentration evolution. During this run we observe that three fluxes (7, 32 and 34) had encounters with their corresponding lower bounds. More specifically, around \(t^{*}\approx 0.5\), when the bound for reaction 34 becomes _persistently inactive_ (thus the corresponding flux moves well away from zero), we see sharp downward discontinuities _in the second time derivative_ of the evolutions of both cysteine and glycine. Note that reaction 34 is the (irreversible) breakdown of NADH (see the full stoichiometry matrix in appendix F, or the relevant parts of the reaction network diagram in [10]); and that, at this time, its flux continuously changes from 0 to positive. We therefore expect that \(\mathrm{d}v_{34}/\mathrm{d}t\) may experience a discontinuity at \(t^{*}\). Furthermore, one of the (reversible) reactions that makes GLY takes NADH as an input. If that reaction rate is positive at that given moment, one of its inputs suddenly becomes less available.
Therefore, because \(\mathrm{d}\,\mathrm{GLY}/\mathrm{d}t\sim-v_{34}\), and \(\mathrm{d}v_{34}/\mathrm{d}t\) is discontinuous, we can expect that \(\mathrm{d}(\mathrm{d}\,\mathrm{GLY}/\mathrm{d}t)/\mathrm{d}t\) will also be discontinuous, and that is clearly visible in fig. 3b. Such a rationalization can be repeated for CYS, which also relies on NADH as an input, and which also experiences a discontinuity in its second derivative (fig. 3d). Note that CYS has a much larger discontinuity associated with the activation of the lower bound on flux 32. Note also that GLY has a second discontinuity occurring later (around \(t\approx 1.4\)); this is related to flux 7, which however involves different pathways. Figure 3: **Activity of bound constraints along a sample run for the stoichiometric case.** Observe the discontinuities arising in the second derivative (3b and 3d) of the concentration evolution. Time derivatives for plotting were estimated by local forward finite differences (FD). Color is used to distinguish between different curves described in the legend. ### Black-Box Neural Network Identification To demonstrate system identification with no assumed prior knowledge of the system mechanisms, we performed black-box RHS learning, in which we represent the entire system of ODEs as an end-to-end neural network. Since there are two approaches in evaluating eq. (8) as mentioned in section 2.1, we also performed two experiments: one of them used data generated from the ground-truth kinetic system (sampling every \(\Delta t=0.1\) hours over a \(t_{\max}=1.2\)-hour horizon, producing \(13\) steps for each of the \(768\) data trajectories), and the other used data generated from the ground-truth stoichiometric one (with \(\Delta t=0.1\), \(t_{\max}=1.2\), \(13\) steps, and \(768\) data trajectories). In both cases, the black-box ODE was trained by taking steps of fixed size \(0.01\) between the data samples with a Runge-Kutta 4 integrator (the black-box identification "does not know" about discontinuities in the model - it smoothly interpolates between data points in time). As can be seen in fig. 4 and fig. 5, our black-box neural ODE was able to fit the data trajectories quite tightly. This validates the underlying approach, and suggests that such ODEs with inner optimization steps can be successfully approximated as (possibly slightly "smoothened") closed-form functions. Figure 4: Black-box training results (kinetic). See also fig. 16 for more detailed results. Color is used to distinguish between different curves described in the legend. ### White-Box Neural Network: Parameter Estimation To demonstrate the full-structure physical-parameter estimation setting that we term "white-box" learning, we tried to recover (a) two or (b) five of the nominal parameter values. Specifically, we performed simulations at the nominal parameter values, collected the transient data, considered forty-three (resp. forty) of them known, and then used a gradient-based training method to estimate the values of the remaining two (resp. five) from the data. Our initial guess (a perturbation of the truth) is marked in fig. 7. For all four of these numerical experiments, the dataset consisted of 10 short single-Euler-step "trajectories" each 0.05 hours in length (Runge-Kutta integration gave comparable results, not shown); the network ansatz was also Euler with a step size of \(0.05\). Training was 4000 epochs of RMSprop with 1 batch per epoch.
These white-box experiments demonstrate the use of the algorithms in [31] to carry out differentiation through the inner optimization problem of evaluating the equation right-hand-side, discussed in section 2.1; this enables gradient-based learning for this experiment, and serves as an initial validation of the algorithms before the gray-box methods of section 3.4 below. #### 3.3.1 Known Model, Two Unknown Parameters In our first such white-box learning experiment, we trained with only two unknown parameters from table 4; we find (fig. 6) that we can recover the two parameter values reasonably well. A motivation for this initial experiment is to help visualize the gradient landscape in fig. 7. We see in fig. 8 that the induced gradient dynamics of the learning problem are highly stiff, making adaptive training methods such as Adam [41] an absolute necessity. The training exhibits a two-stage descent, consisting of (a) first, a fast approach to a deep trough in the parameter space, and then (b) a slower motion within the trough, with some oscillations induced by the finite learning step size. Furthermore, in the Stoichiometric (Type 2) case (fig. 7b and fig. 8b), we observe that the final gradient is so shallow that even Adam takes prohibitively long to move any appreciable distance within the loss trough. Note that, although the true \(\mathbf{\alpha}\) values indeed mark a minimum for the optimization problem posed, as seen in fig. 7b, this minimum is extremely shallow, making the discovery of the true value for \(\mathbf{\alpha}_{2}\) imperfect - for all practical purposes, the entire "bottom of the trough" leads to a good fit. This is an instance of what is termed "model sloppiness" [42, 43]; along the bottom of the trough the loss function posed is not strongly sensitive to the parameters, leading to parameter nonidentifiability. Figure 5: Black-box training results (stoichiometric). See also fig. 17 for more detailed results. Color is used to distinguish between different curves described in the legend. #### 3.3.2 Known Model, Five Unknown Parameters Next, we repeated the parameter estimation experiment of the previous section but now with five, rather than two, unknown \(\mathbf{\alpha}\) values. We find that the numerical values are again recovered reasonably well (fig. 9), and with loss dynamics (fig. 10) similar to the two-parameter case. Figure 8: **Convergence of the training to the final parameter estimates for the white-box, two-parameter case. Kinetic (a) vs stoichiometric (b) implementation from section 2.1. See also fig. 18.** Figure 6: Parameter comparison for the white-box, two-parameter case. Figure 7: **Gradient landscape for the white-box, two-parameter case. Kinetic (a) vs stoichiometric (b) implementation from section 2.1. The stiff gradient vector field leads to some degree of parameter nonidentifiability.** ### Partially Known Model: Gray-box Identification For the work in this section, we assumed the expression of \(\hat{v}_{I,2}\) in \(\mathbf{f}_{\text{kin}}\) was not known: instead, we only knew that it is a function of GLC and LAC, and we replaced it by a 2-8-8-1 multi-layer perceptron (MLP) with trainable weights and biases. We embedded this MLP into our gray-box computation graph visualized in fig. 1 to make predictions for loss evaluation. For all four of these experiments, the dataset consisted of 800 single-Euler-step "trajectories", each 0.05 hours in length. The network ansatz was also Euler with a step size of 0.05.
Training was 500 epochs of RMSProp with 20 batches per epoch. #### 3.4.1 Partially Known Model, All Parameters Known For our first gray-box experiments we further assumed that the values of all kinetic parameters used in expressions beyond that for \(\hat{v}_{I,2}\) are correct and do not need to be calibrated. We performed the experiment twice: once with kinetic-based data (data from type-1 simulations) and with the white portion of our gray-box also based on the kinetic formulation; and once with stoichiometric data and the stoichiometric formulation, respectively. For each experiment, we find (fig. 11a and fig. 12a, resp.) that the learned flux functions are largely reproduced correctly, but there are discrepancies (relatively flat network predictions) over some parts of their domain. The greatest percent discrepancy (given in fig. 11b and fig. 12b, scaled by the spread of true values across the training data) arises, as one might expect, at locations where the flux is approximately zero. The most obvious explanation for this error is that this region (small GLC) is not frequently visited by the ground-truth dynamics used for training data, especially in the stoichiometric case. Finally, errors in small fluxes have less egregious consequences in what we actually minimize in the network: the prediction error for the concentration evolution. Figure 10: **Convergence of the training to the final parameter estimates for the white-box, five-parameter case. Kinetic (a) vs stoichiometric (b) implementation from section 2.1. See also fig. 19.** Figure 9: Parameter comparison for the white-box, five-parameter case. #### 3.4.2 Partially Known Model, Partially Known Parameters Finally, we combine the physical-parameter (white-box) fitting of section 3.3 with the gray-box fitting of section 3.4.1 to produce a model in which we train both neural and physical components jointly. For this experiment, we still assume that the expression for \(\hat{v}_{I,2}\) in \(\mathbf{f_{\text{kin}}}\) is unknown; but we additionally assume that the value of one kinetic parameter, \(\alpha_{1}\), also needs to be calibrated. As before, we also studied kinetic and stoichiometric versions of the experiment. As we can see in fig. 13, fig. 14 and table 4, the recovery of the shape of the flux function has similar characteristics to section 3.4.1; yet we are also able to rediscover the parameter value accurately, demonstrating the method's potential in joint learning with such mixed physical prior information. Figure 11: **Comparison of fluxes (ground-truth vs kinetic gray-box model). Note that the (GLC, LAC) points visited in training are scattered on the surfaces (11a) or on the base plane (11b). 11a: ground-truth and neural net approximations of the fluxes given the inputs of the neural net (GLC and LAC). 11b: normalized prediction errors (fraction of max-min of the true function, across the data). Note the relative rotation (for visual clarity) between (11a) and (11b).** Figure 12: **Comparison of fluxes (ground-truth vs stoichiometric gray-box model). 12a: ground-truth and neural net approximations of the fluxes given the neural net inputs, GLC and LAC. 12b: normalized prediction errors (fraction of max-min of the true function, across the data). Note that the (GLC, LAC) points visited in training are scattered on the surfaces (12a) or on the base plane (12b).
Note again the relative rotation (for visual clarity) between (12a) and (12b).** ## 4 Conclusions and Future Directions In this paper, we revisited a mechanistic model of the biochemical reactions arising in Chinese Hamster Ovary (CHO) cell cultures. When simulating the dynamics of the model, evaluation of the temporal derivative of this system of equations practically necessitates the solution of a constrained convex problem at each time step. This "inner optimization" can lead to: (a) discontinuities in the second time derivatives of the evolving concentrations (that is, the solution itself is \(C^{1}\), as shown in fig. 3); and (b) difficulties in computing the system Jacobian, or sensitivity gradients of evolving states with respect to system parameters. We then demonstrated how to incorporate such mechanistic physical knowledge of the model along with data-driven approaches, so as to identify or calibrate this type of system. Our hybrid model can be black-, gray-, or white-box, depending on the portion of physical laws one is confident about _a priori_. Importantly, we implemented a modification of the traditional neural network/ODE-net architecture in our white- and gray-box models based on [31]: this approach can encode the differentiable convex optimization layer within a numerical integrator (which can be considered as an unrolled recurrent neural network) so as to overcome the obstacles of computing model gradients. The potential of this type of data-driven model to identify metabolic network dynamics from data, and to perform regression tasks, was illustrated. Figure 13: **Comparison of fluxes (kinetic ground-truth vs gray-box model). 13a:** ground-truth and neural net approximations of the fluxes as functions of the learned kinetic expression inputs, GLC and LAC. 13b: normalized (as in figs. 11b and 12b) prediction errors. Black data points plotted as in figs. 11 and 12. 13c: ground-truth value and the recovered value of the kinetic parameter \(\alpha_{1}\). Figure 14: **Comparison of fluxes (stoichiometric ground-truth vs gray-box model). 14a:** ground-truth and neural net approximations of the fluxes as functions of the learned kinetic expression inputs GLC and LAC. 14b: normalized (as in figs. 11b and 12b) prediction errors. Black data points plotted as in figs. 11 and 12. 14c: ground-truth value and the recovered value of the kinetic parameter \(\alpha_{1}\). The approaches and model architectures that we designed and implemented in this paper should be of broad applicability in fields of engineering where the right-hand-side of the evolution equations intrinsically involves an optimization problem; robotics control, or differentiable Model Predictive Control [44], come to mind. In the metabolic engineering domain, such algorithms can be usefully combined with downstream optimization problems, for the design of experiments, the optimization of feed media composition, or the design of optimal feeding/harvesting policies in bioreactor operation. **Acknowledgements:** This work was partially supported by AMBIC. The work of T.B., T.C. and I.G.K. was also partially supported by an AFOSR MURI. The work of C. M. was partially enabled by the DOE Center for Advanced Bioenergy and Bioproducts Science (U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research under Award Number DE-SC0018420) and by the DOE Office of Science, Office of Biological and Environmental Research under Award Number DE-SC0018260.
2302.07096
Does diffusion mechanism favor the emergent scenario of the universe?
In the present work, the flat FLRW Universe has been modelled with cosmic matter in the form of diffusive barotropic fluid. The diffusive fluid undergoes dissipation due to diffusion mechanism in the form of cosmological scalar field. From the perspective of non-equilibrium thermodynamics, the evolution equations of the universe have been formulated. By a suitable choice of the cosmological scalar field, emergent scenario of the universe has been obtained.
Subhayan Maity, Subenoy Chakraborty
2023-02-13T05:09:30Z
http://arxiv.org/abs/2302.07096v1
# Does diffusion mechanism favour the emergent scenario of the universe? ###### Abstract In the present work, the flat FLRW Universe has been modelled with cosmic matter in the form of diffusive barotropic fluid. The diffusive fluid undergoes dissipation due to diffusion mechanism in the form of cosmological scalar field \(\phi\). From the perspective of non-equilibrium thermodynamics, the evolution equations of the universe have been formulated. By a suitable choice of the cosmological scalar field, emergent scenario of the universe has been obtained. Diffusion can be considered as one of the basic macroscopic forces in nature. Several physical and biological processes are caused due to diffusion. Some well known examples of dynamical processes (in physics) are heat conduction, Brownian motion and various transport phenomena [1; 2; 3; 4] in biological systems where diffusion is the driving mechanism. The random collisions between the particles of the system and those of the background is caused due to diffusion mechanism at the microscopic level. On the other hand, random effects are averaged at the macroscopic scale and diffusion is characterised by heat equation or Fokker-Planck equation. Although there is a wide variety of phenomena having diffusive behaviour, still there does not exist a consistent diffusion theory in general relativity. However from cosmological point of view, it is speculated that diffusion may have a basic role in the evolution dynamics of the large scale structure formation of the universe. Further, in standard cosmology, galaxies are assumed as point particles of a fluid, undergoing velocity diffusion [1; 2; 3; 4; 5]. To consider diffusion in general relativity, one has to consider macroscopic continuum description provided by the Fokker-planck equation. So, in diffusive process, the energy - momentum tensor is not covariantly conserved (i.e. \(\nabla_{\mu}T^{\mu\nu}\neq 0\)), rather it satisfies the Fokker - Planck equation, namely [1; 2; 3; 4; 5] \[\nabla_{\mu}T^{\mu\nu}=3\sigma J^{\nu}, \tag{1}\] where \(\sigma(>0)\) is the diffusion constant and \(J^{\nu}\), the current density of the matter satisfies \[\nabla_{\mu}J^{\mu}=0. \tag{2}\] Thus one can not have usual Einstein equations i.e. \(R_{\mu\nu}-\dfrac{1}{2}Rg_{\mu\nu}=T_{\mu\nu}\) for diffusive process, due to Bianchi identity. The simplest modification of the Einstein equation is to introduce two interacting matter components of which one is the usual diffusive fluid having conservation (non-conservation) equation given by equation (1) while the simplest choice for the other component (in analogy with cosmological constant) is a cosmological scalar field. so the modified Einstein field equations take the form \[R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\phi g_{\mu\nu}=T_{\mu\nu}, \tag{3}\] where the scalar field \(\phi\) has the evolution equation [1; 2; 3; 4] (dimension factor in \(\phi\) has been chosen to be unity for convenience.) \[\nabla_{\mu}\phi=3\sigma J_{\mu}, \tag{4}\] and \(T_{\mu\nu}\) satisfies the above Fokker - Planck equation (given by equation (1)). Here \(3\sigma\) measures the energy transferred from the scalar field to the matter per unit time due to diffusion. Note that in vacuum or in the absence of diffusion, the above modified Einstein field equations (3) become Einstein equations with a cosmological constant while in general equation (3) may be termed as Einstein equations with variable 'cosmological constant'. 
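For clarity, the mutual consistency of equations (1), (3) and (4) follows in one line from the contracted Bianchi identity \(\nabla^{\mu}\left(R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\right)=0\): taking the divergence of the modified field equations (3) gives \[\nabla_{\nu}\phi=\nabla^{\mu}T_{\mu\nu}=3\sigma J_{\nu},\] which is precisely the evolution equation (4) of the scalar field sourced by the diffusion current.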
The above diffusion process is usually termed a kinetic model, with the microscopic velocities of the fluid particles undergoing diffusion. Here the diffusion mechanism takes place on the tangent bundle of the space-time and, as a result, Lorentz invariance of the space-time is preserved. Now choosing the cosmic fluid as a perfect fluid, one has the energy-momentum tensor \[T_{\mu\nu}=\rho u_{\mu}u_{\nu}+p(g_{\mu\nu}+u_{\mu}u_{\nu}) \tag{5}\] with current density \(J^{\mu}=nu^{\mu}\). Here \(n\) is the particle number density of the fluid, \(u^{\mu}\) is the four-velocity of the fluid, and \(\rho\) and \(p\) are the energy density and thermodynamic pressure of the fluid respectively. Now projecting equation (1) along the fluid 4-velocity \(u^{\mu}\) and on the hypersurface orthogonal to \(u^{\mu}\), one gets [1; 2; 3; 4] \[\nabla_{\mu}(\rho u^{\mu})+p\nabla_{\mu}u^{\mu}=3\sigma n \tag{6}\] and \[(p+\rho)u^{\mu}\nabla_{\mu}u^{\nu}+(u^{\mu}u^{\nu}+g^{\mu\nu})\nabla_{\mu}p=0. \tag{7}\] Here equation (7), the Euler equation, does not change under the diffusion process, as the diffusion force acts along the matter flow. **It is to be noted that there are several diffusion models in the literature: for the unification of dark energy and dark matter from diffusive cosmology see ref. [6]; ref. [7] deals with the transition from bouncing hyper-inflation to \(\Lambda_{CDM}\) from diffusive scalar fields; while unified DE-DM with diffusive interactions and interacting diffusive unified dark energy and dark matter from scalar fields can be found in refs. [8] and [9] respectively. In particular, a Lagrangian formulation of the diffusion mechanism can be found in ref. [6].** In the background of the homogeneous and isotropic flat FLRW model, the modified Friedmann equations with diffusion dynamics take the form \[3H^{2}=\rho+\phi \tag{8}\] and \[2\dot{H}=-(\rho+p). \tag{9}\] Now equation (2) for the present geometry simplifies to \[na^{3}(t)=\mbox{constant, i.e.}\;n=n_{0}a^{-3}. \tag{10}\] Hence the modified matter conservation equation (1) for the matter field (5) takes the form \[\dot{\rho}+3H(p+\rho)=\sigma n_{0}a^{-3}=\sigma_{0}a^{-3}, \tag{11}\] which on integration yields \[\rho=a^{-3(1+\omega)}\left[\rho_{0}+\int\limits_{t_{0}}^{t}\sigma_{0}a^{3\omega}dt\right]. \tag{12}\] Here \(\omega=\dfrac{p}{\rho}\) is the constant equation of state parameter of the fluid, \(\rho_{0}\) is the energy density at the reference epoch \(t=t_{0}\), and \(a(t_{0})=1\), \(n(t_{0})=n_{0}\) are assumed. Now eliminating \(\rho\) between equations (8) and (9), one gets the cosmic evolution equation as \[2\dot{H}+3(1+\omega)H^{2}=\phi(1+\omega). \tag{13}\] On the other hand, the above modified Friedmann equations (i.e. equations (8) and (9)) for the diffusive mechanism can be rewritten as \[3H^{2}=\rho_{d},\;\;2\dot{H}=-(\rho_{d}+p_{d}+\pi_{d}), \tag{14}\] while the conservation equation (11) becomes \[\dot{\rho_{d}}+3H(\rho_{d}+p_{d}+\pi_{d})=0, \tag{15}\] with \(\rho_{d}=\rho+\phi\), \(p_{d}=p\) and \(\pi_{d}=-\phi\). **Thus the interacting two-fluid system in the diffusion mechanism [10] is equivalent to a single dissipative fluid in Einstein gravity.** Here the dissipation is chosen in the form of a bulk viscous pressure \(\pi_{d}\). Further, one may consider the above dissipative pressure (i.e. bulk viscous pressure) as arising from non-equilibrium thermodynamics with a particle creation mechanism.
In fact, for adiabatic thermodynamic process, the dissipative pressure \(\pi_{d}\) is related linearly to the particle creation rate \(\Gamma_{d}\) as [11; 12] \[\pi_{d}=-\frac{\Gamma_{d}}{3H}(\rho_{d}+p_{d}). \tag{16}\] Using the 1st friedmann equation in (14) of equivalent Einstein gravity into the above equation (16) with \(p_{d}=p=\omega\rho\) and \(\pi_{d}=-\phi\), the cosmological scalar field is related to the particle creation rate as \[\Gamma_{d}=\frac{3H\phi}{3H^{2}(1+\omega)-\omega\phi}. \tag{17}\] **Hence the present interacting diffusive mechanism [10] with cosmological scalar field can be considered as non-equilibrium thermodynamic description of Einstein gravity with particle creation formalism.** **To overcome the classical singularity of Einstein gravity, cosmologists propose two models namely the bouncing Universe or the emergent Universe. In the present work, for non-singular solution we shall consider the model of emergent scenario as it is very much relevant as pre-inflationary era. An emergent Universe [12; 13; 14; 15; 16; 17; 18] is a modelled Universe with no time like singularity having static Einstein era in the infinite past (i.e. \(t\rightarrow-\infty\)). The present work examines whether emergent scenario is possible or not in the present cosmological scalar field diffusion mechanism. In order to have a cosmological solution one may choose phenomenologically the form of \(\phi\) as** \[\phi=3\alpha H, \tag{18}\] with \(\alpha\), a constant. Using this choice of \(\phi\) in the field equations (8),(9)and (11) one gets \[\sigma_{0}a^{-3}=-3\alpha\dot{H}, \tag{19}\] **which shows that \(\alpha\) and \(\sigma_{0}\) are of same sign (due to \(\dot{H}<0\)).** For this choice of \(\phi\), the solutions of the cosmic evolution equation (13) yield the form of Hubble parameter and scale factor as, (i) For \(\alpha\geq H_{0}\) : \[H=\frac{\alpha}{1+\left(\frac{\alpha}{H_{0}}-1\right)e^{-\frac{3}{2}\alpha(1+ \omega)(t-t_{0})}}\ \, \tag{20}\] \[a=\left[\frac{\left(\frac{\alpha}{H_{0}}-1\right)+e^{\frac{3}{2}\alpha(1+ \omega)(t-t_{0})}}{\left(\frac{\alpha}{H_{0}}-1\right)+1}\right]^{\frac{2}{3( 1+\omega)}} \tag{21}\] 2. For \(0<\alpha<H_{0}\) : \[H=\frac{\alpha}{1-\left(1-\frac{\alpha}{H_{0}}\right)e^{-\frac{3}{2}\alpha(1+ \omega)(t-t_{0})}}\ \,\] (22) \[a=\left[\frac{\left(1-\frac{\alpha}{H_{0}}\right)-e^{\frac{3}{2}\alpha(1+ \omega)(t-t_{0})}}{\left(1-\frac{\alpha}{H_{0}}\right)-1}\right]^{\frac{2}{3( 1+\omega)}}\] (23) and (iii) For \(\alpha<0\) : \[H=\frac{|\alpha|}{1-\left(\frac{|\alpha|}{H_{0}}+1\right)e^{-\frac{3}{2}\alpha( 1+\omega)(t-t_{0})}}\ \,\] (24) \[a=\left[\frac{e^{\frac{3}{2}\alpha(1+\omega)(t-t_{0})}-\left( \frac{|\alpha|}{H_{0}}+1\right)}{1-\left(\frac{|\alpha|}{H_{0}}+1\right)} \right]^{\frac{2}{3(1+\omega)}}.\] (25) Here \(H_{0}\) is the value of \(H\) at reference epoch of time \(t_{0}\). For \(\alpha<0\), the above cosmological solution (25) has a big-bang singularity at the epoch, \[t_{s}=t_{0}-\frac{2}{3(1+\omega)|\alpha|}\ln\left(1+\frac{|\alpha|}{H_{0}} \right). \tag{26}\] **Note that \(\alpha<0\) (i.e. \(\sigma_{0}<0\)) is not physically realistic, so we shall present the above solution for \(\alpha<0\) only for mathematical completeness.** Again for \(0<\alpha<H_{0}\), big-rip singularity exists for the cosmological solution (23) at the epoch, \[t_{s}=t_{0}+\frac{2}{3(1+\omega)\alpha}\ln\left[\left(1-\frac{\alpha}{H_{0}} \right)\right]. \tag{27}\] In the case \(H_{0}<\alpha\), the cosmological solution (21) has no singularity at any real time. 
Clearly, this solution [(20), (21)] yields the Emergent scenario as it follows the following criteria [12] : \[H\to 0\,\ a\rightarrow\left[\frac{\alpha-H_{0}}{\alpha}\right]^{ \frac{2}{3(1+\omega)}}\mbox{when $t\rightarrow-\infty$} \tag{28a}\] \[H\to 0\,\ a\rightarrow\left[\frac{\alpha-H_{0}}{\alpha}\right]^{ \frac{2}{3(1+\omega)}}\mbox{when $t<<t_{0}$ and}\] (28b) \[H\sim\alpha\,\ a\simeq\left[\frac{H_{0}}{\alpha}\right]^{\frac{2}{3(1+ \omega)}}\exp\left[\alpha(t-t_{0})\right]\ \mbox{when $t>>t_{0}$} \tag{28c}\] So evidently the explicit solution for emergent scenario should be in the form (also considering, \(\alpha=H_{0}+\delta\) with \(\delta\geq 0\)) : \[H^{(E)}=\frac{(H_{0}+\delta)H_{0}}{H_{0}+\delta e^{-\frac{3}{2}(H_{0}+\delta)( 1+\omega)(t-t_{0})}} \tag{29a}\] \[\mbox{and $a^{(E)}=\left[\frac{\delta+H_{0}e^{\frac{3}{2}(H_{0}+\delta)( 1+\omega)(t-t_{0})}}{H_{0}+\delta}\right]^{\frac{2}{3(1+\omega)}}$} \tag{29b}\] under the diffusive non-singular scalar field, \[\phi=3(H_{0}+\delta)H. \tag{30}\] The nature of corresponding particle creation rate can be found from equation (17) as, \[\Gamma_{d}=\frac{3H}{1-(1+\omega)(1-\frac{H}{\alpha})}. \tag{31}\] Figure 1: Evolution of different physical parameters namely (a) Scale factor \(a\) (top left), (b) Hubble parameter \(H\) (top right), (c) Cosmological scalar field \(\phi\) (bottom left), (d) Particle creation rate \(\Gamma_{d}\) (bottom right) for \(\omega=-0.5\), \(\alpha=0.9\), \(t_{0}=1\) with three different values of \(H_{0}\) : \(0.5\), \(0.4\), \(0.3\). Further, one can write down the evolution equation (13) as the evolution of Hubble parameter with the scale factor as \[\frac{dH}{da}+\frac{3}{2}(1+\omega)\frac{H}{a}=\frac{3}{2}\alpha(1+\omega)\frac{ 1}{a}, \tag{32}\] which on integration gives \[H=\alpha-\delta(1+z)^{\frac{3}{2}(1+\omega)}, \tag{33}\] where \(z\) is the amount of cosmological red shift \(\left(z=\frac{1}{a}-1\right)\). Now introducing the dimensionless density parameter, \(\Omega=\frac{\rho}{\rho_{c}}\) with \(\rho_{c}=\frac{3H^{2}}{8\pi G},\)the critical density, the above equation can be written as \[\frac{H^{2}}{H_{0}^{2}}=\Omega_{\Lambda_{0}}+\Omega_{M}(1+z)^{3(1+\omega)}+ \Omega_{MP}(1+z)^{3(1+\omega_{MP})} \tag{34}\] where \(\Omega_{\Lambda_{0}}=\left(1+\frac{\delta}{H_{0}}\right)^{2}\), \(\Omega_{M}=\left(\frac{\delta}{H_{0}}\right)^{2}\), \(\Omega_{MP}=2\frac{\delta}{H_{0}}\left(1+\frac{\delta}{H_{0}}\right)\) and \(\omega_{MP}=\left(\frac{\omega-1}{2}\right)\) with \(\Omega_{\Lambda_{0}}+\Omega_{M}+\Omega_{MP}=1\). From equation (34) one can see that as \(z\rightarrow-1\) i.e. \(a\rightarrow\infty\), the present model approaches \(\Lambda_{CDM}\) model. The evolution of the instantaneous equilibrium temperature of a system under non-equilibrium thermodynamic prescription can be written as [11], \[\frac{\dot{T}}{T}+\omega\left(3H-\Gamma_{d}\right)=0. \tag{35}\] In the emergent scenario, one has (integrating equation (35)) \[T=T_{0}(1+z)^{3\omega}e^{\beta\omega(t-t_{0})} \tag{36}\] Figure 2: Evolution of different thermodynamic parameters namely (a) Energy density \(\rho\) (left) and (b) Temperature \(T\) (right) as a functions of time \(t\) and barotropic index of the fluid \(\omega\) for \(H_{0}=0.5,t_{0}=1\) with \(\delta=0.4\). where \(T_{0}\) is the present measured value of temperature (at \(t=t_{0}\)) and \(\beta\) is a constant. So, equations (34) and (36) represent the Hubble parameter and temperature respectively in terms of today's measured value. 
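As a quick numerical sanity check (illustrative only, with arbitrary parameter values), one can integrate the evolution equation (13) with the choice \(\phi=3\alpha H\) and confirm that it reproduces the closed-form Hubble parameter of equation (20) on the non-singular branch \(\alpha>H_{0}\), with \(H\to 0\) in the far past and \(H\to\alpha\) at late times, as required by the emergent criteria (28a)-(28c).

```python
# Illustrative check of the emergent branch: integrate eq. (13) with phi = 3*alpha*H,
# i.e. dH/dt = (3/2)(1+w) H (alpha - H), and compare with the closed form of eq. (20).
# The parameter values below are arbitrary choices satisfying alpha > H0.
import numpy as np

alpha, H0, w, t0 = 0.9, 0.5, -0.5, 1.0

def dH_dt(H):
    return 1.5 * (1.0 + w) * H * (alpha - H)

def H_exact(t):
    return alpha / (1.0 + (alpha / H0 - 1.0) * np.exp(-1.5 * alpha * (1.0 + w) * (t - t0)))

dt = 1e-3
ts = np.arange(-60.0, 20.0, dt)      # start far in the "past", near the static phase
H = H_exact(ts[0])
for _ in ts[:-1]:
    H += dt * dH_dt(H)               # simple forward Euler step

print("numerical   H at t = %.1f: %.6f" % (ts[-1], H))
print("closed-form H at t = %.1f: %.6f" % (ts[-1], H_exact(ts[-1])))
print("late-time limit alpha:", alpha)   # H -> alpha (de Sitter-like phase)
```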
The time evolution of \(a\), \(H\), \(\phi\) and \(\Gamma_{d}\) has been exhibited graphically in figure 1. **Also the variation of thermodynamic parameters namely energy density \(\rho\) and temperature \(T\) with time \((t)\) and with equation of state parameter ( \(\omega\)) of the cosmic fluid have been shown in a \(3d\) plot in figure 2.** ## Discussion The present work is an attempt to examine whether emergent scenario of the Universe is possible under diffusive process. Considering kinetic model of the diffusion process, cosmological scalar field is chosen linearly to the Hubble parameter to obtain emergent scenario of the cosmic evolution **and their variations with respect to time and equation of state parameter have been shown graphically (3d plot) in figure 2.** Different thermodynamic parameters like energy density and temperature also have been determined under emergent scenario. Further it has been established that such scalar field diffusion process corresponds to the particle creation mechanism [11; 12] in the non-equilibrium thermodynamic description. It is interesting to note that, for the non-singular particle creation process, [see equation (31)] the barotropic index of the fluid can be restricted to \(\omega<0\) in the present scenario. **Also for non-singular solution, the cosmological scalar field is chosen phenomenologically as proportional to the Hubble parameter and the proportionality constant is found to be positive.** Finally, this work establishes that the dissipative processes like diffusion, particle creation etc. may correspond to the evolution pattern of the universe as per the present observation. For future works, it may be attempted to find the Lagrangian formulation of such non-equilibrium thermodynamic phenomena to study the microscopic behaviour of the universe. ## Acknowledgements The author SM acknowledges UGC for awarding Research fellowship and SC thanks Science and Engineering Research Board (SERB),India for awarding MATRICS Research grant support (File no. MTR/2017/000407).
2307.02730
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
Fine-grained action analysis in existing action datasets is challenged by insufficient action categories, low granularity, and limited modalities and tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS), collected from the World Figure Skating Championships. MMFS, which supports action recognition and action quality assessment, provides RGB frames, skeletons, and action scores for 11671 clips with 256 categories, including spatial and temporal labels. The key contributions of our dataset fall into three aspects. (1) Independent spatial and temporal categories are proposed for the first time to further explore fine-grained action recognition and quality assessment. (2) MMFS is the first to introduce the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourages more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment.
Sheng-Lan Liu, Yu-Ning Ding, Gang Yan, Si-Fan Zhang, Jin-Rong Zhang, Wen-Yue Chen, Xue-Hai Xu
2023-07-06T02:30:56Z
http://arxiv.org/abs/2307.02730v3
# Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating ###### Abstract The fine-grained action analysis of the existing action datasets is challenged by insufficient action categories, low fine granularities, limited modalities, and tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS) which was collected from the World Figure Skating Championships. MMFS, which possesses action recognition and action quality assessment, captures RGB, skeleton, and is collected the score of actions from 11671 clips with 256 categories including spatial and temporal labels. The key contributions of our dataset fall into three aspects as follows. (1) Independently spatial and temporal categories are first proposed to further explore fine-grained action recognition and quality assessment. (2) MMFS first introduces the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourage more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment. multi-modality and multi-task dataset, fine-grained action recognition, fine-grained action quality assessment. ## I Introduction With the deeper exploration in action recognition, fine-grained human action recognition has long been a question of great interest in a wide range of fields [23][38]. The content of videos with fine-grained human action is composed of different combinations of scenes, tools (fixed or non-fixed), objects (dynamic or static), and persons. In recent years, the motion-centered fine-grained action recognition datasets such as [7][45][29], pay more attention to creating new action categories with the combinations of tools and human actions [34]. Recent developments in fine-grained human action recognition have heightened the need for professional sports. Compared with the existing datasets with different scenes, professional sport is challenging because human action will play an important role in a single scene [7][45]. Meanwhile, the size of our dataset and the number of action categories are untouchable by the combination of human action and non-fixed tools (More details will be elaborated in Sec.2.). Therefore, it is easier to show more details of fine-grained actions with non-fixed tools in a single scene. The challenges of fine-grained human action datasets are mainly derived from 1) Annotation quality. 2) The impact of \(pv\) (pose variation) and \(tv\) (temporal action variation) on \(cl\) (change of label). It's worth noting that \(tv\) is influenced by the number of repeated action units and the speed variation among actions (one or both will be represented in an action sequence). Such impact can be denoted as \(P(cl|pv)\)(or \(P(cl|tv)\)), in which \(P\) indicates the probability of label changing under the condition of \(pv\) or \(tv\). The reader should bear in mind that the fine-grained action is based on small inter-class variance. We can divide the fine-grained action into fine-grained semantics and fine-grained complexity. Given the above, the disadvantages of the existing datasets can be listed as follows: **Fine-grained semantics.** The fine-grained semantics that can be simply described as \(P(cl|pv)\to 1\) and \(P(cl|tv)\to 1\) will lead to small intra-class variance. 
The fine-grained motion-centered action datasets place more emphasis on the quality of action annotation (requires professionalism and expert participation), the number of categories, and temporal fine-grained semantics [13]. Owing to the lack of official document1 or real-time labeling by experts, most datasets (e.g. dance [39], Taichi [37], etc) are weak in labeling, the accuracy and professionalism of labels are limited [17]. Moreover, restricted by fixed tools (e.g. pommel horse in FineGym [7]) or strategic objects (e.g. basketball [42]), the number of fine-grained categories in the existing human action datasets is insufficient (see Tab. I). In fact, the relationship between \(pv\) and \(cl\) tends to be formulated by \(P(cl|pv)\to 1\), which means the larger \(pv\) is, the more the number of categories will be. And this is also what most of the existing datasets adopt to increase the number of fine-grained categories. Yet, \(tv\) (temporal action variation), which also contributes to ensuring categories, quite goes by the board. That is, the condition \(P(cl|tv)\to 1\) is rarely met so that the fine granularity would not increase at the temporal level. Footnote 1: [https://www.face.com/face](https://www.face.com/face) **Modality.** There only exist RGB and flow features for most existing fine-grained human action datasets. It is unfortunate that the skeleton features in FineGym dataset [7], which consists of RGB, flow, and skeleton features simultaneously, is exacted incompletely. Accordingly, the development of Fine-grained skeleton-based models is limited in the field of human action recognition. Taken together, reliable action labels are expected to ensure that the change of label (\(cl\)) impacted by \(tv\) and \(pv\) is accurate. The number of fine-grained actions and the intra-class variance are limited. A small action dataset FSD-10 [21] is proposed for fine-grained action analysis with the above characteristics but without independent spatial/temporal fine-grained semantics and large-scale samples. We thus propose a new figure skating dataset named MMFS (Multi-modality Multi-task dataset of Figure Skating), collected from videos with high definition (720P) in the World Figure Skating Championships. Compared with the existing human action datasets, the advantages of MMFS can be summarized as follows: **Strong annotation.** Weak annotation is labeled by trained people. Medium annotation is indexed by trained people and official documents. Strong annotation is annotated by experts and an official document, which means MMFS is jointly annotated by both real-time expert determination and proficient annotators under the help of an official document, which can be used to guarantee the label is equipped with accuracy and professionalism. **Independently Spatial Label (SL) and Temporal Label (TL).** MMFS dataset has _spatio-temporal fine-grained semantics:_ Skates, as wearable and non-fixed tools, assist body movements to add richer pose details to actions [34], introducing more complex spatial fine-grained actions. The number of fine-grained actions will be increased by \(tv\) and \(pv\) as part of action units change for one given action (please see Fig. 1 for details). To further research action recognition at both spatial and temporal levels, we propose integrally spatial and temporal labels in MMFS. Note that the prediction of temporal labels is more difficult than spatial ones. 
Temporal semantics imposes more rigorous requirements than spatial semantics because the large duration and speed variance leads to large intra-class variance. A hierarchical label structure including temporal and spatial labels is built to compare the fine-grained spatial and temporal semantics. **High complexity of spatio-temporal fine-grained action categories.** 1) In comparison with the other datasets, the large duration and speed variance of actions allows the temporal granularity to be adequately demonstrated. For instance, a Jump could be completed within 2s, while a StepSequence could last from 12s to 68s. The longer average duration of MMFS indicates that more action units can be included in an action (see Fig. 1). 2) There are sufficient cases of \(P(cl|pv)\to 0\) and \(P(cl|tv)\to 0\) in our dataset. More action units and complex spatio-temporal features can maintain the large intra-class variance of fine-grained actions, even with the increasing number of fine-grained action categories (see Section III for details). **Multi-modality.** In addition to the RGB feature, the MMFS dataset has the full-body skeleton feature, which poses a great challenge for designing strong multi-modality models. **Multi-task.** MMFS, which includes action recognition and action quality assessment tasks, is now the largest multi-modality action quality assessment dataset. The score of skating is determined by the quality of the movement and the rules of the International Skating Union (ISU). To be specific, the score of each movement is composed of a basic value (BV) and a grade of execution (GOE). Therefore, the scoring system is relatively complex, which brings greater challenges to the scoring model. According to the characteristics and challenges of MMFS, extensive experiments are conducted, including state-of-the-art RGB-based and skeleton-based action recognition models with different input modalities (RGB, flow, and skeleton features). The experiments indicate that: 1) the duration and speed variance of the dataset is large, which makes it difficult to recognize tv-dominated actions; 2) the accuracy on semantic fine-grained actions can be more easily enhanced than that on fine-grained complex (\(P(cl|pv)\to 0\) or \(P(cl|tv)\to 0\)) actions by increasing the number of input frames. Overall, this work contributes to the fine-grained action field in two aspects: (1) To our best knowledge, MMFS is the first fine-grained action dataset with strong annotation, high fine-grained spatio-temporal complexity, multi-modality, and multi-task characteristics. (2) MMFS is challenging for the existing state-of-the-art action recognition models. MMFS, which can be utilized to develop better models for action-related tasks, provides inspiration for future exploration in this field. MMFS involves fine-grained action recognition and action quality assessment tasks. According to the characteristics of MMFS, extensive experiments are conducted, including mainstream RGB-based and skeleton-based action recognition models with different input modalities (RGB and skeleton features). Fig. 1: Examples of spatio-temporal fine-grained action categories. Spatially, Lutz and Flip can be classified by \(P(cl|pv)\to 1\). Raising a hand in 2Flip will not change the label, which indicates \(P(cl|pv)\to 0\). Temporally, \(P(cl|tv)\to 1\) denotes different turns that will change the action label.
The experiments indicate the challenges of our benchmark, which highlights the need for further research on fine-grained action analysis. ## II Related Work **Coarse-grained Action Recognition Dataset.** Coarse-grained datasets always focus on the combination of multiple content elements of videos, such as HMDB51 [16], UCF101 [36] and ActivityNet [2] (and also includes large scale datasets something-something [11], Kinetics [3], Moments [24] and AViD [28]). The discrimination of these datasets relies on elements (scenes, objects, or tools) rather than the person [22]. In order to focus on the motion of video datasets, motion-centered research began to attract more attention. KTH [32] and Weizmann [10] are early coarse-grained motion recognition datasets without background interference. To enhance the quality of the motion in the dataset, professional sports datasets are involved for high-level human motion expression, such as UCF sport [31] and Sport-1M [14], which enhances the number of categories and the variance of action. However, the coarse-grained datasets can not be used to develop fine-grained action analysis models of sports. **Fine-grained Video Dataset.** To weaken the category discriminability of scene and object [1] and to deepen understanding of videos, researchers focus more on fine-grained action recognition (AR) datasets. Many simple sports based on balls (like football [40], basketball [42]) and body (such as Tai Chi [37] and Karate [12]) without complex rules are presented to facilitate fine-grained action dataset. Then, more complex sports datasets like MIT-skating [30], diving48 [45], FSD-10 [21] and FineGym [7] are proposed to further explore the video understanding. However, these mentioned fine-grained datasets above can not be employed to promote multi-modality and multi-task models. **Multi-modality, Multi-task Dataset.** Some fine-grained datasets are presented to generate multi-task models like MultiSports [18] (Spatio-Temporal action detection). Moreover, many datasets (such as AQA [30] AQA-7 [25] and FineDiving [43]) are come up for action quality assessment, where MTL-AQA [26] proposes a multi-task model to process action quality assessment (AQA) and action recognition. MTL-AQA is a diving dataset, but it provides limited fine-grained types (all action types are combinations of a small number of actions). Besides, the pose of action is of great concern in the AQA task, which can distinguish the key to action changes. Yet the skeleton modality only applies to action recognition like NTU [20]. In comparison, the size of MMFS is larger than MTL-AQA and an extra modality can be utilized for action quality assessment. Besides, the data and experiments on the temporal label are rarely mentioned in previous research work. The specific comparison of related datasets is listed in Tab. I. ## III Dataset MMFS, a multi-task and multi-modality dataset, is challenging for fine-grained action analysis. In this section, the construction of the MMFS dataset is introduced in detail, including data preparation, data annotation, and quality control. Then, we demonstrate the statistical properties and challenges of MMFS. ### _Dataset Construction_ **Data Preparation.** We collect 107 competition videos of the World Figure Skating Championships from 2017 to 2019 as original videos which are standardized to 30fps with high resolutions on Youtube (720p). Then, the videos are segmented according to 439 figure skaters of two individual items (men, ladies). 
Each segmented pre-cut video is a complete performance of one skater for checking fine-grained action annotation results and training annotators. **Data Annotation.** We annotate two semantics levels for the MMFS dataset, including 3 sets and 256 fine-grained categories(more details of 256 categories of MMFS could be found on our project page). Before annotating the original videos, all the annotators had been trained by professional annotators with figure skating knowledge combining experts' annotation information of all sampled actions in pre-cut videos. From experts' annotation to proficient annotators parsing, combining ISU technical documents is a new strong annotation structure that is an assurance for annotation of MMFS. The official document is referenced by (proficient) annotators during all annotation steps. The main steps of annotation can be summarized as follows (see Fig. 2). First, the start to the end frames of one action (as a clip) in the original videos are determined according to the provided experts' ground truth in the original videos (see Fig. 5). Then, the incomplete and redundant clips of the original videos have been removed before annotation. At last, all the clips will be annotated manually. **Quality Control.** In order to ensure the quality of the MMFS, we adopt the following control methods. 1) Before the formal annotation task, the annotators' annotation ability is evaluated to be competent in this work. 2)It is the key to ensure annotation quality by the information board in the upper left corner of videos, which can not only assist in editing videos but also provide GroundTruth for clips. 3) Professional annotators check and revise all the annotations of actions by leveraging pre-cut videos and all the clips of original videos. ### _Dataset Statistics_ MMFS contains 11671 clips captured from 107 competition videos, totaling 35.38 hours. To balance the sample distribution of MMFS, we select 63 categories out of 256 categories by filtering insufficient data. Finally, 5104 samples are selected to construct MMFS-63. The samples of the training set and the test set show the characteristics of Heavy-tailed distribution in MMFS-63 (see Fig. 3). The average duration of each category is shown as Fig. 3(b). Specifically, the total video duration of the selected samples reaches 16.35h and the average duration is 11.54s. The duration ranges of actions are from 0.83s to 84.53s with a standard deviation of 10.11s. Compared with the existing datasets [14][1], in MMFS-63, the average duration is longer and the variance of duration is larger, so more fine-grained related properties can be obtained to bring more challenges. ### _Dataset Characteristics_ **High Quality.** (1) High Video Quality. All the RGB videos in MMFS are 1080p, which benefits describe the subtle difference between clips. High video quality and non-fixed tools are two prerequisites for high-quality videos to extract skeleton features. (2) Strong annotation. Unlike the weak annotation in [13], MMFS is strongly annotated on two levels: First, joint annotations are achieved to ensure label reliability by professional annotators combining with the ISU technical document and the provided experts' real-time GroundTruth of the original videos (see Fig. 4). Second, the footage of videos always follows the skater to avoid misclassification due to irrelevant frames. **Multi-task.** Generally speaking, action datasets are used for two tasks: action recognition and segmentation. 
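As a quick, purely illustrative consistency check of these reported statistics, the total selected duration can be recovered from the sample count and the average clip length:

```python
# Consistency check of the reported MMFS-63 statistics (values taken from the text).
avg_duration_s = 11.54
num_samples = 5104
total_hours = num_samples * avg_duration_s / 3600.0
print(round(total_hours, 2))   # ~16.36 h, matching the reported 16.35 h of selected video
```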
However, Action Quality Assessment [1] (AQA) would emerge as an imperative and challengeable issue in MMFS, which can be used to evaluate the action performance of skaters based on BV and GOE scores. As shown in Fig. 5(b), BV and GOE, which depend on action categories and action performance, respectively, are included in our dataset. BV depends on action types and degree of action difficulty. Besides, a 10% bonus BV score is appended in the latter half of a program. **Multi-modality.** We extract the RGB, flow, and skeleton features from the videos in MMFS. Specifically, the skeleton features are obtained using HRNet [15](see Fig. 4(b) and more details in supplementary materials). Furthermore, the audio features, which may play important roles in AQA tasks, can also be extracted from videos. Actions matched to musical structure tend to obtain higher GOE scores in the official documentation. **Hierarchical Multi-label.** All actions are labeled manually on three levels, coined as set, sub-set, and element. And the sub-set can be divided into the spatial label (SL) and temporal label(TL) as shown in Fig. 4. ### _Dataset Challenge_ For most action recognition datasets, scenes, objects, tools, and persons are essential elements. Many fine-grained actions are generated based on the combination of the person and other elements. MMFS pays more attention to fine-grained action by non-fixed tools (skates). We analyze the fine-grained semantics Fig. 4: The hierarchical label structure of the MMFS dataset. The actions of each element are fine-grained. Fig. 3: (a) Samples distribution (b) Mean duration distribution Fig. 2: The process of strong annotation. and the fine-grained complexity in the MMFS, to propose new challenges for the existing models. Figure 3 describes the differences between semantics and complexity. The specific challenges of MMFS are as follows: **Fine-grained semantics** The challenges in Fine-grained semantics can be described as the change of labels from the subtle spatio-temporal variation of action units. (1) Temporal variation (\(P(cl|tv)\to 1\)). It is a problem to determine the number of rotations from a few frames. For example, it is hard to distinguish 2xael jump and 3Axel jump through limited frames. (2) Spatial variation (\(P(cl|pv)\to 1\)). It would be difficult to recognize an action by subtle spatial variation of action units. Fig. 4(b) shows the subtle variation between the Flip jump and the Lutz jump. The subtle variation is that the edge of the ice blade is outside on Lutz and inside on Flip. (3) Spatio-temporal variation [8] (\(P(cl|pv,tv)\to 1\)). In Fig. 4(a), the classification will be confused by the similarity features in the partial spatio-temporal variation among classes. **Fine-grained complexity** The challenges in Fine-grained complexity are more reflected in the larger inter-class variance and the large duration and speed variance of actions. The detail can be seen in Fig. 7. (1)Temporal variation (\(P(cl|tv)\to 0\)). The temporal intra-class variance can be demonstrated by the samples in Fig. 5. Although the top two actions belong to the same category, a clear difference in both the action speed and the number of rotations can be detected. Although the two bottom samples in Fig. 5 have high similarity in speed, they belong to different actions \(P(tv|cl)\to 0\). (2) Spatial variation (\(P(cl|pv)\to 0\)). The enhanced intra-class variance of action features is mainly reflected by the GOE of actions. 
The insufficient number of turns and raising hands (Fig. 1) cause GOE deduction and bonus, respectively. More GOE deduction of one action will be caused by hand support, turnover, paralleling feet, and trips during the landing process. Except for GOE, some skaters prefer clockwise rotation while some prefer the opposite. (3) Spatio-temporal variation (\(P(cl|pv,tv)\to 0\)). The challenge can be demonstrated by the comparison of StepSequence. StepSequence1 requires at least five difficult sub-actions while StepSequence2 requires at least seven difficult sub-actions in the official document. The sub-actions of the same grade StepSequence can be differently combined by a skater. ## IV Experiment ### _Experimental Preparation_ In MMFS-63, all the samples are divided into 4113 and 991 clips for training and testing. Set-level of MMFS, sub-set-level (Temporal Label 22 (TL22) and Spatial Label 25 (SL25)). And fine-grained elements-level (MMFS-63) are annotated by the different semantic labels. We use 30 fps of RGB videos and extract skeleton features with 17 joints for each frame by leveraging HRNet in MMFS-63. To better understand the performance of prominent action recognition models on this proposed dataset, we benchmark a variety of models on MMFS and group the models into two categories: RGB-based models and skeleton-based models. **RGB-based Models.** For RGB-based action recognition, models process very high dimensional input and are more sensitive to the size of training data. Several prominent action recognition models are selected as test methods. Specifically, the RGB-based experiments are conducted utilizing I3D [4], TSN [41], TSM [19], and PAN [48] methods. As for action quality assessment, C3D-LSTM [27], C3D-AVG-MTL [26], CoRe [46], and DAE-MLP [47] are utilized for the baseline methods. Fig. 5: Fine-grained semantics. (a) Misclassification is caused by subtle spatial variation. (b) Misclassification caused by partial Spatio-temporal variation. MMFS provides information-board, including BV, GOE, and Groundtruth of classification. Fig. 6: Connections and differences between the fine-grained semantics and the fine-grained complexity. The classification of the Jump set is determined by fine-grained semantics (In fact, the intra-class variance of the jump set will be affected by fine-grained complexity.) while the classification of the Spin set and Sequence set is affected by fine-grained complexity. Fig. 7: The temporal variation of action units in fine-grained complexity: Fourteen turns in the bottom sample and ten turns both in the top and the middle samples. **Skeleton-based Models.** We adopt the skeleton-based models on this dataset, including ST-GCN [44], 2S-AGCN [33], CTRGCN [5], efficientGCN B4 [35], and PoseC3D [9]. For the skeleton-based methods, the large duration variance of clips (the length range of clips is between 25 and 2536 frames) motivates us to use the average frame number of all clips (320 frames) to construct the input2. Footnote 2: The 320 frames are extracted from equal divisions of each clip. The clip with insufficient frames (less than 320 frames) should be padded by zeros instead of skeleton features. In the benchmark, we focus on fine-grained action recognition with multi-modality, spatial and temporal semantics comparison, and the performance of mainstream methods in action quality assessment. The parameterization of all models can be found in the supplemental material. 
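For concreteness, the skeleton input construction described above (one frame from each of 320 equal temporal divisions of a clip, zero-padding for shorter clips, 17 joints per frame) can be sketched as follows. This is an illustrative reconstruction, not the released preprocessing code, and the assumption of 2D joint coordinates is ours.

```python
# Illustrative sketch of the skeleton input construction: each clip's skeleton
# sequence (T x 17 joints x 2 coordinates) is resampled to 320 frames taken from
# equal temporal divisions; clips shorter than 320 frames are zero-padded.
import numpy as np

TARGET_FRAMES, NUM_JOINTS, NUM_COORDS = 320, 17, 2

def sample_or_pad(skeleton_seq):
    """skeleton_seq: array of shape (T, 17, 2) -> array of shape (320, 17, 2)."""
    t = skeleton_seq.shape[0]
    if t >= TARGET_FRAMES:
        idx = np.linspace(0, t - 1, TARGET_FRAMES).astype(int)   # equal divisions
        return skeleton_seq[idx]
    out = np.zeros((TARGET_FRAMES, NUM_JOINTS, NUM_COORDS), dtype=skeleton_seq.dtype)
    out[:t] = skeleton_seq        # pad the tail with zeros rather than repeated poses
    return out

# example: a 2536-frame clip and a 25-frame clip both map to shape (320, 17, 2)
long_clip  = np.random.rand(2536, NUM_JOINTS, NUM_COORDS)
short_clip = np.random.rand(25,   NUM_JOINTS, NUM_COORDS)
print(sample_or_pad(long_clip).shape, sample_or_pad(short_clip).shape)
```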
### _Fine-grained Action Recognition and Quality Assessment_ **Multi-modality Action Recognition.** For image-based videos, RGB modality is utilized to extract the spatial content of frames while the skeleton modality could extract the full-body motion features, which have removed most spatial appearance contents. In MMFS, the accuracies of skeleton modality in Tab. III are substantially enhanced compared with the results of RGB-based modality in Tab. II. The results of Tab. II and Tab. III illustrate that MMFS is more discriminative in motion feature variation of body pose and is not sensitive to the visual scene. **The Comparison of the Action Quality Assessment task.** For action quality assessment, we adopt the Spearman correlation coefficient (SC) as the metric of experiments. As shown in Tab. V, the mainstream method has achieved effective but not excellent accuracy on our dataset, which shows that our dataset can bring new challenges to the evaluation task. ### _The Comparison of Spatial and Temporal Semantics_ **Hierarchical Label.** Different from the coarse-grained dataset, 3 sets in MMFS are divided into 63 action categories to propose a fine-grained action dataset. As shown in Tab. II and Tab. III, the performance of all the compared models drops a lot when the fine granularity is considered on MMFS. The three sets can not achieve outstanding performance with TSN [41], while ST-GCN [44] presents better results based on the features of 320 frames. However, the performance of ST-GCN [44] is also limited to the Spin and Sequence sets. We show the confusing actions in the supplemental material. And the most confusing actions are the Spin set, where more fine-grained temporal semantics will be addressed because of the longer length of duration. **The Comparison over SL and TL.** To observe which one occupies more important influence in fine-grained recognition between spatial semantics and temporal semantics, we propose TL22 and SL25 on the sub-set level. As shown in Tab. VI, the action recognition accuracy of temporal label division (TL22) achieves worse performance than that of spatial division (SL25). It illustrates that temporal action recognition is more challenging than the same task in the spatial division. The similar recognition results on TL22 and MMFS-63 demonstrate that most of the difficulties focus on the temporal action recognition task. The experimental results above demonstrate that the existing action recognition models fail to extract temporal discriminant features on both the skeleton and RGB-based modalities. **The Key Challenge in Temporal Semantics.** As shown in Fig. 8, with the increase in the number of selected frames, CTR-GCN can achieve significant growth on our data set, while FineGym99 has only achieved a small increase. This shows that despite the fine-grained datasets are more sensitive to temporal variance, the temporal feature is difficult to be extracted on our MMFS dataset. ## V Conclusion In this paper, we propose a Multi-modality and Multi-task Dataset of Figure Skating (MMFS) to further research on fine-grained analysis. Distinguishing from the existing fine-grained action datasets, MMFS contains more fine-grained semantics including spatial semantics and temporal semantics. All 11671 clips are annotated with a hierarchically multi-label structure and fine-grained analysis can be conducted on multi-modality. We evaluate the mainstream methods based on RGB-based models and skeleton-based models. 
In our experiments, we highlight that temporal semantics is more difficult and complex than spatial semantics for the existing models, and that the skeleton modality achieves better performance on fine-grained analysis. We hope this new, unbalanced dataset can support further research on fine-grained action analysis.
2306.05376
Anomaly Detection in Satellite Videos using Diffusion Models
The definition of anomaly detection is the identification of an unexpected event. Real-time detection of extreme events such as wildfires, cyclones, or floods using satellite data has become crucial for disaster management. Although several earth-observing satellites provide information about disasters, satellites in the geostationary orbit provide data at intervals as frequent as every minute, effectively creating a video from space. There are many techniques that have been proposed to identify anomalies in surveillance videos; however, the available datasets do not have dynamic behavior, so we discuss an anomaly framework that can work on very high-frequency datasets to find very fast-moving anomalies. In this work, we present a diffusion model which does not need any motion component to capture the fast-moving anomalies and outperforms the other baseline methods.
Akash Awasthi, Son Ly, Jaer Nizam, Samira Zare, Videet Mehta, Safwan Ahmed, Keshav Shah, Ramakrishna Nemani, Saurabh Prasad, Hien Van Nguyen
2023-05-25T19:17:39Z
http://arxiv.org/abs/2306.05376v1
# Anomaly Detection in Satellite Videos using Diffusion Models ###### Abstract The definition of anomaly detection is the identification of an unexpected event. Real-time detection of extreme events such as wildfires, cyclones, or floods using satellite data has become crucial for disaster management. Although several earth-observing satellites provide information about disasters, satellites in the geostationary orbit provide data at intervals as frequent as every minute, effectively creating a video from space. There are many techniques that have been proposed to identify anomalies in surveillance videos; however, the available datasets do not have dynamic behavior, so we discuss an anomaly framework that can work on very high-frequency datasets to find very fast-moving anomalies. In this work, we present a diffusion model which does not need any motion component to capture the fast-moving anomalies and outperforms the other baseline methods. ## 1 Introduction Greenhouse gas emissions have caused an increase in extreme weather events and climate disasters, resulting in billion-dollar losses [1]. Wildfires have become a serious issue as well, with greenhouse emissions contributing to their severity. These fires destroy vast areas of forests and natural habitats, leading to the loss of biodiversity and wildlife. Additionally, wildfires contribute to soil erosion and degradation, and their smoke and ash can cause air and water pollution as well. Satellites in orbit have proven to be incredibly valuable for disaster management. They can provide valuable images and videos that can help predict and manage wildfires. However, a major issue with these satellites is the significant latency in relaying their data. The data obtained from these satellites are sometimes 2-3 hours behind a disaster event [1]. Consequently, this limits their effectiveness in the early detection of fast-developing events. Wildfires are usually unpredictable phenomena; however, there have been many efforts to utilize this satellite data to combat wildfires. Early models use handcrafted thresholds to detect fire pixels [2, 3, 4], but this is only sufficient for specific regional and seasonal conditions. With the rapid growth of deep learning algorithms, recent models have attempted to effectively utilize satellite data through Convolutional Neural Networks (CNNs). For instance, Phan _et al._ [5] used 3D-CNNs to learn the spatial and spectral patterns of streaming images. Vani _et al._ [6] employed a transfer learning technique to perform fire versus non-fire classification. The Fully Convolutional Network (FCN) [7] has also been proposed to segment smoke pixels from non-smoke pixels. Although these models yield exceptional performance in fire detection, they are only effective once a fire has grown to a sufficient size to be detectable by the satellites. As a result, these models have a major limitation, since the ultimate objective of fire detection is to hinder the propagation of wildfires. In this paper, we aim to present a novel approach to address the limitations of CNN-based methods. We leverage a class of state-of-the-art generative models, namely diffusion models [8, 9, 10, 11, 12]. Diffusion models are developed based on the properties of partial differential equations and Brownian motion to generate samples by iteratively transforming a simple noise distribution into the target distribution.
Specifically, we utilize diffusion models to learn the prior distribution and generate high-quality data of normal events. We can then identify the initiation of anomalous events such as fire or smoldering if the satellite data deviates from the prior learned distribution. The results indicate that our proposed method is able to detect small wildfires, which may rapidly grow into major fires within minutes, with high accuracy and a low false positive rate. Our approach can contribute to the development of effective and timely wildfire detection methods in order to mitigate their devastating impacts. Compared to the previous CNN-based methods, this approach does not require data from high-temporal-resolution satellite videos. It is important to note that there are other works that utilize Generative Adversarial Networks (GANs) [13] to synthesize missing or corrupted data for wildfire detection. However, this paper employs the diffusion model to generate samples for the purpose of detecting wildfires and satellite anomalies (generating-to-detecting), which has not been previously explored in the literature. Furthermore, compared to GANs, diffusion models can generate more diverse and realistic samples [14], which can improve the accuracy and robustness of our wildfire detection algorithm. Moreover, we note that previous studies rely on satellite datasets such as Landsat-8 [15], Himawari-8 [16], MODIS Collection 6 (MOD/MYD14) [17], and VIIRS 375m (VNP14IMG) [18], which are designed for fire detection and segmentation purposes. However, these existing datasets fail to meet our model's specifications, as the images within these datasets are predominantly conventional fire events rather than anomalous occurrences. If we were to train our model using these datasets, it would primarily learn to recognize standard fire patterns like the presence of flames, smoke, or thermal signatures. Regrettably, it would lack the ability to effectively predict atypical fire events such as smoldering, which is the objective we are attempting to tackle. Therefore, we have constructed our own dataset specifically for fire anomaly detection based on the data from the GOES-16 and 17 satellites operated by the NOAA. These satellites use technology that allows them to collect reflected and emitted radiation from the Earth's surface. This allows us to capture less visible events such as smoldering and other anomalies, which are the target of our dataset. In this paper, we first acknowledge the related works in section 2. Then, we provide a detailed description of our approach and diffusion models in section 3. Section 4 presents experimental results on our real-world satellite data to demonstrate the effectiveness of our method in detecting small wildfires with a high accuracy and a low false positive rate. ## 2 Related Works ### Fire Detection/Tracking on Satellite Data Satellite images and videos have been extensively used to fight wildfires, with various studies focusing on active fire detection and tracking. To address the limitations of the early thresholding models mentioned in the introduction [2, 3, 4], dynamic thresholding techniques have been utilized to adapt to local contextual conditions and minimize false alarms for smaller and cooler fires [19, 20, 21, 22]. Contextual algorithms remain the most common approach for active fire detection due to their computational efficiency [23]. Incorporating time constraints upon the dynamic thresholds can also reduce false alarms [24, 25]. 
Recent advancements in deep learning have enabled greater levels of exploration beyond manually designed operators. For example, a state-of-the-art study [26] proposed a CNN-based network consisting of different convolution kernel sizes to detect fires of varying sizes and shapes. Other researchers have tried using handcrafted features to improve fire tracking [27, 16] or developing algorithms that analyze the brightness of infrared images and the offset of the sunrise to the thermal sunrise time of a non-fire condition [28]. While fire detection and tracking are important, this paper focuses on the fire anomaly detection approach, which can detect fire events even when the fire is not visible in the images, such as smoldering. This approach uses diffusion models to learn the prior distribution of non-fire events and to generate high-quality data for them. Thus, when asked to reconstruct an image of an anomalous event, such as fire or smoldering, the diffusion model produces a result that deviates from the observation, which exposes the anomaly. At the time of publishing, this is the first paper (to the best of our knowledge) that fights wildfire by taking advantage of the anomaly detection task using video diffusion models. ### Video Diffusion Models Diffusion models are high-performing, likelihood-based models commonly used in synthetic image generation. Diffusion models designed for image generation are typically built on a U-Net architecture and perform better than GANs, specifically at image resolutions greater than \(64\times 64\). The GAN's ability to capture diversity is relatively poor compared to the diffusion model [14]. However, these benefits come at the expense of longer computation times due to the denoising steps and lower fidelity metrics. Additionally, mode collapse and non-convergence further reduce the image generation quality of a GAN [29]. Studies have focused primarily on using diffusion models for image and audio generation; however, developing synthetic video generation algorithms is a rising interest that has sprouted from evaluating diffusion models on different data modalities [12]. Ho et al. [12] proposed the creation of a video diffusion model which used the reconstruction-guidance sampling method to approximate the conditional distributions. The Residual Video Diffusion (RVD) model was proposed by Yang et al. [30] for video prediction. The RVD model uses a residual-based approach to model the difference between predicted and true video frames and is effective at modeling the conditional distribution of future video frames given past frames as input. Video Implicit Diffusion Models (VIDM) employ separate content and motion generation streams for artificial frame generation [31]. The content stream utilizes a modified U-Net architecture to model the distribution of video frames, while the motion stream models the changes in motion over a sequence of random frames. Further research is necessary to address the issue of discontinuous motion in VIDM. VideoGPT [32] is a novel transformer-based model used for compression and reconstruction, distinct from traditional diffusion models. This model leverages a fusion of a vector quantized variational autoencoder (VQ-VAE) and a GPT architecture to efficiently compress data into a dense, discretized latent space, which is then utilized for image reconstruction. This approach enhances the computational efficiency of the system while maintaining similar evaluation metrics to the GAN.
## 3 Methodology ### Diffusion Models Diffusion models are a class of generative models that aim to model the probability distribution of a dataset. They operate by employing an iterative process called diffusion, which allows them to capture the underlying data distribution. By learning the conditional probabilities of the data based on its previous states and the applied noise at each diffusion step, diffusion models gain the ability to generate new samples that resemble the training data. The denoising diffusion probabilistic models, which are the type of diffusion models used in this paper, can be decomposed into two distinct processes: the forward process, where the training data is progressively corrupted by Gaussian noise, and the reverse process, which generates new samples from the original distribution by following the reverse steps. The forward process can be modeled through the following equation [33]: \[q(x_{t}|x_{t-1})=\mathcal{N}\left(x_{t};\sqrt{1-\beta_{t}}\,x_{t-1},\beta_{t}I\right),\quad\forall t\in\{1,\ldots,T\}\] This formula represents the conditional probability distribution of \(x_{t}\) given \(x_{t-1}\) in the forward process. It specifies that \(x_{t}\) is sampled from a normal distribution with mean \(\sqrt{1-\beta_{t}}\,x_{t-1}\) and covariance \(\beta_{t}I\), where \(\{\beta_{1},\ldots,\beta_{T}\}\) are the hyper-parameters representing the variance schedule across diffusion steps, and \(I\) is the identity matrix with the same dimensions as the input image \(x_{0}\). In the reverse process, the model tries to minimize a variational lower bound of the negative log-likelihood. The objective, denoted as \(L_{\text{vlb}}\), is given by the following formulation [33]: \[\begin{split} L_{\text{vlb}}=&-\log p_{\theta}(x_{0}|x_{1})+\text{KL}(q(x_{T}|x_{0})\|\pi(x_{T}))\\ &+\sum_{t>1}\text{KL}(q(x_{t-1}|x_{t},x_{0})\|p_{\theta}(x_{t-1}|x_{t}))\end{split} \tag{1}\] In this formulation, KL denotes the Kullback-Leibler divergence between two probability distributions. The first term represents the negative log-likelihood of predicting \(x_{0}\) given \(x_{1}\). The second term measures the KL divergence between the distribution of \(x_{T}\) given \(x_{0}\) and a standard Gaussian distribution \(\pi(x_{T})\). The third term sums over the KL divergences between the true posterior distribution \(q(x_{t-1}|x_{t},x_{0})\) and the predicted distribution \(p_{\theta}(x_{t-1}|x_{t})\) at each time step \(t\) of the reverse process. Using this equation, the diffusion model attempts to approximate the reverse process and generate new samples from the original distribution \(q(x_{0})\) by starting from a sample \(x_{T}\sim N(0,I)\) and following the reverse steps. Diffusion models are particularly effective in capturing complex dependencies and generating realistic samples. They can be applied to various domains, including images, videos, text, and other structured or unstructured data types. The generated samples can be used for tasks such as data synthesis, data augmentation, or generating new data points for downstream applications [33]. ### Conditioned Generation Conditioned generation is a type of generative modeling in which a model is trained to generate output samples that satisfy certain conditions or constraints. The goal is to train a model that can generate high-quality output samples that not only look realistic but also satisfy the specified conditions or constraints.
In the scope of computer vision and image synthesis, we will explore two types of conditioned generation models: Classifier Guided Diffusion and Classifier-Free Guidance. **Guided Diffusion.** To improve the quality of generated images, a classifier is utilized to provide information to the diffusion model about the desired target distribution. This classifier, as described in [14], takes the form of \(f_{\phi}(y|\mathbf{x}_{t},t)\), where \(\mathbf{x}_{t}\) represents the noisy image. By using gradients in the form of \(\nabla_{\mathbf{x}_{t}}\log f_{\phi}(y|\mathbf{x}_{t})\), the diffusion sampling process is guided towards the target image by modifying the noise prediction of the original model. To achieve this, we recall the following equation [34]: \[\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})=-\frac{1}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t) \tag{2}\] Using this, we can write the score function for the joint distribution \(q(\mathbf{x}_{t},y)\) as follows: \[\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t},y)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}\log q(y|\mathbf{x}_{t}) \tag{3}\] From this, we can derive the new classifier-guided predictor in the form: \[\bar{\mathbf{\epsilon}}_{\theta}(\mathbf{x}_{t},t)=\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)-\sqrt{1-\bar{\alpha}_{t}}\,\nabla_{\mathbf{x}_{t}}\log f_{\phi}(y|\mathbf{x}_{t}) \tag{4}\] where \(\bar{\mathbf{\epsilon}}_{\theta}\) is the modified noise prediction. **Free Guidance Diffusion.** Contrasting with guided diffusion, free guidance diffusion does not rely on an external classifier. Instead, the contents of the image itself guide the diffusion process, which is aided by a diffusion term in the generative model. The training process involves using both a conditional model \(p_{\theta}(\mathbf{x}|y)\) and an unconditional denoising diffusion model \(p_{\theta}(\mathbf{x})\) [35]. An implicit classifier is used for training, where conditioning information is periodically discarded at random to allow the model to generate images unconditionally. The gradient for the implicit classifier can be derived from the conditional and unconditional score estimators using the following equation: \[\nabla_{\mathbf{x}_{t}}\log p(y|\mathbf{x}_{t})=\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|y)-\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}) \tag{5}\] ### Conditional diffusion for video Given \(\mathbf{x}_{0}\sim q(\mathbf{x})\), the forward process corrupts \(\mathbf{x}_{0}\) with a small amount of Gaussian noise at each time step \(t\in[0,T]\), satisfying the Markovian transition: \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) =\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\,\mathbf{x}_{t-1},\beta_{t}\mathbf{I}) \tag{6}\] \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0}) =\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \tag{7}\] \(\mathbf{x}_{0}\) is gradually degraded as the step becomes larger, and eventually \(\mathbf{x}_{T}\) is equivalent to an isotropic Gaussian distribution. A 'nice property' of this process is that \(\mathbf{x}_{t}\) can be sampled from \(\mathbf{x}_{0}\) at any arbitrary \(t\) using the re-parameterization trick [36]: \[q(\mathbf{x}_{t}|\mathbf{x}_{0}) =\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}) \tag{8}\] \[\mathbf{x}_{t} =\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{\epsilon} \tag{9}\] where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}(1-\beta_{i})\), and \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\).
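To make the 'nice property' of Eqs. (8)-(9) concrete, the following is a minimal PyTorch-style sketch of sampling \(x_t\) directly from \(x_0\); the linear variance schedule, the number of steps, and the tensor shapes are illustrative assumptions, not the exact settings used in the paper.

```python
import torch

T = 100                                           # number of diffusion steps (assumption)
betas = torch.linspace(1e-4, 0.02, T)             # linear variance schedule (assumption)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)    # \bar{alpha}_t = prod_i (1 - beta_i)

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) via the re-parameterization trick (Eq. 9)."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))   # broadcast over (B, C, H, W)
    return torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise, noise

# Example: a batch of 4 frames of size 3x128x128 at random timesteps.
x0 = torch.rand(4, 3, 128, 128)
t = torch.randint(0, T, (4,))
xt, eps = q_sample(x0, t)
```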
To reverse the above process and generate new samples, we need to approximate \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) by learning a model \(p_{\theta}\); the reverse conditionals become tractable when conditioned on \(\mathbf{x}_{0}\): \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0}) =\mathcal{N}(\mathbf{x}_{t-1};\tilde{\mathbf{\mu}}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I}) \tag{10}\] \[\text{where}\quad\tilde{\mathbf{\mu}}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}) =\frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{x}_{0}+\frac{\sqrt{\alpha_{t}}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{x}_{t} \tag{11}\] \[\text{and}\quad\tilde{\beta}_{t} =\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t} \tag{12}\] with \(\alpha_{t}=1-\beta_{t}\). Thanks to the 'nice property', we can estimate \(\hat{\mathbf{x}}_{0}=(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{\epsilon}_{t})/\sqrt{\bar{\alpha}_{t}}\). Since \(\mathbf{x}_{t}\) is available from the forward process, we can re-parameterize the Gaussian noise term and instead train \(p_{\theta}\) to predict \(\mathbf{\epsilon}_{t}\). Thus, the loss term becomes: \[L(\theta)=\mathbb{E}_{t,\mathbf{x}_{0},\mathbf{\epsilon}}\left[\left\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{\epsilon},\,t)\right\|_{2}^{2}\right] \tag{13}\] ### Anomaly Prediction Via Conditional Diffusion Anomaly detection is the identification of abnormal, unexpected events. Video prediction can be used to identify anomalies in the data by predicting the future frame and comparing it with the ground truth. Our diffusion-based method uses the video prediction task to identify anomalies. It does not need any external motion component to capture high-quality motion, and it works on high-frequency and dynamic datasets (satellite data). We can identify anomalies caused by motion as well as by color. ### Video Prediction using Conditional Diffusion We model the conditional distribution of the video frames by incorporating the past frames; we also perform an ablation experiment conditioning on both past and future frames. Suppose there are \(p\) past frames and \(k\) current frames. We condition the diffusion model on the past frames to predict the future frames. \[L(\theta)_{vidpred}=\mathbb{E}_{t,[\mathbf{p},\mathbf{x}_{0}],\mathbf{\epsilon}}\left[\left\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{\epsilon}\,|\,\mathbf{p},t)\right\|_{2}^{2}\right] \tag{14}\] After training the diffusion model, in the first window of \(p+k\) frames we predict the \(k\) frames conditioned on the past \(p\) frames, then shift the window by \(p+k\) frames and repeat the same process for the whole video. For modeling \(\epsilon_{\theta}\) in Eq. (14), we use variants of the networks shown in Fig. 2. ### Architecture **U-Net.** Due to its effectiveness and efficiency, U-Net is a popular convolutional neural network architecture for image segmentation tasks. U-Net partitions an input image into multiple classes of pixels in a process known as semantic segmentation. As explained by Ronneberger et al. [37], the network consists of two parts: a contracting path and an expansive path.
The contracting path applies repeated convolutions and max pooling operations to downsample the image and capture contextual information, while the expansive path uses upsampling and skip connections to reconstruct the original image resolution while preserving the contextual information. The U-Net model uses skip connections to combine fine-grained details and structures in an image (local information) with the overall context and spatial relationships in an image (global information). This is what allows a U-Net model to effectively segment images with fine details and irregular shapes. **Residual Block.** Primarily used in deep learning networks such as convolutional neural networks, residual blocks provide a solution to the degradation problem, where the accuracy of the network starts to degrade as the depth of the network increases. Residual blocks let the network model the differences between the input and output of each block, making it easier to optimize deeper layers. Residual Blocks (RBs) use shortcut connections that add the input of a block directly to its output, allowing information to flow through the network more easily and improving the gradient flow during back-propagation. Residual blocks have been widely adopted in many state-of-the-art deep learning architectures and have achieved outstanding performance on various computer vision tasks, such as image recognition, object detection, and semantic segmentation [38]. We use the architecture from the model proposed for video prediction [11]. A U-Net backbone is used as the denoising network with some changes. This architecture uses multi-head self-attention, 2D convolutions, and adaptive group normalization [39]. Position encoding is used for the noise level and is processed using transformer-style encodings. We use past frames as conditioning frames and concatenate them along the channel dimension. The current noisy frames are created in the forward diffusion process, and the frames with noise at timestep \(t\) are used as input to the denoising network. The concatenated conditioning frames are passed through the network and drive the conditional normalization, known as Spatially-Adaptive (DE)normalization (SPADE) [40]. This SPADE module accounts for time and motion. The satellite dataset we use is very dynamic in nature, since the clouds move very fast within 5 minutes. Therefore, this built-in module helps to capture fast motion. This is preferable to using FlowNet, which would increase the computational complexity. \[\mathbf{e}(t)=\left[\ldots,\cos\left(tc^{\frac{-2d}{D}}\right),\sin\left(tc^{\frac{-2d}{D}}\right),\ldots\right]^{\mathrm{T}}, \tag{15}\] where \(d=1,...,D/2\), \(D\) is the number of dimensions of the embedding, and \(c=10000\). Each embedding vector is passed through a fully connected layer with an activation function and then another fully connected layer. ## 4 Experiments We use the diffusion model architecture for the video prediction task. We use 2 past frames as conditioning frames and predict 5 frames at a time before shifting to the next window. For sampling, we use DDPM sampling [9] with 100 sampling steps, and the model is trained with the same 100 steps. ### Datasets We used data from geostationary satellites that are synchronized with Earth's spin to hover over the same point on Earth, making them ideal for monitoring environmental dynamics.
The GOES-16 and 17 satellites covering the US carry the Advanced Baseline Imager, which collects reflected and emitted radiation from the Earth in 16 wavelength bands. Three data products are available from NOAA: the entire Northern Hemisphere every 15 minutes, the Continental US every 5 minutes, and the mesoscale user-directed region (1000 km x 1000 km) every minute. Figure 1: Strategy to predict the future frame with the sliding window approach. In this proof of concept study, we used the 5-minute data. For the current experiment, we have used the data from the Northern California region and divided the videos into short clips of 14 frames each. Our model is trained on videos containing only normal frames and tested on a mixture of normal and abnormal frames containing fires. The main challenge here is to extract the video clips containing only normal frames. This can be done manually by identifying the clips from the pool of videos, but we instead use the pre-trained YOLOv5 [41] to separate abnormal and normal frames. YOLOv5 is trained on publicly available fire images, and we use this pre-trained model to identify the short clips containing fire. This helps us to identify the clips which do not contain any fire, smoke, or fog, and these can be used as the normal dataset for training the anomaly detection framework. We have extracted around 500 normal videos and 20 abnormal videos containing fire, smoke, and fog. Anomalous images are shown below in Fig. 3. The real video frames are very high resolution (4K), but we resize each frame to 128x128. ### Anomaly Detection on Testing Data We have trained our model on normal videos with no anomalous events, so we assume that the model can predict normal images well. MSE is used to calculate the difference between the predicted image and the true image, but [42] shows that PSNR (Peak Signal to Noise Ratio) is an efficient way to assess image quality. \[PSNR(I,\widehat{I})=10\:\log_{10}\frac{[\max_{\widehat{I}}]^{2}}{\frac{1}{N}\sum_{i=0}^{N}(I_{i}-\widehat{I}_{i})^{2}} \tag{16}\] Figure 2: A U-Net model is provided with noisy current frames, where the residual blocks incorporate information from past and future frames. The U-Net model predicts the noise present in the current frames, which is used to denoise the current frame. For anomalous frames, the PSNR value is low, while for normal frames the PSNR is high. We also normalize the PSNR of each testing video between 0 and 1, following this work [43]. This normalized PSNR is called the regular score. \[S(t)=\frac{PSNR(I_{t},\widehat{I}_{t})-\min_{t}\ PSNR(I_{t},\widehat{I}_{t})}{\max_{t}\ PSNR(I_{t},\widehat{I}_{t})-\min_{t}\ PSNR(I_{t},\widehat{I}_{t})} \tag{17}\] Based on this regular score, we can predict whether a frame is normal or abnormal by setting a threshold to distinguish between the two. ## 5 Results and Discussions ### Evaluation Metrics In previous literature on anomaly detection, ROC curves have been established as the primary metric for algorithm performance. They are typically obtained by gradually adjusting the threshold on the regular scores used to identify anomalies [44]. To assess the performance of the anomaly detection algorithm, the area under the ROC curve (AUC) is calculated. A higher AUC indicates better performance in distinguishing between anomalous and regular events.
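As a hedged NumPy sketch of the scoring pipeline in Eqs. (16)-(17): compute the PSNR between each true and predicted frame, min-max normalize it over the test video to obtain the regular score, and flag low-score frames as anomalous. The threshold value and the array shapes are illustrative assumptions only.

```python
import numpy as np

def psnr(true_frame, pred_frame):
    """Eq. (16): peak signal-to-noise ratio between a ground-truth and a predicted frame."""
    mse = np.mean((true_frame - pred_frame) ** 2)
    max_val = pred_frame.max()                     # peak value of the predicted frame
    return 10.0 * np.log10((max_val ** 2) / (mse + 1e-12))

def regular_scores(true_frames, pred_frames):
    """Eq. (17): min-max normalize the per-frame PSNR of one test video to [0, 1]."""
    p = np.array([psnr(a, b) for a, b in zip(true_frames, pred_frames)])
    return (p - p.min()) / (p.max() - p.min() + 1e-12)

# Frames with a low regular score are flagged as anomalous (the 0.5 threshold is an assumption).
scores = regular_scores(true_frames=np.random.rand(14, 128, 128, 3),
                        pred_frames=np.random.rand(14, 128, 128, 3))
is_anomalous = scores < 0.5
```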
As shown in Table 1, we compare our method with the other deep learning baselines, and our method outperforms them in terms of AUC score. \begin{table} \begin{tabular}{l c} \hline \hline Method & AUC (\%) \\ \hline Future Frame Prediction [45] & 73.2 \\ ConvLSTM-AE [46] & 71.5 \\ MLEP & 78 \\ Conv-AE & 68 \\ Diffusion & 80.3 \\ \hline \hline \end{tabular} \end{table} Table 1: AUC Score Comparison Figure 3: Abnormal Frames Figure 4: PSNR curves for Normal and Abnormal Videos ### PSNR Plot for the Normal and Abnormal Videos We calculate the PSNR as described above for both the normal videos, which do not contain any fire, smoke, or fog, and the abnormal videos. As shown in Fig. 4 and 7, the PSNR for the normal videos is consistently high, since no anomalous event occurs anywhere in the video. However, in the case of the _abnormal video1_ curve, the initial value of the PSNR is low because the fire and smoke are intense, and it continues to decrease. In Frame 6, the fire dies out, leaving only a little smoke, so the PSNR of the subsequent frames increases. Figure 5 displays two rows of frames, where the first row represents actual abnormal frames and the second row shows predicted abnormal frames. The first two frames in the first row correspond to conditioning frames from the initial video window, which comprises two conditioning frames and five predicted frames. The predicted frames exhibit lower PSNR, as depicted in Figure 4, and suffer from blurriness, attributable to the absence of fire images in the training data. Therefore, the image quality deteriorates significantly in frames containing fire, smoke, or fog. Figure 5: (Abnormal Video) The first row represents the real frames and the second row represents the predicted frames Figure 6: (Normal Video) The first row represents the real frames and the second row represents the predicted frames ### PSNR for Multiple Anomalous Videos Above is the PSNR plot for multiple abnormal and normal videos. The first four curves, for the normal videos, report high PSNR, while the other four curves, for the abnormal videos, report low PSNR values since they include abnormalities in their frames. The PSNR varies here because the intensity of the fire and smoke changes from frame to frame. The corresponding video frames for each abnormal video are attached in the supplementary section. We performed further experiments conditioning on both the past and future frames; the results are provided in Table 2, and additional predicted images are attached in the supplementary section. Conditioning on both past and future frames did not yield satisfactory results in identifying anomalous frames. ## 6 Conclusion Wildfire analysis is an effective method for combating extreme climatic events. While fire detection and tracking are undoubtedly important, these tasks are only helpful once the fire has already developed to a visible degree in the captured satellite images. This paper presents a method to detect active fires as an anomaly detection task using diffusion models. The diffusion model generates good images of non-fire events by learning the prior distribution of this type of data; thus, when asked to predict frames containing a fire event, it produces reconstructions that deviate from the observations, and the resulting detector achieves the highest AUC score compared to all baseline models. These results show that the proposed method can distinguish between non-fire and fire events, and the empirical results support our findings.
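To make the sliding-window prediction scheme of Section 3.6 and Figure 1 concrete, here is a minimal sketch of the inference loop: 2 past frames condition the generation of 5 future frames, and the window then shifts by the whole block. The function `sample_future` is a hypothetical stand-in for the model's DDPM sampling routine, not an actual function from the paper's code base.

```python
def predict_video(sample_future, video, p=2, k=5):
    """Slide a window over a clip: condition on the p real past frames, predict the next k frames.

    `sample_future(past, num_frames)` is assumed to return an array of shape
    (num_frames, C, H, W); `video` is assumed to be shaped (num_frames, C, H, W).
    """
    pairs = []
    t = 0
    while t + p + k <= video.shape[0]:
        past = video[t:t + p]                          # conditioning frames from the real video
        pred = sample_future(past, num_frames=k)       # sample k future frames
        pairs.append((video[t + p:t + p + k], pred))   # (ground truth, prediction) for PSNR scoring
        t += p + k                                     # shift the window by p + k frames
    return pairs
```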
2304.10664
A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses from HoloLens Trajectories and Structure from Motion
Neural Radiance Fields (NeRFs) are trained using a set of camera poses and associated images as input to estimate density and color values for each position. The position-dependent density learning is of particular interest for photogrammetry, enabling 3D reconstruction by querying and filtering the NeRF coordinate system based on the object density. While traditional methods like Structure from Motion are commonly used for camera pose calculation in pre-processing for NeRFs, the HoloLens offers an interesting interface for extracting the required input data directly. We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using NeRFs. Thereby, different investigations are considered: Internal camera poses from the HoloLens trajectory via a server application, and external camera poses from Structure from Motion, both with an enhanced variant applied through pose refinement. Results show that the internal camera poses lead to NeRF convergence with a PSNR of 25\,dB with a simple rotation around the x-axis and enable a 3D reconstruction. Pose refinement enables comparable quality compared to external camera poses, resulting in improved training process with a PSNR of 27\,dB and a better 3D reconstruction. Overall, NeRF reconstructions outperform the conventional photogrammetric dense reconstruction using Multi-View Stereo in terms of completeness and level of detail.
Miriam Jäger, Patrick Hübner, Dennis Haitz, Boris Jutzi
2023-04-20T22:17:28Z
http://arxiv.org/abs/2304.10664v1
A Comparative Neural Radiance Field (NERF) 3D Analysis of Camera Poses from HoloLens Trajectories and Structure from Motion ###### Abstract Neural Radiance Fields (NeRFs) are trained using a set of camera poses and associated images as input to estimate density and color values for each position. The position-dependent density learning is of particular interest for photogrammetry, enabling 3D reconstruction by querying and filtering the NeRF coordinate system based on the object density. While traditional methods like Structure from Motion are commonly used for camera pose calculation in pre-processing for NeRFs, the HoloLens offers an interesting interface for extracting the required input data directly. We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using NeRFs. Thereby, different investigations are considered: Internal camera poses from the HoloLens trajectory via a server application, and external camera poses from Structure from Motion, both with an enhanced variant applied through pose refinement. Results show that the internal camera poses lead to NeRF convergence with a PSNR of 25 dB with a simple rotation around the x-axis and enable a 3D reconstruction. Pose refinement enables comparable quality compared to external camera poses, resulting in improved training process with a PSNR of 27 dB and a better 3D reconstruction. Overall, NeRF reconstructions outperform the conventional photogrammetric dense reconstruction using Multi-View Stereo in terms of completeness and level of detail. Neural Radiance Fields, Microsoft HoloLens, Structure from Motion, Trajectory, 3D Reconstruction, Point Cloud ## 1 Introduction With the pioneering research on Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020), that enable the rendering of new views with the so-called view synthesis based on image data and associated camera poses in space, a new epoch in computer graphics started. Novel invertions like Instant NGP (Muller et al., 2022) advanced NeRFs once again, as they reduce training and rendering time to minutes or even seconds. However, these methods also arouse interest beyond the field of computer graphics for research and development in photogrammetry. NeRFs take a sparse set of camera poses with associated images as input and train a Neural Network which estimates a density value \(\delta\) and color values c = (R,G,B) for each position X = (x,y,z). The position-dependent learning of density, and color values, is of particular interest for photogrammetry with regard to mobile 3D mapping. We consider density as a kind of pseudo-probability for the occurrence of an object in 3D space. Thus, the positions of the trained NeRF can be accessed and filtered by their density on objects, which allows the extraction of 3D point clouds of the scene. Most commonly, traditional methods like Structure from Motion (SfM) are used to calculate the camera poses in pre-processing needed for training the NeRFs. From this perspective, the HoloLens provides an interesting interface, that enables the extraction of the required input data, the camera poses and associated sensor RGB images. This allows to create a 3D reconstruction, including color information, nearly directly from the sensor data. In this study, we investigate whether the trajectory from the HoloLens is sufficient to achieve convergence of the NeRF and the potential to achieve a 3D reconstruction of the scene based on the density values of the NeRF trained with HoloLens data. 
Four different types of camera poses using the same HoloLens images are compared regarding their training process and the resulting 3D reconstructions. On the one hand, the internal HoloLens camera poses, as well as a variant including pose refinement during training, are considered. On the other hand, externally generated camera poses via SfM, as well as a variant including pose refinement, are investigated. In order to compare the point clouds based on NeRFs with traditional methods, a dense Multi-View Stereo (MVS) point cloud from the camera poses is reconstructed. Firstly, it was demonstrated that the internal HoloLens camera poses and images are suitable for the convergence of the NeRF, as shown by the quantitative results of the training process in Figure 4. After a simple rotation around the x-axis, convergence occurs from approximately 20,000 training epochs with a Peak-Signal-to-Noise-Ratio (PSNR) of 25 dB. Secondly, the trained NeRF is suitable for a 3D reconstruction of the scene, as shown by the qualitative results in Figure 5. After additionally training the extrinsics, i.e., pose refinement, the HoloLens camera poses lead to comparable PSNR values of 27 dB and a comparable 3D reconstruction with respect to the separate pose determination via SfM in pre-processing. Furthermore, the reconstruction from the NeRF exhibited advantages over a conventional dense 3D reconstruction through MVS. The reconstructions from the NeRF using SfM and internal HoloLens camera poses lead to a higher point density, fewer artefacts, and a better mapping of untextured surfaces in terms of completeness than MVS. ## 2 Related Work In this section, we briefly summarize related work to our study. Thereby, we give an overview of basic and recent research and development on NeRFs. The foundation for Neural Radiance Fields (NeRFs) was established by Scene Representation Networks (SRN) (Sitzmann et al., 2019). Their underlying principle is modeling the scene as a function of 3D coordinates within it. This was followed by the groundbreaking research work on Neural Radiance Fields (Mildenhall et al., 2020). These enable estimation of color values and densities for each 3D coordinate through 6D camera poses and associated 2D images by learning a deep neural network with multi-layer perceptrons (MLPs). The initial NeRF was followed by thousands of publications driving research and development in various domains. To address scalability, scaling to large scenes is achieved by Mega-NeRF (Turki et al., 2022) using data partitioning based on visibility analysis, or by Block-NeRF (Tancik et al., 2022) with distance-dependent partitioning based on street segments. Other approaches, such as Bundle Adjusting Radiance Fields (BaRF) (Lin et al., 2021) and Gaussian Activated Radiance Fields (GaRF) (Chng et al., 2022), address camera pose estimation. In addition to neural methods, non-neural approaches like Plenoxels (Fridovich-Keil et al., 2022) have been proposed. Dynamic contributions, such as (Pumarola et al., 2021), utilize time as an additional input dimension for time-dependent rendering of novel images, while (Gao et al., 2021) employ time components for preventing the occurrence of artifacts caused by dynamic pixels. Furthermore, 3D reconstruction from NeRFs is considered (Rosinol et al., 2022; Oechsle et al., 2021). Methods such as AdaNeRF (Kurz et al., 2022), FastNeRF (Garbin et al., 2021) and Instant NGP (Muller et al., 2022) aim to improve rendering or training efficiency.
Thereby Instant NGP, which we utilize in our study, uses a combination of small MLPs and spatial hash table encoding for real-time training and rendering. ## 3 Methodology In Section 3.1 the principles of the methods used to generate the input data according to Figure 1 for the NeRFs are presented. The essential transformations of the internal HoloLens camera poses in order to use them for the NeRFs are explained. After that, the standard method used to determine the camera poses in pre-processing is introduced. Subsequently, in Section 3.2 the methodology for the extraction of a 3D point cloud from NeRFs as well as the used conventional photogrammetric reconstruction are described. ### Camera Poses of the Trajectories Transformation of the HoloLens camera posesAs the first and central step, the HoloLens camera poses have to be transformed into a compatible input format for the NeRF. Such a format is given by the representation as a so-called view matrix \(\text{T}_{\text{view}}\), which is a 4\(\times\)4 transformation matrix in the form of homogeneous coordinates. It consists of translation, rotation and scaling. The input to the used NeRF follows the OpenGL1 convention, with the camera placed in a right-handed coordinate system, the positive z-axis pointing away from the camera, and the positive x-axis pointing to the right when looking through the camera lens. The y-axes must face the global z-axis, since they correspond to the so-called up-vector. As the z-axis of the HoloLens camera poses \(\text{T}_{\text{HoloLens}}\) is orientated in the direction of the z-axis in the global coordinate system, see Figure 2(a), a transformation by 90 degrees around the global x-axis is required as shown in Figure 2(b) by: Footnote 1: [https://learnopengl.com/Getting-started/OpenGL](https://learnopengl.com/Getting-started/OpenGL) (last access 01/03/2023) \[\text{T}_{\text{view}}=\text{T}_{\text{x},\alpha}\text{T}_{\text{HoloLens}}, \tag{1}\] with \[\text{T}_{\text{x},\alpha}=90^{\circ} =\begin{bmatrix}1&0&0&0\\ 0&\text{cos}(\alpha)&\text{-sin}(\alpha)&0\\ 0&\text{sin}(\alpha)&\text{cos}(\alpha)&0\\ 0&0&0&1\end{bmatrix} \tag{2}\] \[=\begin{bmatrix}1&0&0&0\\ 0&0&\text{-1}&0\\ 0&1&0&0\\ 0&0&0&1\end{bmatrix}.\] Two additional transformations are performed for translation and scaling, based on those in the Instant NGP implementation2. Firstly, for translation, the transformation matrices are transformed to a common focal point (center of attention), as shown in Figure 2(c). For each camera pair, the intersection point between the optical axes is computed, resulting in the focal point. Afterwards the point is subtracted from the current position of the camera, aligning the cameras towards the focal point. This allows the camera poses to be used for the visualization of the object in focus. Figure 1: Flowchart of the applied investigations. Input data are two different types of camera poses: internal poses of the HoloLens and externally calculated poses via SfM. Subsequently, in each case a pose refinement variant is performed during the training. The four resulting point clouds are extracted from the NeRF using a global density threshold. Subsequently, scaling is performed on the camera poses to the size of the NeRF coordinate system, see Figure 2(d). First, the average distance of all cameras from the origin is calculated by computing the Euclidean norm of the displacement vectors of the camera transformations and summing them. 
This value is divided by the number of camera transformations to obtain the average distance. The scaling factor is then computed by dividing the camera distances by the average distance and multiplying it with a factor, which is set to 4 by Instant NGP. Final scaling is performed by multiplying each transformation matrix with the final scaling factor. **Structure from Motion.** Structure from Motion (SfM) generally describes the procedure of reconstructing a 3D scene from a set of images taken from different directions and positions. It relies on the calculation and matching of point correspondences within an image sequence from overlapping images by using methods such as SIFT (Lowe, 2004). In this study, the (incremental) Structure from Motion technique by (Schonberger and Frahm, 2016) is used for the external calculation of the camera poses. ### 3D Reconstruction **NeRF.** NeRFs enable novel view synthesis of scenes. However, from the perspective of photogrammetry, instead of rendering new 2D views (Mueller et al., 2019), we are interested in the 3D geometry and corresponding color values of the scene. We consider the density as a kind of pseudo-probability for the occurrence of a surface in 3D space. Accordingly, positions with high densities are highly likely to be object points. In the first step, uniform sampling of the density field is achieved by sampling density and color values in the coordinate system of the trained NeRF at equidistant sampling points in a bounding box. In the second step, we filter the positions X = (x,y,z) with high density values using a global threshold \(\delta_{\text{t}}\). Thereby, we assume that object points maintain a higher density \(\delta>\delta_{\text{t}}\) compared to non-object points. For the 3D mapping, we investigate 3D reconstructions of our scene based on four different types of input data, as shown in Figure 1. We use the internal HoloLens camera poses and external camera poses calculated by an SfM workflow as described in Section 3.1. In addition to the internal and external camera poses and corresponding images, the pose refinement of the Instant NGP implementation is used. The implementation requires initial poses in order to refine them and is unable to compute poses completely from scratch. By using the camera pose as an additional variable in the training process, it propagates gradients back onto the camera parameters in order to minimize the loss. **Multi-View Stereo.** In order to compare the reconstructions from NeRFs with a reconstruction from a conventional method, we use a classical Multi-View Stereo (MVS) pipeline (Schonberger and Frahm, 2016) on the basis of the output of the Structure from Motion in Section 3.1. Accordingly, the same HoloLens camera poses as for the NeRFs serve as input here, which makes the reconstructions comparable in the same coordinate system. On the one hand, we use the output of SfM for a sparse reconstruction. On the other hand, we generate a dense reconstruction with MVS. MVS (Schonberger et al., 2016) takes the information from the sparse SfM model for the pixelwise computation of depth information in an image. ## 4 Dataset Our experiments are based on a dataset captured by the Microsoft HoloLens, which includes an indoor scene of a plant (Ficus) on a planar surface, see Figure 3. The HoloLens provides an interesting interface for the NeRF, as it generates the required input data, camera poses and associated sensor images.
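As a concrete illustration of the pre-processing in Section 3.1, the following hedged NumPy sketch rotates HoloLens view matrices by 90 degrees around the global x-axis (Eqs. (1)-(2)), recenters them, and rescales them to the NeRF coordinate system. The focal point is simplified here to the mean camera position rather than the pairwise optical-axis intersection used by the Instant NGP implementation, and the scale target of 4 follows the factor mentioned above; both simplifications are assumptions of this sketch.

```python
import numpy as np

# Eq. (2): rotation by 90 degrees around the global x-axis in homogeneous coordinates.
T_x90 = np.array([[1, 0,  0, 0],
                  [0, 0, -1, 0],
                  [0, 1,  0, 0],
                  [0, 0,  0, 1]], dtype=float)

def hololens_to_nerf(poses, scale_target=4.0):
    """poses: (N, 4, 4) HoloLens camera-to-world matrices -> NeRF-compatible view matrices."""
    poses = np.array([T_x90 @ T for T in poses])           # Eq. (1): rotate each pose
    center = poses[:, :3, 3].mean(axis=0)                  # simplified center of attention
    poses[:, :3, 3] -= center                               # translate towards the focal point
    avg_dist = np.linalg.norm(poses[:, :3, 3], axis=1).mean()
    poses[:, :3, 3] *= scale_target / avg_dist              # scale to the NeRF coordinate system
    return poses
```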
In general, HoloLens, developed by Microsoft and firstly released in 2018, embodies the world's first fully autonomous holographic computer and has become an important device for all kinds of applications, such as 3D mapping and modelling of indoor scenes (Weinmann et al., 2020; Weinmann et al., 2021). Figure 3: Visualization of an image of the captured Ficus plant as our measurement object using the Microsoft HoloLens RGB camera. Figure 2: Camera poses for the trajectory from SfM (trajectory top) via the transformations of Instant NGP implementation versus the internal camera poses from HoloLens (trajectory bottom): (a) shows the external versus the original internal camera poses, (b) shows the transformation of the HoloLens camera poses by rotating them 90 degrees around the global x-axis, (c) shows the translation to the center of attention and (d) shows scaling to the NeRF coordinate system. HoloLens3 generation 2 was released in 2019 and features improved camera technology compared to the first generation such as higher resolution and better color depth, resulting in sharper and more detailed images. Footnote 3: [https://www.microsoft.com/en-us/hololens/hardware](https://www.microsoft.com/en-us/hololens/hardware) (last access 02/20/2023) The HoloLens 2 server application (Dibene and Dunn, 2022) is used for requesting the data in the HoloLens. The system provides access to all the HoloLens 2 sensors, including the images from the \(1920\times 1080\) photo/video RGB camera and corresponding camera pose of the device in 3D space. In addition, device calibration data can be retrieved by the internal orientation (camera intrinsics). The HoloLens images and corresponding camera poses were captured with a hemispherical camera framing. Thereby step sizes of 32 scanning points at a height of about 120 cm, with two different viewing angles have been employed, which results in a total of 64 images. ## 5 Experiments and Results In this section, we present our experiments and results by a quantitative evaluation on analyzing the training process in Section 5.1. This is followed by a qualitative analysis in Section 5.2 of the resulting 3D reconstructions. We investigate the impact of the camera poses in general, the comparison of point clouds from NeRFs trained with different input sets based on HoloLens data in Section 4 as well as photogrammetric reconstructions. ### Training The training process of the NeRFs proceeds differently based on the chosen configurations, as shown in Figure 4. In particular, we use the Peak-Signal-to-Noise-Ratio (PSNR) in \(\mathrm{dB}\) between the input RGB images of the training data and the rendered images for the accuracy measurement while training. Comparing the training, both the internal HoloLens and the external SfM camera poses lead to a convergence of the NeRF. This occurs at approximately 20,000 training epochs, which corresponds to a duration of between 2 and 5 min training. The internal HoloLens camera poses achieve best results of about 25 dB. In contrast, more than 27 dB can be achieved with the external camera poses from SfM in pre-processing. Remarkably, pose refinement by training the extrinsics can increase the performance for the internal camera poses to a comparable level of about 27 dB. However, no further increase in PSNR is achieved for the external camera poses by pose refinement. For all configurations, the loss behaves inversely proportional. 
Based on these training results, conclusions can be drawn about the relative precisions of the different type of camera poses. ### 3D Reconstruction Finally, Figure 5 compares the 3D reconstructions from the NeRFs trained on different input data by using a global threshold \(\delta_{\text{u=15}}\), and the sparse and dense point clouds. In general, 3D reconstructions from NeRF on HoloLens data can be generated directly with the internal camera poses as well as from the external camera poses calculated via SfM. The visual quality of the reconstructions corresponds to the achieved PSNR values in Figure 4. This is particularly evident from the artifacts in the reconstruction from HoloLens internal camera poses with no pose refinement in Figure 5(a). The training course also shows a lower maximum PSNR of 25 dB compared to those of the other three training processes with PSNR values of 27 dB. In this case, artifacts are located in empty space. This effect rarely occurs during 3D reconstruction based on the internal camera poses with pose refinement, as Figure 5(b) shows. The external camera poses provide adequate qualitative reconstruction results without in Figure 5(c) and with pose refinement in Figure 5(d). Only small artifacts disappear with pose refinement. Overall, all input data provide sufficient 3D reconstructions from NeRFs with minor color differences. In particular, the surface of the pot of the plant can be reconstructed well using NeRFs. Figure 4: Comparison of the Peak-Signal-to-Noise-Ratio (PSNR) in \(\mathrm{dB}\uparrow\) and loss \(\downarrow\) during the training processes. The red curves show the PSNR, the blue curves the loss. The HoloLens images with internal HoloLens camera poses, internal HoloLens camera poses with pose refinement, external (SfM) camera poses, and external (SfM) camera poses with pose refinement are considered. Figure 5: Comparison of the 3D reconstructions from NeRFs using a global density threshold \(\delta_{\text{u}=15}\). For HoloLens images and (a) internal camera poses, (b) with pose refinement and (c) external camera poses, (d) with pose refinement. Compared to the (e) sparse and (f) dense point cloud from external camera poses with MVS. In contrast, the MVS approach does not provide a complete reconstruction of the object, which is especially noticeable on the pot. This occurs for both the sparse reconstruction in Figure 5(e) and dense reconstruction in Figure 5(f). Additionally, the dense point cloud contains gray artifacts at the fine structures of the branches. ## 6 Discussion This research investigates the application of camera poses from Microsoft HoloLens trajectories for 3D reconstruction. On the one hand, internal camera poses directly retrieved from the HoloLens trajectory via a server application have been investigated. On the other hand, external camera poses that were calculated in the conventional manner via Structure from Motion were considered. For both scenarios, an enhanced pose refinement was additionally applied by training the camera extrinsics. It could be demonstrated that, after a simple rotation around the x-axis, the internal HoloLens camera poses are sufficient for NeRF convergence in approximately 20,000 training epochs. This enables a 3D reconstruction using NeRF coordinates by sampling. Four investigations are considered as input for the corresponding images: The internal HoloLens camera poses, external camera poses from SfM, both with and without pose refinement. 
Overall, the results show varying quantitative and qualitative performance in training and 3D reconstruction based on the utilized camera poses. Considering the training process the unrefined internal HoloLens camera poses provide PSNR of about 25 dB. With pose refinement of the internal camera poses, the training process improves to about 27 dB. This is comparable with the external camera poses from SfM, which achieve higher PSNR values of around 27 dB, both unrefined and refined. We assume that improved poses lead to a superior training process in terms of the PSNR values and consequently better 3D reconstructions, which is confirmed by the qualitative results. Thereby the unrefined internal HoloLens camera poses contains more huge artifacts in the 3D reconstruction. However, by pose refinement of the internal camera poses, the artifacts are reduced, and the reconstruction is comparable to the reconstruction from external calculated camera poses. Each external camera poses without and with pose refinement, contain only a few small artifacts. We suggest that the externally calculated poses are already quite accurate and therefore do not improve further with pose refinement. Nevertheless, the results from 3D mapping using NeRF are notably superior to the classical photogrammetric method of dense Multi-View stereo (MVS) reconstruction from camera poses via SfM for our dataset. The NeRF reconstructions yield better results on untextured, homogeneous surfaces. This is especially evident for the pot of the plant, which apparently fails to be reconstructed with the conventional MVS. In addition, fine structures in MVS reconstruction contain gray artifacts, as can be seen in the branches of the plant and an inferior level of detail. Some color differences within the NeRF reconstructions are caused by the directionality of color in the NeRFs, as opposed to density. However, the color differences are minor and do not harm the overall impression of the reconstruction. ## 7 Conclusion In this paper, we presented a workflow for the extraction of high resolution 3D reconstructions almost directly from Microsoft HoloLens data under the application of Neural Radiance Fields (NeRFs). Thereby, the impact of the camera poses has been investigated using a quantitative analysis by considering the training process, as well as a qualitative analysis by regarding the final 3D reconstructions. We demonstrated that the internal HoloLens camera poses und corresponding images as input data are able to provide convergence of the NeRF during training. This enables the generation of a 3D reconstruction from positions with high density values in the NeRF coordinate system. Improvements in the training process and resulting 3D reconstruction can be achieved by pose refinement while training the NeRF. This enables a comparable quality in the training process and resulting point cloud as achieved by external camera poses calculated in pre-processing using approaches such as Structure from Motion. It demonstrates the impact of the camera poses on the quality of the 3D reconstruction. In addition, among all pose investigations, the NeRF reconstructions outperform the conventional photogrammetric method using Multi-View Stereo. In summary, the combination of internal HoloLens camera poses and associated images with NeRFs offers an immense potential for enabling highly detailed, colored, mobile 3D mappings of a scene in a straightforward workflow. 
In future work, we suggest using a 3D region growing algorithm instead of a global density threshold for artifact removal, assuming that all object points in the scene are spatially connected.
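A rough sketch of how such a region growing step could replace the global threshold on a voxelized density grid is shown below; the 6-connectivity neighborhood and the manually chosen seed voxel are assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow(density, seed, threshold):
    """Flood-fill the voxels connected to `seed` (6-connectivity) whose density
    exceeds `threshold`; returns a boolean occupancy mask."""
    mask = np.zeros(density.shape, dtype=bool)
    if density[seed] <= threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < density.shape[i] for i in range(3)):
                if not mask[n] and density[n] > threshold:
                    mask[n] = True
                    queue.append(n)
    return mask

# Toy grid: a dense 4x4x4 block plus one isolated dense voxel that a global
# threshold would keep but region growing discards.
grid = np.zeros((16, 16, 16))
grid[4:8, 4:8, 4:8] = 30.0
grid[14, 14, 14] = 30.0
print(int(region_grow(grid, (5, 5, 5), 15.0).sum()))  # 64
```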
2307.09665
Anticipating Technical Expertise and Capability Evolution in Research Communities using Dynamic Graph Transformers
The ability to anticipate technical expertise and capability evolution trends globally is essential for national and global security, especially in safety-critical domains like nuclear nonproliferation (NN) and rapidly emerging fields like artificial intelligence (AI). In this work, we extend traditional statistical relational learning approaches (e.g., link prediction in collaboration networks) and formulate a problem of anticipating technical expertise and capability evolution using dynamic heterogeneous graph representations. We develop novel capabilities to forecast collaboration patterns, authorship behavior, and technical capability evolution at different granularities (e.g., scientist and institution levels) in two distinct research fields. We implement a dynamic graph transformer (DGT) neural architecture, which pushes the state-of-the-art graph neural network models by (a) forecasting heterogeneous (rather than homogeneous) nodes and edges, and (b) relying on both discrete -- and continuous -- time inputs. We demonstrate that our DGT models predict collaboration, partnership, and expertise patterns with 0.26, 0.73, and 0.53 mean reciprocal rank values for AI and 0.48, 0.93, and 0.22 for NN domains. DGT model performance exceeds the best-performing static graph baseline models by 30-80% across AI and NN domains. Our findings demonstrate that DGT models boost inductive task performance, when previously unseen nodes appear in the test data, for the domains with emerging collaboration patterns (e.g., AI). Specifically, models accurately predict which established scientists will collaborate with early career scientists and vice-versa in the AI domain.
Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Robin Cosbey, Svitlana Volkova
2023-07-18T22:17:07Z
http://arxiv.org/abs/2307.09665v1
Anticipating Technical Expertise and Capability Evolution in Research Communities using Dynamic Graph Transformers ###### Abstract The ability to anticipate technical expertise and capability evolution trends globally is essential for national and global security, especially in safety-critical domains like nuclear nonproliferation (NN) and rapidly emerging fields like artificial intelligence (AI). In this work, we extend traditional statistical relational learning approaches (_e.g._, link prediction in collaboration networks) and formulate a problem of anticipating technical expertise and capability evolution using dynamic heterogeneous graph representations. We develop novel capabilities to forecast collaboration patterns, authorship behavior, and technical capability evolution at different granularities (_e.g._, scientist and institution levels) in two distinct research fields. We implement a dynamic graph transformer (DGT) neural architecture, which pushes the state-of-the-art graph neural network models by (a) forecasting heterogeneous (rather than homogeneous) nodes and edges, and (b) relying on both discrete- and continuous-time inputs. We demonstrate that our DGT models predict collaboration, partnership, and expertise patterns with 0.26, 0.73, and 0.53 mean reciprocal rank values for AI and 0.48, 0.93, and 0.22 for NN domains. DGT model performance exceeds the best-performing static graph baseline models by 30-80% across AI and NN domains. Our findings demonstrate that DGT models boost inductive task performance, when previously unseen nodes appear in the test data, for the domains with emerging collaboration patterns (_e.g._, AI). Specifically, models accurately predict which established scientists will collaborate with early career scientists and vice-versa in the AI domain. Dynamic Graphs, Transformers, Graph Neural Networks, Proliferation ## I Introduction Monitoring technical expertise and capability evolution globally is an extremely challenging but highly desired task. This is especially true for critical national security domains like artificial intelligence (AI) and nuclear nonproliferation (NN). It is also important to know when new technical capabilities emerge globally, when scientists start or stop publishing about specific technologies, when new collaborations (_e.g._, international and multidisciplinary) are established, when industry and academic partnerships emerge, and when publication behaviours for scientists of interests have a potential to transform the national security mission, specifically by: * providing understanding of how publicly available data could be used to monitor, forecast, and reason about potential proliferation and adversarial AI technologies globally [1]; * assuring quality, scale, and timeliness required for operational monitoring capability; * moving away from traditional reactive analyses and taking a proactive posture. To create this operational capability, we developed and validated a dynamic graph transformer (DGT) network, a novel deep-learning architecture that leverages attributes from both graph neural networks (GNNs) and Transformer models, for forecasting nodes and edges given discrete- or continuous-time historical inputs (_e.g._, publication behavior, collaboration patterns). We construct these inputs from digital scholarly publications that capture scientific knowledge development and collaboration patterns across disciplines, e.g., artificial intelligence, nuclear science. 
They provide new insights on how careers evolve, how collaborations drive scientific discovery, and how scientific progress emerges, which enables researchers to gain a deeper understanding of the relationships and dynamics within the scientific community [2, 3]. Unlike any other work, our DGT models learn from dynamic _heterogeneous_ structured representations of scientific collaborations, capability, and expertise evolution patterns at multiple levels of granularity (_e.g._, at scientist and institution). In contrast to predicting missing edges in a static graph, our models forecast the temporal edges in a dynamic graph. For example, the model predicts which two scientists will collaborate in the next month, while it predicts which emerging technical capability a scientist will publish on in the next year. To understand the advantages and limitations of DGT models, we performed an in-depth analysis and comparison of performance across GNN models, types of relations (scientist-to-capability, scientist-to-scientist, etc.), and exogenous variables (country, venues, etc.). First, we test the ability of the models to generalize across Nuclear nonproliferation (NN) and Artificial Intelligence (AI) domains with diverse publication characteristics. We noticed that the DGT models trained with discrete-time and continuous-time dynamic graphs outperform the best-performing static graph baseline models by 30-80% in the Nuclear nonproliferation and Artificial Intelligence domains, respectively. Second, we demonstrate that our DGT models can predict edges both for nodes they have seen during training and for "unseen" nodes they will encounter once deployed, which is critical for operational capability. For example, models generalize predictions to scientists who focus on emerging research topics in Artificial Intelligence, or to newcomer scientists who engage in collaborations with veteran scientists in Nuclear nonproliferation. Third, our detailed performance analysis suggests that collaborations across scientists and institutions within the same country (domestic) are easier to anticipate than cross-country collaborations (international); collaboration patterns within the United States are easier to anticipate than those outside the United States, with collaborations from China being the most difficult to forecast in the nuclear domain. The models also make highly accurate predictions for highly prolific, influential and interdisciplinary scientists. ## II Related Work In this section, we summarize prior work on dynamic graph modeling approaches. Previous research primarily focused on three tasks: node classification, edge prediction, and graph classification [4]. Edge prediction problems are well studied under two settings: interpolation and extrapolation. While interpolation focuses on predicting missing links in the past, extrapolation focuses on predicting future links and is more challenging. Our work focuses on extrapolation in both discrete- and continuous-time dynamic graphs. An example extrapolation task is to predict who a given scientist will collaborate with next. We describe the recent dynamic graph models that perform these link prediction tasks in Section II-A and highlight the most relevant works closest to our problem domain in Section II-B. ### _Graph Neural Networks vs. Transformers_ Most existing dynamic graph models use graph neural networks (GNN) and recurrent neural networks (RNN) to predict links in dynamic graphs [4, 5, 6].
RNNs and GNNs are used jointly to learn the temporal graph sequence and graph structural information, respectively. For example, GCRN [7] and EvolveGCN [5] use RNN and graph convolution neural network (GCN) [8] to learn from discrete-time graph snapshots. JODIE [9] extended the RNN models to learn dynamic embeddings from a sequence of temporal interactions. However, most of these methods are limited to _homogeneous_ (single-relational) dynamic graphs and do not handle multiple types of nodes and edges or other node features prevalent in the dynamic _heterogeneous_ graphs. More recently, some studies focused on predicting the future links in dynamic heterogeneous graphs [10, 11]. Jin et al. proposed the Recurrent Event Network (RE-Net) to predict future links in discrete-time dynamic graphs [10]. Rossi et al. proposed the Temporal Graph Network (TGN) to predict future edges in a continuous-time dynamic graph [11]. In contrast to RE-Net, TGN accepts a sequence of timed edges as input to learn time-aware node embeddings. Both RE-Net and TGN models use RNN to handle node and edge updates. However, RNNs do not perform well when the number of timesteps increases in the temporal link prediction tasks [12]. We benchmark novel DGT models against the RE-Net and TGN approaches. Transformers achieved great success in a broad class of machine-learning (ML) problems across multiple data modalities such as language [13] and vision [14], and recently on graphs [15]. For example, Graphormer [16] and GraphTransformer [17] use the Transformer architecture [18] to implement message aggregation and positional encoding in graphs. However, Graphormer is evaluated on small molecule graphs and GraphTransformer only extracts features from the one-hop neighborhood. GraphBERT [19] and TokenGT [20] transform a graph into node and edge sequences and feed them into Transformers. TokenGT has been shown to be more expressive than all message-passing GNNs and outperforms GraphTransformer in a standard large-scale graph regression benchmark. However, when graph topology is important to the downstream prediction task, both GraphBERT and TokenGT can perform poorly since they do not take advantage of it. While Transformer-based methods applied on graphs show a clear performance advantage over other GNN methods, most of these methods are limited to static graphs [16, 21]. Several works combined a self-attention mechanism of the Transformer architecture with GNN and demonstrated performance advantage over message passing GNNs [22]. For instance, DYSAT [23] suggests using the self-attention mechanism for the aggregation of temporal and structural information. TGAT [24] first applies self-attention to the temporally augmented node features after encoding the temporal information into the node feature. However, most of these methods make graph-specific architectural assumptions [20]. Cong et al. [12] used a Transformer-based method in their approach to learn from temporal-union graphs extracted from dynamic graph snapshots. However, this method has not been evaluated on dynamic heterogeneous graphs. In this work, we advance GNN and Transformer architectures to operate on both discrete- and continuous-time dynamic heterogeneous graphs. Specifically, we use a self-attention mechanism to learn dynamic graph- and node-level changes and GNN to learn structural information in both global and local neighborhoods. We do not make any domain-specific architectural assumptions.
DGT models jointly learn from both temporal edge features and heterogeneous graph neighborhoods. ### _Academic Graph Modeling_ Previous work in the science of science domain primarily focused on co-citation or co-authorship networks (_e.g.,_ predicting missing edges in a co-citation network [25] or a co-authorship network [26]). DBLP [25] and ogbl-citation2 [26] are two commonly used benchmarks for link prediction in static academic networks. Similarly, HEP-PH [27] is a benchmark for link prediction in a co-citation network, but takes into account a dynamic graph setting. However, these datasets and benchmarks are not appropriate to use for evaluation on edge forecasting tasks on _dynamic heterogeneous_ graphs. Most recently, Hu et al. introduced a new OGB-LSC challenge benchmark for graph ML problems [28]. OGB datasets are extracted from the Microsoft Academic Graph [29]. Several recent works show the usefulness of the OGB datasets for large-scale graph learning [28]. One of the tasks in the challenge is to predict the missing subject categories of scientific articles in a heterogeneous academic graph. The top performing solutions in the challenge used different variants of message-passing based GNNs (e.g., R_UniMP [30], MDGNN [31] and MPNN&BGRL [32]). They also highlighted the importance of relation-aware node sampling in the heterogeneous graph learning. While these solutions provide more insights to academic graph modeling, they are limited to static graphs, and the corresponding node classification tasks. Apart from these ad-hoc prediction problems, there have been very few attempts to model global expertise and capability evolution in large-scale dynamic heterogeneous academic graph data. These graphs contain multiple types of nodes (e.g., scientists, institutions, capabilities) and edges (e.g., collaboration, partnership) that evolve over time [3]. For example, these graphs capture evolving interaction patterns across scientists that may exhibit the research trends and traits of academic communities. In addition, the temporal link prediction in dynamic heterogeneous academic graphs provides an ideal benchmark to test how well machine learning models generalize to the unseen test distributions, often called as _spatio-temporal distribution shifts_[33]. ## III Methodology This work leverages millions of research articles between 2015 and 2022 in two research domains (NN and AI) to understand and reason about the evolution of technical expertise and capabilities globally. For that we propose DGT, a new deep-learning method that integrates GNN with a Transformer architecture to forecast how technical expertise and capability development emerge through a combination of multiple interconnected factors. For example, our model learns features of human behavior extracted from historical collaborations, partnerships, and capability evolution to answer operationally relevant questions about proliferation risk assessment globally and competition in developing AI technologies. Specifically, we seek to answer research questions about scientific collaborations, partnerships, and capability development when studying dynamic graph model performance. * _Can we model varied patterns of behavior underlying the way scientists collaborate?_ Previous work [2] has shown that team-authored publications are more popular in terms of citations than single-authored publications. In our model we study scientists who are engaging more or less in collaborations. 
We model collaborations that occur within tightly connected groups of scientists, with some engaging within the same institution and others across multiple institutions. * _Can we model individuals and institutions making new partnerships?_ In our proposed model, we focus on studying researchers at top universities who are more likely to collaborate with scientists at other top universities. * _Can we model the differences in technical capabilities that scientists research?_ We model scientists who adopt the most recent and emerging research trends and disrupt science by developing novel technologies, as well as other scientists who generate more theoretical innovations in contrast to applied technologies. ### _Problem Formulation_ We consider dynamic heterogeneous graphs \(G\) consisting of scientists, institutions, and capabilities as nodes \(N\). A pair of nodes is connected at a timestamp \(t\) by a directed edge that is denoted by a quadruplet (\(N_{i}\), \(E\), \(N_{j}\), \(t\)). Edges are of multiple types \(E\) such as collaboration \(E_{c}\) (_scientist-to-scientist_), partnership \(E_{p}\) (_scientist-to-institution_), and research focus \(E_{r}\) (_scientist-to-capability_). An ordered sequence of quadruplets represents the dynamic heterogeneous graph. In contrast to predicting missing edges in a static graph (_interpolation_), we need to predict the future edges in a dynamic graph (_extrapolation_). As these edges occur over multiple timestamps in the future, we treat the prediction task as a multistep inference problem (see Figure 1 and Definition III.1). Thus, we need to develop methods that can extrapolate the heterogeneous graph structure over future timestamps [3, 10]. Such predictions are extremely useful to forecast emerging science trends in terms of global expertise and capability development in domains like NN and AI. Fig. 1: Problem definition - forecasting technical expertise and capability evolution using structured dynamic representations. **Definition III.1**.: Given a graph \(G_{t}\) that represents the ordered sequence of quadruplets until time \(t\), the task is to forecast the graph (\(G_{t:t+m}\)) over multiple future timesteps \(m\). \(G_{t}\) can be represented as _discrete-time dynamic graphs_ (_e.g.,_ sequences of static graph snapshots) and _continuous-time dynamic graphs_ (_e.g.,_ timed lists of heterogeneous edges). ### _Dynamic Graph Transformers_ In this work, we introduce DGT to operate on dynamic heterogeneous graph inputs. DGT learns latent node representations from both discrete-time and continuous-time dynamic graphs that consist of heterogeneous node and edge types. We combine GNN and Transformers to learn time-aware and structure-aware node representations (Figure 2). Our objective is to map dynamic graphs to node embeddings that can be useful for the temporal link prediction task [4]. These node embeddings should contain multiple types of information (_e.g.,_ heterogeneous nodes and edges, node and edge attributes, and graph temporal dynamics) captured by nodes and their structural neighborhood changes. We describe the discrete-time and continuous-time model architecture in Sections III-B1 and III-B2, respectively. #### Iii-B1 Discrete Model Architecture Given a sequence of discrete-time graph snapshots, \(G=G_{(1)},G_{(2)},...,G_{(t-1)}\), the pretraining objective is to generate the next graph snapshot \(G_{(t)}\) as shown in Equation 1.
\[p(G)=\prod_{t}\prod_{(N_{i},E,N_{j})\in G}p(N_{i},E,N_{j}\mid G_{(t-N:t-1)}) \tag{1}\] Our assumption is that all edges in \(G_{t}\) depend on the edges at the previous \(N\) timesteps. We improve the RE-Net [10] architecture's ability to learn the temporal and structural information from the discrete graph snapshots. The proposed DGT Discrete (DGT-D) model consists of a neighborhood aggregator module and an embedding module as shown in Figure 2. Fig. 2: DGT architecture with DGT-D and DGT-C variations to handle discrete-time and continuous-time inputs. The neighborhood aggregator module includes the Relational Graph Convolutional Network (RGCN) layers to learn across multiple relations within a graph snapshot. This neighborhood aggregator outputs latent node representations that capture the \(k\)-hop neighborhood information. This allows the model to capture long-range context dependency present across nodes. DGT-D advances the RE-Net decoder to learn graph- and node-level temporal dependencies. We use the Transformer architecture [18] with a self-attention module and a position-wise feed-forward network to learn across latent representations from each graph snapshot. This helps to alleviate over-smoothing and over-squashing problems present in the recurrent event decoder [4]. The input to the self-attention module is \(H\in\mathbb{R}^{n\times d}\), where each \(\mathbb{R}^{1\times d}\) row represents a \(d\)-dimensional graph (\(H_{G}\)) or node (\(H_{N}\)) level representation. For the global representation \(H_{G}\), we use an element-wise max-pooling operation over all node representations within a graph snapshot. We construct the local representations \(H_{N}\) for each node by aggregating neighborhood information from the node \(N\). While global representations summarize graph-level information from the discrete graphs, the local representations capture the events related to the node. #### Iii-B2 Continuous Model Architecture Given a sequence of time-stamped edges \(G=\{\{N_{i}(1),E(1),N_{j}(1)\},...,\{N_{i}(t-1),E(t-1),N_{j}(t-1)\}\}\), the pretraining objective is to predict the probability of an edge in the next timestep \(t\) as shown in Equation 2. \[p((N_{i},N_{j})\mid t)=p\big((z_{N_{i}}||z_{N_{j}})\mid\{\{N_{i}(1),E(1),N_{j}(1)\},...,\{N_{i}(t-1),E(t-1),N_{j}(t-1)\}\}\big) \tag{2}\] \(z_{N_{i}}\) represents the hidden node representation for node \(N_{i}\), which aggregates neighborhood information from the continuous-time dynamic graph. We assume that the presence of edge \((N_{i},E,N_{j})\) in the next timestep depends on the edge updates in the \(k\)-hop neighborhood of the node endpoints at the previous timesteps. We represent the continuous-time dynamic heterogeneous graph as a list of timestamped edges. We rely on the TGN architecture [11] to inform the implementation of our DGT Continuous (DGT-C) model. Note that TGN generalizes to a majority of existing graph message passing-type architectures proposed for learning on both static and dynamic graphs. This model consists of a node memory module and an embedding module. The node memory module learns to compress and memorize the historical events as node states. Node states are updated upon each event associated with the respective node. For example, when there is an interaction between nodes \(N_{i}\) and \(N_{j}\) at timestamp \(t\), node states \(\mathbb{M}_{N_{i}}\) and \(\mathbb{M}_{N_{j}}\) are updated. DGT-C advances the TGN node memory module by introducing a self-attention module and a position-wise feed-forward network to update the node states upon new events.
The input to the self-attention module is a sequence of messages (_msg_) computed from the temporal events \((N_{i}(t),E(t),N_{j}(t))\in B(t)\) as shown in Equation 3. \[msg(t)_{N_{i}}=\mathbb{M}(t-1)_{N_{i}}\oplus\mathbb{M}(t-1)_{N_{j}}\oplus\Delta t\oplus(N_{i}(t),N_{j}(t)) \tag{3}\] \(\Delta t\) is the time difference between the current and previous event associated with the node \(N_{i}\). The output of the self-attention module is the updated node state \(\mathbb{M}(t)_{N_{i}}\). The embedding module takes the node states as the input and produces the time-aware node representations \(z_{i}(t)\) as shown in Equation 4. \[z_{i}(t)=\sum_{j\in\eta_{i}^{*}([0,t])}TGAT(\mathbb{M}(t)_{i},\mathbb{M}(t)_{j},i,j) \tag{4}\] We use temporal graph attention (TGAT) to implement the embedding function [11]. TGAT uses self-attention to aggregate information from the most important \(k\)-hop node neighborhoods and Time2Vec to encode temporal information [34]. TGAT avoids the memory staleness problem when handling sparse dynamic graph signals [11]. ## IV Data Collection and Processing We evaluate our approaches to the edge prediction problem using data from the AI and NN domains. In this section, we describe four datasets, two for each domain, and the methods of data collection and preprocessing. ### _Artificial Intelligence Data_ The first domain we focus on is AI. For this use case, we constructed two separate graph networks using public, open-source conference and journal publication data. The result is the computational linguistics dataset from the Association for Computational Linguistics (ACL) and the ML dataset from ML conferences. #### Iv-A1 Computational Linguistics The first of the two datasets relating to the AI domain is a collection of publications from the ACL. We collected 51K papers published between 1965 and 2021 from the ACL Anthology.1 This total set of publications was then filtered to include only papers containing at least one AI keyword.2 The keyword list was curated by subject matter experts based on the frequency of keywords and coverage in the documents. Additionally, papers published before 2010 were removed to reduce sparsity and noise. These two filtering steps resulted in \(33K\) papers collected. Using the GROBID [35] approach, we extracted titles, abstracts, author names, author institutions, and location details. If location data were provided, we parsed the city/state and country names. After extracting the collaboration interactions, the resulting dataset contained \(35.6K\) unique authors from \(7.5K\) unique institutions. The final dataset included \(478K\) edges across training, validation, and test splits. Footnote 1: [https://aclanthology.org](https://aclanthology.org) Footnote 2: AI keywords: adversarial, causal, clustering, dialog, ethic, explanation, fair, genetic algorithm, interpretability, interpretable, language model, machine translation, glia, question-answer, reinforcement learning, sentiment, summarization, transfer learning, translation model, transparent. #### Iv-A2 Machine Learning In addition to the ACL dataset, we collected ML-related publications from the International Conference on Machine Learning (ICML), the International Conference on Learning Representations (ICLR), and the Conference on Neural Information Processing Systems (NeurIPS), merging them into a general ML dataset to complement the AI domain use case.
We selected \(6K\) papers from ICML during the years of 2009 to 2021, \(2.5K\) papers from ICLR during the years of 2016 to 2021, and \(10.5K\) papers from NeurIPS. The GROBID extraction process, as with ACL, was used to identify all necessary metadata. We manually performed entity resolution to identify duplicate scientist and institution nodes and merged them across publication venues. The final ML dataset included \(48.5K\) unique authors and \(1.8K\) unique institutions, with a total of \(210K\) edges across training, validation, and test sets. In comparison to the ACL dataset, the ML dataset is less dynamic, with changes happening to the graph once a year. This dataset is also characterized as having more emerging interactions, meaning the links between nodes in the graph are overwhelmingly between previously unseen scientists, capabilities, and institutions. ### _Nuclear Nonproliferation Data_ The second domain we are interested in is NN. We constructed two datasets for this domain: from Web of Science (WoS) and Scopus. WoS and Scopus are multidisciplinary databases containing reference and citation records from over 250 academic fields. To create NN-specific datasets from multidisciplinary databases, we used a set of 51 keywords or phrases related to the topic of nuclear science and nonproliferation, which have been curated by subject matter experts.3 Our subsampled NN datasets contained at least one domain-specific keyword in the title or abstract. Footnote 3: NN keywords: centrifuge, chemical conversion, chemical engineer, chemical extraction, civil engineer, closed nuclear fuel cycle, computational physics, criticality test, depleted uranium, dose coefficient, electrical engineer, enriched uranium, fissile material, fission fragment, fission product, fissionable material, fuel cycle, fuel rod, hydrodynamic, international community, ionizing radiation, low enriched uranium, low linear energy transfer, military use, natural uranium, nuclear, nuclear accident, nuclear emergency, nuclear facility, nuclear fuel, nuclear incident, nuclear installation, nuclear material, nuclear security, plutonium, potential alpha energy, radiation emergency, radiation risks, radiation safety, radiation source, radioactive equilibrium, radioactive half-life, radioactive material, radioactive source, radioactive waste, radiological emergency, radiological protection, research reactor, spent fuel, uranium. #### Iv-B1 Web of Science The first dataset in the NN domain is a collection of publications from WoS.4 We sampled \(531K\) scientific publications published between 2015 and 2021. We filtered this collection to include only publications that contained one or more NN keywords in the title or abstract. Next, we choose to constrain the dataset to include publications authored by scientists with 20 or more papers published in WoS during this time. This filtering step greatly reduced the number of total papers from \(531K\) to \(48K\) and reduced the sparsity of the dataset, increasing the likelihood for scientist nodes to have more than one link with another scientist, institution, and capability. The final collection of data from the WoS contained \(6.7K\) authors and \(7.6K\) institutions. #### Iii-B2 Scopus In addition to the WoS papers, we collected \(105K\) publications from the Scopus database published between 2015 and 2021. We performed the same keyword filtering step to select publications containing at least one of the NN keywords in the title or abstract. 
Additionally, we filtered the resulting dataset to include only publications authored by scientists that had 15 or more papers published in Scopus during this time. The final Scopus dataset contained \(3.3K\) unique scientists and \(7.2K\) unique institutions. This dataset was highly dynamic, with timesteps at a monthly granularity. The majority of the interactions were repeated, so the links in the graph were primarily across groups of scientists and the same capabilities. ### _Data Augmentation_ We performed additional preprocessing steps to support our analysis. First, we labeled scientists as incumbent or newcomer nodes based on their frequency of appearances in the dataset. For example, when a scientist published her first article in the computational linguistics (CL) community, we labeled her as a newcomer. Incumbent scientists published multiple times in the same community. Similar to the above classification, we labeled the institution partnerships as new or repeated. For example, a scientist may establish a new partnership with an institution in the next publication, or might continue with the previously established partnership with an institution. Finally, we characterized the publication trends on the adoption and persistence of certain research foci in the respective research communities [36]. When a scientist adopted a new research focus to work on for the first time, we labeled it as a new capability. Otherwise, a scientist maintained the same research focus across publications. Figure 3 shows the visualization of the academic graphs with different types of nodes in the AI and NN domains. Fig. 3: Visualization of the dynamic heterogeneous graphs across four datasets with diverse scientist, institution, and capability nodes with a focus on the subgraphs extracted from the top 10 most active scientists. Note, we removed all edge timestamps to reduce visual clutter. There are more new scientists in the AI community who often collaborate with the incumbent scientists. In contrast, the NN community is more densely structured with many incumbent scientists who participate in new and repeated collaborations. For example, more than 95% of collaborations occurred between two incumbent scientists in the WoS community in the NN domain. As a part of our metadata extraction process, we extracted location information directly from the PDF (available for the ACL and ML datasets) or citation records (available for the WoS and Scopus datasets). The quality of extracted location data varied greatly across the datasets. Manual inspection was done to verify valid country names. City, state, or province names were mapped to country names whenever possible (_e.g.,_ Beijing to China). Publications missing location information, containing incorrect location information, or containing location names that could not be mapped to a specific country were labeled as Other. ## V Experiments In this section, we describe the experimental setup and discuss the results from our experiments. We evaluate the performance of the proposed neural architecture developed to predict edges on dynamic heterogeneous graphs. These graphs encode collaboration (_scientist-to-scientist_), partnership (_scientist-to-institution_), and expertise (_scientist-to-capability_) edges. Similar to previous work [4, 37], we formulate this temporal link prediction task as a ranking problem. The goal is to rank potential nodes that would be present in a future graph.
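To make the ranking formulation concrete, the sketch below shows how a single test quadruplet contributes to the reciprocal rank and Hits@K metrics used later, once the model has scored the true candidate against sampled negative candidates; the scores are synthetic and the function name is hypothetical. Averaging the reciprocal ranks over all test quadruplets yields the MRR values reported below.

```python
import numpy as np

def rank_metrics(pos_score, neg_scores, k=10):
    """Rank one positive candidate against sampled negatives (higher score is
    better); returns (reciprocal rank, hit@k)."""
    rank = 1 + int(np.sum(np.asarray(neg_scores) > pos_score))
    return 1.0 / rank, float(rank <= k)

# Example with 200 sampled negatives: the positive is outranked by exactly two
# of them, so it sits at rank 3.
rng = np.random.default_rng(0)
negatives = rng.normal(size=200)
rr, hit = rank_metrics(np.sort(negatives)[-3] + 1e-6, negatives, k=10)
print(rr, hit)  # 0.333..., 1.0
```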
Section V-A describes our evaluation metrics and baselines. First, we report model performance breakdown for the list of tasks with an increasing order of complexity in Section V-B. We present inductive task performance when previously unseen nodes could appear in the test data [3], and we summarize performance across different edge types in Section V-C. Finally, we provide an in-depth analysis of model performance across important data factors, such as international vs. domestic collaborations, international capability development, collaboration and partnership behavior of scientific elites, cross-disciplinary collaborations, and industry vs. academic partnerships, to better understand how and why DGT models behave in a certain way in the supplementary materials. ### _Experimental Setup_ Training, validation, and test data consist of temporal edge lists in the format of quadruplets. These quadruplets contain the head node, relation type, tail node, and timestamp. Given a head node and relation type, the model predicts the tail node at a given timestep. For example, the model ranks all other scientists that would collaborate with a given scientist in a future timestep. Note that for comprehensive evaluation, we report model predictions for the head node given the tail node and the relation type at a given timestep. We split the dataset by timestamp to have nonoverlapping records between training, validation, and testing splits as shown in Table I. #### V-A1 Evaluation Metrics Forecasting temporal edges presents a much harder challenge due to the number of all possible candidate nodes. For example, the model needs to evaluate all tail nodes given the head node in a quadruplet and repeat the process across all timesteps in the testing period. Similar to previous work [3], we follow a standard protocol of evaluating the link prediction performance by limiting the evaluation to a set of candidate nodes [28]. For each validation or test quadruplet, we perturb the tail node with 200 randomly sampled entities that do not appear in any of the training, validation, or test sets. Thus, models rank 201 candidates (consisting of 1 positive and 200 negative candidates) for each quadruplet. We use these ranks to calculate top \(k\) positive candidates among the corresponding negative candidates (Hits@K) as well as the mean reciprocal rank (MRR) metrics. #### V-A2 Baselines We implement the eight most representative state-of-the-art GNN baseline models, focusing on two main approaches: shallow and compositional encoding models. Shallow encoding models map each entity to a unique embedding vector. These methods rely on embedding lookup during inference, and can only make predictions for the nodes observed during training. For example, TransE [38] and RotatE use head-to-tail node relations to compute the plausibility of triples based on a distance function (_e.g.,_ Euclidean distance between entities). ConvE and ComplEx [39] exploit similarity of latent features. RGCN [40] uses a Graph Convolutional Network-based entity and relation encoder to learn entity representations. 
NodePiece [41] represents each node as a set of top-k nearest anchor nodes and \(m\) unique relation types around the node. Anchor nodes and relation types are encoded in a node representation that can be used in any downstream prediction task for any entity, including those unobserved during training. For baseline experiments we use the Pykeen library [42] and construct a static graph from the training data. We follow the best practices introduced in the Pykeen library (_e.g.,_ training approach, loss function, and the explicit modeling of inverse relations) [43]. For example, we follow the _stochastic local closed world assumption_, where a random candidate set of triplets that are not part of the graph is considered as negative candidates. In addition, we compare our proposed model performance with the original RE-Net and TGN models. Compared to our DGT model, these models rely on gated recurrent unit sequence layers instead of Transformer layers. For the RE-Net model and its variants, we supply a batch size of 1,024, a hidden dimension of 200, and maintain default parameters listed in the original paper [10]. For the TGN model and its variants, we supply a batch size of 200 and a memory dimension of 172. All other parameters are unchanged from the original TGN paper [11]. \begin{table} \begin{tabular}{|c|l|r|r|r|} \hline _Dataset_ & _Split_ & _Time_ & _\# Nodes, K_ & _\# Edges, K_ \\ \hline \hline \multirow{3}{*}{**ACL**} & Training & 2010-2018 & 33 & 335 \\ \cline{2-5} & Validation & 2019 & 12 & 79 \\ \cline{2-5} & Testing & 2020-2021 & 10 & 64 \\ \hline \multirow{3}{*}{**ML**} & Training & 2010-2019 & 31 & 119 \\ \cline{2-5} & Validation & 2020 & 15 & 57 \\ \cline{2-5} & Testing & 2021 & 8 & 34 \\ \hline \multirow{3}{*}{**WoS**} & Training & 2015-2018 & 17 & 456 \\ \cline{2-5} & Validation & 2019 & 10 & 121 \\ \cline{2-5} & Testing & 2020-2021 & 8 & 71 \\ \hline \multirow{3}{*}{**Scopus**} & Training & 2015-2018 & 9 & 300 \\ \cline{2-5} & Validation & 2019 & 5 & 85 \\ \cline{2-5} & Testing & 2020-2021 & 4 & 46 \\ \hline \end{tabular} \end{table} TABLE I: Characteristics of AI and NN datasets used in our experiments for training, validation, and testing. ### _Transductive vs. Inductive Tasks_ We evaluate model performance separately on edges between nodes that are _observed_ during training (transductive setting), and edges with at least one _unobserved_ node (inductive setting). We group edges in the test data into these categories. In a transductive setting, edges connect two nodes observed in both training and testing data. We group transductive edges into the "first-time" and "repeated" categories based on edge frequency. We evaluate transductive performance with repeated edges, and semi-transductive performance with first-time edges [3]. In an inductive setting, there is at least one "unseen" node for every edge. For example, a graduate student ("unseen") can publish her first paper in the CL community with her mentor ("seen"). We use these edge groups to report the forecasting performance in both settings as shown in Figure 4. Dynamic graph models (RE-Net, TGN, and DGT variants) perform significantly better than the static graph baseline models on forecasting temporal links. For example, DGT models achieve a 30%-80% performance benefit across AI and NN domains over the static graph baseline models. This confirms the previous findings [4, 10] that dynamic graph information is important for forecasting links.
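For reference, the edge grouping described at the start of this subsection (full-transductive, semi-transductive, inductive) can be written as a small classification rule over the training graph; the quadruplet field layout below is a simplified assumption.

```python
def group_test_edge(head, tail, train_nodes, train_pairs):
    """Classify a test edge by whether its endpoints and the pair itself were
    observed during training."""
    if head not in train_nodes or tail not in train_nodes:
        return "inductive"            # at least one unseen node
    if (head, tail) in train_pairs:
        return "full-transductive"    # seen nodes, repeated interaction
    return "semi-transductive"        # seen nodes, first-time interaction

train = [("a", "collab", "b", 2018), ("a", "collab", "c", 2017)]
train_nodes = {n for h, _, t, _ in train for n in (h, t)}
train_pairs = {(h, t) for h, _, t, _ in train}
print(group_test_edge("a", "b", train_nodes, train_pairs))  # full-transductive
print(group_test_edge("b", "c", train_nodes, train_pairs))  # semi-transductive
print(group_test_edge("a", "z", train_nodes, train_pairs))  # inductive
```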
However, performance varies across the different dynamic graph models and across transductive and inductive tasks. For example, DGT-D model variants consistently perform better on full-transductive tasks than on inductive tasks. Full-transductive tasks may be easier to predict than inductive tasks since the model can repeat the regular patterns observed in the training data. On the other hand, DGT-C models perform particularly well on the inductive tasks, especially in AI datasets where inductive edges are the majority (Figures (a) and (b)). Note that inductive tasks require predictions for unseen nodes. The model achieves 0.64 MRR in the ML dataset, which outperforms the best-performing baseline model with a 78% performance advantage. This shows that the Transformer models trained with continuous-time graph inputs can generalize to unseen nodes and edges. ### _Performance Analysis_ In this section, we report the performance of models across collaboration, partnership, and capability edge types. We focus on the transductive edges for this analysis to make a fair performance comparison across datasets. #### Iv-C1 Forecasting Collaboration Patterns We report the performance of forecasting collaboration links across a group of scientists. Our objective is to answer the following research questions: _What scientist will a given scientist collaborate with next? Which veteran scientist collaborates with an early career scientist? Which groups of scientists collaborate repeatedly? Which collaborations occur within tightly connected groups of scientists?_ Figures (a)-(d) report the performance of DGT model variants across the AI and NN domains. First, the DGT-C model outperforms the baseline models on the AI datasets. The model achieves 0.18 MRR on forecasting ML collaborations, outperforming the best-performing baseline with nearly a 50% performance advantage (Figure (b)). More importantly, the DGT-C model is able to predict the incumbent scientists who collaborate with newcomer scientists more accurately than the baseline models. We see 96% of the newcomer-to-newcomer collaborations across the ICLR, ICML, and NeurIPS communities in the testing period, while 20% of collaborations between incumbent and newcomer scientists appear in the testing period. For example, Sergey Levine, Percy Liang, and Zhuoran Yang are the top three scientists that publish in ML venues with newcomer scientists. The DGT-C model achieves 0.11 MRR and 0.3 Hits@10 for predicting such ML collaborations. The DGT-C performs similarly in the ACL dataset, with a 78% performance advantage over the best-performing baseline (Figure (a)). DGT-C achieves 0.34 MRR and 0.72 Hits@10 for predicting incumbent-newcomer collaborations in the ACL dataset. Second, the DGT-D and RE-Net models achieve the best performance in the WoS and Scopus datasets. For example, the DGT-D model achieves 0.36 MRR and 0.45 Hits@10 in the WoS dataset. The collaborations in the AI domain are more frequently emerging than in the NN domain (see the supplementary materials for more details). Of NN collaborations, 99% occurred across a group of incumbent scientists and the majority of such collaborations are repeated. For example, the model is able to predict a group of Japanese scientists who repeatedly publish in the WoS dataset. Hiroyoshi Sakurai is the most active scientist appearing in the testing period who repeatedly published with other scientists.
We believe the performance differences across the DGT-C and DGT-D variants are mainly due to the discrete- and continuous-time granularity of the graph inputs in the respective model variants. Models may capture different temporal graph characteristics when trained with such different graph inputs. This performance difference also explains the generalization ability of the DGT-C model, which is implemented with the Transformer architecture to handle continuous-time graph inputs. #### Iv-C2 Forecasting Partnership Patterns In this section, we investigate the performance with respect to partnership edges, between scientists and institutions. Figure 5 reports the breakdown in performance across all datasets. Thus, we answer the following research questions: _What institution will a given scientist partner with in a research collaboration? Who are the scientists partnered with an institution in the next publication? Are authors partnered with multiple institutions harder to forecast? How do the models perform on forecasting partnerships across large and small organizations?_ We find that in the case of both AI datasets, the best-performing model for partnership edges is DGT-C. All models also suffer a loss in performance when predicting on mirrored edges, in this case institution to scientist. When breaking down the edge-specific performance further into full-transductive, semi-transductive, and inductive edges, we find that the DGT-C outperforms all other models in all cases for the ACL dataset. We see the same pattern in the ML dataset, except for the case of full-transductive edges, where RE-Net outperforms DGT-C. The small percentage of full-transductive edges in the ML dataset may be the reason for this difference between overall and edge-type specific performance. Overall, the DGT-C is the only model to consistently perform well across all types of partnership edges in ACL and across the majority of partnership edges in the ML dataset, despite struggling to predict mirrored edges in the ML dataset. Fig. 4: Forecasting performance in transductive and inductive settings. We measure the performance on predicting edges in three groups: 1) full-transductive (seen nodes with repeated interactions); 2) semi-transductive (seen nodes with first-time interactions); and 3) inductive (interactions with an unseen node). The task complexity increases in the same order. Note that the missing bars in the inductive task are due to the static graph baseline methods that are not able to perform the inductive tasks. Percentages of inductive and transductive edges are shown on the figure titles for each dataset. In contrast, across both NN datasets, the best-performing models are the RE-Net and DGT-D. When breaking down the performance further into types of scientists and institutions, we notice this performance increase only holds in the case of full-transductive partnerships. For semi-transductive and inductive edges, the TGN is the best-performing model. However, all models struggle to predict new partnership edges compared to the full-transductive task. Because there are so few inductive edges in the NN datasets, only the semi-transductive performance has a significant effect on the overall performance. Due to there being a majority of full-transductive edges in both NN datasets, this performance advantage by the TGN on semi-transductive edges does not hold for the overall performance. 
We also note that in the Scopus dataset, institution-to-scientist edges are easier to predict than scientist-to-institution edges. In addition, we conducted a fine-grained analysis on the best-performing model, DGT-C, on the ACL dataset. In order to answer the question of whether institution size has an effect on model performance across partnership edges, we ranked each institution by the number of individual collaborators within the dataset. One thing to note is that the majority of the institutions in the ACL dataset have fewer than 10 collaborators. We found that the DGT-C achieved high performance (0.9 MRR or higher) among a set of smaller institutions with under 100 collaborators, namely INRIA, University of California, and Xi'an Jiaotong University. However, the hardest institutions to predict (0.04 MRR or lower) were also smaller institutions with under 100 collaborators, such as Institute for Human and Machine Cognition, University of Bucharest, and Universitat Politecnica de Catalunya. The largest institutions in the ACL dataset are Carnegie Mellon University, University of Edinburgh, and Peking University. The DGT-C model on average achieves 0.5 MRR for each of these institutions, which is the same as the overall average MRR for partnership edges on ACL. The DGT-C model does reliably perform well on larger institutions; however, because several smaller institutions have a high performance advantage and several larger institutions have a performance disadvantage, it seems as though in many cases the size of the institution does not have a direct correlation with performance. Fig. 5: Transductive forecasting performance breakdown by different directed edge types. #### Iv-A3 Forecasting Authorship Behavior In this section, we report the performance of forecasting links between scientists and capabilities. We answer the following research questions: i) _What is the next capability a scientist will publish on?_ ii) _Which scientists will publish on a given capability?_ iii) _Which capabilities are harder to forecast?_ To answer these questions, we predict either the head or tail node given a test quadruplet that consists of a scientist and a capability. For example, we rank all candidate capabilities given a scientist, or rank all scientists given a capability. Figures 4(a)- 4(d) report the forecasting accuracy in the AI and NN datasets. We have four observations from these figures. First, the DGT-D model consistently outperforms the rest of the baselines on predicting authorship behavior in all datasets except ML. DGT-D achieves 0.54 and 0.89 MRR in the ACL and WoS datasets, respectively. We also noticed that DGT-D and RE-Net models have comparable performance across multiple metrics, but the DGT-D predictions are much closer to the ground truth. For example, the model predicts _language model_ and _machine translation_ as the most popular topics in the ACL dataset, while predicting _radioactive_ and _hydrodynamic_ as the most popular topics in the WoS dataset. Second, the DGT-C model has a performance advantage in the ML dataset (Figure 4(b)). For example, the model achieves 0.98 MRR on predicting which ML capabilities scientists will publish on. The model predicts many newcomer scientists
publishing under _reinforcement learning_, _adversarial_, and _language model_ topics in the ML dataset. The ML capability evolution has unique characteristics compared to the rest of the datasets, as 84% of scientist-capability edges in the testing period are newly formed, in contrast to \(<1\%\) of similar edges in the WoS and Scopus datasets. As we keep capabilities the same across both training and testing periods, the model is able to generalize the predictions for scientists unseen in the training period. Third, we analyze how the models predict which scientists will publish on a given capability instead of the next capability a scientist will publish on. We noticed that the DGT-D is the best-performing model in most of the cases (Figure 4(b), 4(c) and 4(d)), but the DGT-C model predicts more accurately in the ACL dataset (Figure 4(a)). This model achieves 0.34 MRR and 0.78 Hits@10 with a two to four times performance advantage over the best-performing baseline in the ACL dataset. We see that the DGT-D model performs particularly well in the WoS and Scopus datasets. For example, the model accurately predicts the most active scientists (T. Hayat, A. Alsaedi, etc.) who publish on the _hydrodynamic_ topic in the testing period. Finally, we investigate which capabilities are more difficult to forecast than others. We rank the capabilities from the best to worst forecasting accuracy and filter out the infrequent ones. We noticed that _nuclear_, _radioactive_, _fission fragment_, and _hydrodynamic_ are the best-performing capabilities, while _nuclear accident_, _radioactive source_, _radioactive material_, and _nuclear material_ are the worst-performing capabilities in the WoS dataset. Similarly, _causal_, _adversarial_, and _transfer learning_ are the best-performing capabilities in the ML dataset, while _summarization_, _ethic_, and _natural language generation_ are the worst-performing ones. Fig. 6: Model performance (MRR) of collaboration edges between the five most frequent countries and capabilities. Left: best model performance for collaborations between countries of scientist-scientist and scientist-institution edges. Right: best model performance for scientist-capability edges. DGT-C and DGT-D are the best-performing models for ACL and WoS datasets, respectively. Blank cells indicate no edge between entities. ### _Knowledge-Informed Performance Reasoning_ We examined international and domestic collaborations in two datasets, ACL and WoS, as shown in Figures 8(a) and 8(b). Interactions between the same countries have different performances across the two domains, _e.g.,_ China-to-Japan collaborations have a higher MRR value of 0.52 in ACL compared to 0.35 MRR in WoS. Interactions involving China are the hardest for the WoS DGT-D model to predict despite China being the most frequent country. In WoS, domestic collaborations (_i.e.,_ interactions within the same country) achieve greater performance than international collaborations (see Figure 8(b)). In contrast, model performance in the ACL community varies in both domestic and international collaborations. Next, we looked at model performances across capabilities in the AI and NN domains. Figures 8(c) and 8(d) show the performance (MRR) on edges between the top five countries and capabilities (subsampled due to space) for ACL and WoS. In Figure 8(c), near perfect MRR scores for China and Japan highlight the model's ability to forecast in those expertise areas, _i.e.,_ natural language generation (nlg), reinforcement learning, summarization, and transfer learning. The 'Other' country category has interactions with all capabilities, but performance varies significantly. One reason for this is the uncertainty in country labeling. Potentially dozens of different countries are being represented by this one category.
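The country-level breakdown discussed in this subsection can be reproduced from per-edge reciprocal ranks by grouping them on the (head country, tail country) pair; the sketch below uses hypothetical records rather than the actual evaluation output.

```python
from collections import defaultdict

def mrr_by_country_pair(records):
    """records: iterable of (head_country, tail_country, reciprocal_rank).
    Returns the mean reciprocal rank per (head_country, tail_country) pair."""
    sums, counts = defaultdict(float), defaultdict(int)
    for head_c, tail_c, rr in records:
        sums[(head_c, tail_c)] += rr
        counts[(head_c, tail_c)] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

example = [("China", "Japan", 0.5), ("China", "Japan", 0.54), ("USA", "USA", 1.0)]
print(mrr_by_country_pair(example))
```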
## VI Summary and Discussions Forecasting technical expertise and capability evolution in national security domains would provide important clues for analysts to make informed decisions. In this paper, we developed the DGT network, a novel deep-learning architecture that leverages attributes from both GNNs and Transformer models, to anticipate technical expertise and capability evolution in critical national security domains like AI and NN. To this end, we trained and evaluated eight DGT models from the publicly available digital scholarly data with complex relationships across scientists, institutions, and capabilities in both discrete- and continuous-time settings. We made our graph datasets and code publicly available [3]. We show that DGT models perform well on inductive link forecasting tasks for the nodes unseen during the training. While this is useful for analysts to detect emerging scientists who work in operationally relevant disciplines, it is more challenging due to the lack of signals to track the involvement of new scientists. DGT generalizes the patterns seen in the training data to detect which veteran scientists attract which new scientists and vice-versa. We noticed that the models capture different collaboration patterns across the AI and NN domains. For example, the models learn more from the tightly connected cliques of scientists in the NN domain than from the hierarchical structure present in the AI domain. Predictions of collaboration patterns for highly influential scientists are more accurate than those for other scientists in the AI datasets. Our detailed performance analysis suggests that collaborations across scientists and institutions within the same country (domestic) are easier to anticipate than cross-country collaborations (international); collaboration patterns within the United States are easier to anticipate than those outside the United States, with collaborations from China being the most difficult to forecast in the NN domain. Analysts can narrow down important scientists from a specific country who may start collaborating on important topics. We also predict the research topics that scientists will tackle in the future. For example, the models predict highly interdisciplinary scientists who work on multiple research topics such as nuclear, radioactive, fission fragment, etc. in the NN domain. At the same time, models generalize predictions to AI scientists who focus on emerging research topics such as reinforcement learning, adversarial, and language model. This provides important clues for analysts to detect impactful scientists who collaborate with people on new topics [44], or other scientists who continue publishing on the same topic. This forecasting information would be useful to determine the direction in scientific discovery, especially for funding agencies to promote high-risk/high-reward projects testing unexplored hypotheses in national security domains [45]. For example, the models predict that language modeling was one of the popular AI topics in 2020 through the patterns of collaboration and expertise in the training data before 2019. Since 2020, language models have revolutionized AI research and advanced scientific breakthroughs in multiple domains such as chemistry, biology and security [13, 15]. ## Acknowledgments This work was supported by the NNSA Office of Defense Nuclear Nonproliferation Research and Development, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S.
Department of Energy under Contract DE-AC05-76RLO1830. This article has been cleared by PNNL for public release as PNNL-SA-181649. The authors thank Sridevi Wagle, Shivam Sharma, and Sannish Soni for their help with preparing the datasets.
2305.07031
Hawkes Process Based on Controlled Differential Equations
Hawkes processes are a popular framework to model the occurrence of sequential events, i.e., occurrence dynamics, in several fields such as social diffusion. In real-world scenarios, the inter-arrival time among events is irregular. However, existing neural network-based Hawkes process models not only i) fail to capture such complicated irregular dynamics, but also ii) resort to heuristics to calculate the log-likelihood of events since they are mostly based on neural networks designed for regular discrete inputs. To this end, we present the concept of Hawkes process based on controlled differential equations (HP-CDE), by adopting the neural controlled differential equation (neural CDE) technology which is an analogue to continuous RNNs. Since HP-CDE continuously reads data, i) irregular time-series datasets can be properly treated preserving their uneven temporal spaces, and ii) the log-likelihood can be exactly computed. Moreover, as both Hawkes processes and neural CDEs are first developed to model complicated human behavioral dynamics, neural CDE-based Hawkes processes are successful in modeling such occurrence dynamics. In our experiments with 4 real-world datasets, our method outperforms existing methods by non-trivial margins.
Minju Jo, Seungji Kook, Noseong Park
2023-05-09T07:52:56Z
http://arxiv.org/abs/2305.07031v2
# Hawkes Process Based on Controlled Differential Equations ###### Abstract Hawkes processes are a popular framework to model the occurrence of sequential events, i.e., occurrence dynamics, in several fields such as social diffusion. In real-world scenarios, the inter-arrival time among events is _irregular_. However, existing neural network-based Hawkes process models not only i) fail to capture such complicated irregular dynamics but also ii) resort to heuristics to calculate the log-likelihood of events since they are mostly based on neural networks designed for regular discrete inputs. To this end, we present the concept of Hawkes process based on controlled differential equations (HP-CDE), by adopting the neural controlled differential equation (neural CDE) technology which is an analogue to _continuous_ RNNs. Since HP-CDE continuously reads data, i) irregular time-series datasets can be properly treated preserving their uneven temporal spaces, and ii) the log-likelihood can be exactly computed. Moreover, as both Hawkes processes and neural CDEs are first developed to model complicated human behavioral dynamics, neural CDE-based Hawkes processes are successful in modeling such occurrence dynamics. In our experiments with 4 real-world datasets, our method outperforms existing methods by non-trivial margins. ## 1 Introduction Real-world phenomena typically correspond to the occurrence of sequential events with _irregular_ time intervals and _numerous_ event types, ranging from online social network activities to personalized healthcare and so on [1, 1, 2, 10]. Hawkes processes and Poisson point process are typically used to model those sequential events [1, 13, 14]. However, their basic assumptions are too stringent to model such complicated dynamics, e.g., all past events should influence the occurrence of the current event. To this end, many advanced techniques have been proposed for the past several years, ranging from classical recurrent neural network (RNN) based models such as RMTPP [15] and NHP [16] to recent transformer models like SAHP [13] and THP [15]. Even so, they still do not treat data in a fully continuous way but resort to heuristics, which is sub-optimal in processing irregular events [1, 16, 17]. Likewise, their heuristic approaches to model the continuous time domain impede solving the multivariate integral of the log-likelihood calculation in Eq. (4), leading to approximation methods such as the Monte Carlo sampling (cf. Table 1). As a consequence, the strict constraint and/or the inexact calculation of the log-likelihood may induce inaccurate predictions. In this work, therefore, we model the occurrence dynamics based on differential equations, not only directly handling the sequential events in a continuous time domain but also exactly solving the integral of the log-likelihood. One more inspiration of using differential equations is that they have shown several non-trivial successes in modeling human behavioral dynamics [1, 16, 15] -- in particular, we are interested in controlled differential equations. To our knowledge, therefore, we first answer the question of whether occurrence dynamics can be modeled as controlled differential equations. Controlled differential equations (CDEs [13]) are one of the most suitable ones for building human behavioral models. 
CDEs were first developed by a financial mathematician to model complicated dynamics in financial markets which is a typical application domain of Hawkes processes since financial transactions are temporal point processes. In particular, neural controlled differential equations \begin{table} \begin{tabular}{c|c|c} \hline \hline **Model** & **Exact log-likelihood** & **How to model dynamics** \\ \hline NHP, SAHP, & X & Discrete \\ THP & & O & Continuous \& \\ \cline{2-3} HP-CDE & (\(\lambda^{*}\) is continuous.) & robust to irregular dynamics \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of neural network-based Hawkes process models. \(\lambda^{*}\) denotes the conditional intensity function (cf. Eqs. (4), (6), and (7)). (neural CDEs [12]), whose initial value problem (IVP) is written as below, are a set of techniques to learn CDEs from data with neural networks: \[\begin{split}\mathbf{h}(t_{b})&=\mathbf{h}(t_{a})+ \int_{t_{a}}^{t_{b}}f(\mathbf{h}(t);\theta_{f})dZ(t)\\ &=\mathbf{h}(t_{a})+\int_{t_{a}}^{t_{b}}f(\mathbf{h}(t);\theta_{f })\frac{dZ(t)}{dt}dt,\end{split} \tag{1}\] where \(f\) is a CDE function, and \(\mathbf{h}(t)\) is a hidden vector at time \(t\). \(Z(t)\) is a continuous path created from discrete sequential observations (or events) \(\{(\mathbf{z}_{j},t_{j})\}_{j=a}^{b}\) by an appropriate algorithm1, where in our case, \(\mathbf{z}_{j}\) is a vector containing the information of \(j\)-th occurrence, and \(t_{j}\in[t_{a},t_{b}]\) contains the time-point of the occurrence, i.e., \(t_{j}<t_{j+1}\). Note that neural CDEs keep reading the time-derivative of \(Z(t)\) over time, denoted \(\dot{Z}(t):=\frac{dZ(t)}{dt}\), and for this reason, neural CDEs are in general, considered as _continuous_ RNNs. In addition, NCDEs are known to be superior in processing irregular time series Llyons _et al._ (2004). Footnote 1: One can use interpolation algorithms or neural networks for creating \(Z(t)\) from \(\{(\mathbf{z}_{j},t_{j})\}_{j=a}^{b}\)[12]. Given the neural CDE framework, we propose **H**awkes **P**rocess based on **C**ontrolled **D**ifferential **E**quations (HP-CDE). We let \(\mathbf{z}_{j}\) be the sum of the event embedding and the positional embedding and create a path \(Z(t)\) with the linear interpolation method which is a widely used interpolation algorithm for neural CDEs (cf. Figure 2). To get the exact log-likelihood, we use an ODE solver to calculate the non-event log-likelihood. Calculating the non-event log-likelihood involves the integral problem in Eq. (4), and our method can solve it exactly since conditional intensity function \(\lambda^{*}\), which indicates an instantaneous probability of an event, is defined in a continuous manner over time by the neural CDE technology. In addition, we have three prediction layers to predict the event log-likelihood, the event type, and the event occurrence time (cf. Eqs. (8), (12), (13) and Figure 3). We conduct event prediction experiments with 4 datasets and 4 baselines. Our method shows outstanding performance in all three aspects: i) event type prediction, ii) event time prediction, and iii) log-likelihood. Our contributions are as follows: 1. We model the _continuous_ occurrence dynamics under the framework of neural CDE whose original theory was developed for describing _irregular non-linear_ dynamics. Many real-world Hawkes process datasets have irregular inter-arrival times of events. 2. We then exactly solve the integral problem in Eq. 
(4) to calculate the non-event log-likelihood, which had been done typically through heuristic methods before our work. ## 2 Preliminaries ### Multivariate Point Processes Multivariate point processes are a generative model of an event sequence \(X=\{(k_{j},t_{j})\}_{j=1}^{N}\) and \(x_{j}=(k_{j},t_{j})\) indicates \(j\)-th event in the sequence. This event sequence is a subset of an event stream under a continuous time interval \([t_{1},t_{N}]\), and an observation \(x_{j}\) at time \(t_{j}\) has an event type \(k_{j}\in\{1,\cdots,K\}\), where \(K\) is total number of event types. The arrival time of events is defined as \(t_{1}<t_{2}<\cdots<t_{N}\). The point process model learns a probability for every \((k,t)\) pair, where \(k\in\{1,\cdots,K\},\ t\in[t_{1},t_{N}]\). The key feature of multivariate point processes is the intensity function \(\lambda_{k}(t)\), i.e., the probability that a type-\(k\) event occurs at the infinitesimal time interval \([t,t+dt)\). The Hawkes process, one popular point process model, assumes that the intensity \(\lambda_{k}(t)\) of type \(k\) can be calculated by past events before \(t\), so-called history \(\mathcal{H}_{t}\), and its form is as follows: \[\lambda_{k}^{*}(t):=\lambda_{k}(t|\mathcal{H}_{t})=\mu_{k}+\sum_{j:t_{j}<t} \psi_{k}(t-t_{j}), \tag{2}\] where \(\lambda^{*}(t)=\sum_{k=1}^{K}\lambda_{k}^{*}(t)\), \(\mu_{k}\) is the base intensity, and \(\psi_{k}(\cdot)\) is a pre-determined decaying function for type \(k\). We use \(*\) to represent conditioning on the history \(\mathcal{H}_{t}\). According to the formula, all the past events affect the probability of new event occurrence with different influences. However, the intensity converges to the base intensity if the decaying function becomes close to zero. Currently, a deep learning mechanism is applied to Hawkes processes by parameterizing the intensity function. For instance, RNNs are used in the neural Hawkes process (NHP) [13], and its intensity function is defined as follows: \[\lambda^{*}(t)=\sum_{k=1}^{K}\phi_{k}(\mathbf{w}_{k}^{\top}\mathbf{h}(t)),\quad t \in[t_{1},t_{N}], \tag{3}\] where \(\phi_{k}(\cdot)\) is the softplus function, \(\mathbf{h}(t)\) is a hidden state from RNNs, and \(\mathbf{w}_{k}\) is a weight for each event type. The softplus function keeps intensity values positive. However, one downside of NHP is that RNN-based models assume that events have regular intervals. Thus, one of the main issues in NHP is how to fit a model to a continuous irregular time domain. ### Neural Network-based Hawkes Processes Hawkes processes are a popular temporal predicting framework in various fields since it predicts both _when_, which _type_ of events would happen with mathematical approaches. It is especially widely used in sociology fields to capture the diffusion of information [13, 14, 15, 16], seismology fields to model when earthquakes and aftershocks occur, medical fields to track the status of patients [15, 16], and so on. For enhancing the performance of Hawkes processes, a lot of deep learning approaches have been applied. The two basic approaches are the recurrent marked temporal point process (RMTPP [13]) and the neural Hawkes process (NHP [13]). RMTPP is the first model that combines RNNs into point processes, and NHP is a Hawkes process model with an RNN-parameterized intensity function. Based on NHP, the self-attentive Hawkes process (SAHP [15]) attaches self-attention modules to reflect the relationships between events. 
Additionally, the transformer Hawkes process (THP [20]) uses the transformer technology [21], one of the most popular structures in natural language processing, to capture both short-term and long-term temporal dependencies of event sequences. One important issue of neural network-based Hawkes process is how to handle irregular time-series datasets. To deal with this issue, NHP uses continuous-time LSTMs, whose memory cell exponentially decays. SAHP and THP both employ modified positional encoding schemes to represent irregular time intervals since the conventional encoding assumes regular spaces between events. However, all mentioned approaches still do not explicitly process irregular time-series. In contrast to them, our HP-CDE is robust to irregular time-series since the original motivation of neural CDEs is better processing irregular time-series by constructing continuous RNNs. ### Neural Controlled Differential Equations as continuous RNNs Neural controlled differential equations (neural CDEs) are normally regarded as a continuous analogue to RNNs since they process the time-derivative of the continuous path \(Z(t)\). Especially, neural CDEs retain their continuous properties by using the interpolated path \(Z\) made of discrete data \(\{(\mathbf{z}_{j},t_{j})\}_{j=a}^{b}\) and solving the Riemann-Stieltjes integral to get \(\mathbf{h}(t_{b})\) from \(\mathbf{h}(t_{a})\) as shown in Eq. (1) -- in particular, this problem to derive \(\mathbf{h}(t_{b})\) from the initial condition \(\mathbf{h}(t_{a})\) is known as initial value problem (IVP) (cf. Figure 1). At first, to make the interpolated continuous path \(Z\), linear interpolation or natural cubic spline interpolation is generally used among several interpolation methods. Then, we use existing ODE solvers to solve the Riemann-Stieltjes integral problem with \(\hat{\mathbf{h}}(t):=\frac{d\mathbf{h}(t)}{dt}=f(\mathbf{h}(t);\theta_{f}) \frac{dZ(t)}{dt}\). ### Maximum Likelihood Estimation in Temporal Point Process Most of the neural temporal point process frameworks choose the maximum likelihood estimation (MLE) [10] as one of the main training objectives. In order to enable the MLE training, getting the log-probability of every sequence \(X\) is required, which consists of formulas using intensity functions conditioned on the history \(\mathcal{H}_{t}\)=\(\{(k_{j},t_{j}):t_{j}<t\}\). Thus, log-probability for any event sequence \(X\) whose events are observed in an interval \([t_{1},t_{N}]\) is as follows: \[\log p(X)=\sum_{j=1}^{N}\log\lambda^{*}(t_{j})-\int_{t_{1}}^{t_{N}}\lambda^{*} (t)dt, \tag{4}\] where \(\sum_{j=1}^{N}\log\lambda^{*}(t_{j})\) denotes the event log-likelihood and \(\int_{t_{1}}^{t_{N}}\lambda^{*}(t)dt\) means the non-event log-likelihood. Non-event log-likelihood represents sum of the infinite number of non-events' log-probabilities in \([t_{1},t_{N}]\), except the infinitesimal times when the event occurs. In the case of the event log-likelihood, it is comparably easy to compute as the formula is simply a sum of the intensity functions. However, it is challenging to compute the non-event log-likelihood, due to its integral computation. Due to the difficulty, NHP, SAHP, THP and many other models use approximation methods, such as Monte Carlo integration [14] and numerical integration methods [22], to get the value. However, since those methods do not exactly solve the integral problem, numerical errors are inevitable. 
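To make the two ingredients above concrete, the sketch below shows the classical exponential-kernel intensity of Eq. (2) (for a single event type, with illustrative parameters `mu`, `alpha`, and `delta` that are our own assumptions rather than values from any cited model) together with the Monte Carlo estimate of the non-event integral in Eq. (4) that the baselines above rely on; the sampling step is exactly where the approximation error enters.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, delta=1.0):
    # Eq. (2) with an exponential kernel psi(s) = alpha * exp(-delta * s), single event type
    past = history[history < t]
    return mu + np.sum(alpha * np.exp(-delta * (t - past)))

def mc_log_likelihood(events, t_start, t_end, n_samples=1000):
    # Eq. (4): the event term is exact, the non-event term is Monte Carlo approximated
    event_term = sum(np.log(hawkes_intensity(t, events)) for t in events)
    ts = np.random.uniform(t_start, t_end, n_samples)
    non_event_term = (t_end - t_start) * np.mean(
        [hawkes_intensity(t, events) for t in ts]
    )
    return event_term - non_event_term

events = np.array([0.5, 1.3, 1.7, 3.2])
print(mc_log_likelihood(events, t_start=0.0, t_end=4.0))
```

Increasing `n_samples` reduces, but never removes, the estimation error of the non-event term; HP-CDE instead integrates the continuous intensity with an ODE solver, as described next.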
## 3 Proposed Method In this section, we describe our _explicitly continuous_ Hawkes process model, called HP-CDE, based on the neural CDE framework which is considered as continuous RNNs. Owing to the continuous property of the proposed model, the exact log-likelihood, especially for the non-event log-likelihood part with its challenging integral calculation, can also be computed through ODE solvers. That is, our proposed model reads event sequences with irregular inter-arrival times in a continuous manner, and exactly computes the log-likelihood. ### Overall Workflow Figure 2 shows comprehensive designs of our proposed model, HP-CDE. The overall workflow is as follows: 1. Given the event sequence \(X=\{(k_{j},t_{j})\}_{j=1}^{N}\), i.e., event type \(k_{j}\) at time \(t_{j}\), the embeddings \(\{\mathbf{E_{e}}(k_{j}),\mathbf{E_{p}}(t_{j})\}_{j=1}^{N}\) Figure 1: Visualization of the continuous hidden state of the neural CDE model are made through the encoding processes, where \(\mathbf{E_{e}}(k_{j})\) is an embedding of \(k_{j}\) and \(\mathbf{E_{p}}(t_{j})\) is a positional embedding of \(t_{j}\). 2. Then we use \(\{\mathbf{E_{e}}(k_{j})\oplus\mathbf{E_{p}}(t_{j})\}_{j=1}^{N}\) as the discrete hidden representations \(\{\mathbf{z}_{j}\}_{j=1}^{N}\). In other words, \(\mathbf{z}_{j}=\mathbf{E_{e}}(k_{j})\oplus\mathbf{E_{p}}(t_{j})\), i.e., the element-wise summation of the two embeddings. 3. An interpolation algorithm is used to create the continuous path \(Z(t)\) from \(\{(\mathbf{z}_{j},t_{j})\}_{j=1}^{N}\) -- we augment the time information \(t_{j}\) to each \(\mathbf{z}_{j}\). 4. Using the continuous path \(Z(t)\), a neural CDE layer calculates the final continuous hidden representation \(\mathbf{h}(t)\) for all \(t\). At the same time, an ODE solver integrates the continuous intensity function \(\lambda^{*}(t)\) which is calculated from \(\mathbf{h}(t)\) (cf. Eq. (7)) to calculate the non-event log-likelihood. In addition, there are three prediction layers to predict the event type, time, and log-likelihood (cf. Figure 3). We provide more detailed descriptions for each step in the following subsections with the well-posedness of our model. ### Embedding We embed both the type and time of each event into separate vectors and then add them. To be more specific, we map each event type to an embedding vector \(\mathbf{E_{e}}(k)\), which is trainable. With trigonometric functions, we embed the time information to a vector \(\mathbf{E_{p}}(t)\), which is called positional encoding in transformer language models (cf. Appendix A). We use the sum of the two embeddings, \(\{\mathbf{E_{e}}(k_{j})\oplus\mathbf{E_{p}}(t_{j})\}_{j=1}^{N}\) as the discrete hidden representations \(\{\mathbf{z}_{j}\}_{j=1}^{N}\), i.e., \(\mathbf{z}_{j}=\mathbf{E_{e}}(k_{j})\oplus\mathbf{E_{p}}(t_{j})\). ### Occurrence Dynamics and Continuous Intensity Function With \(\{\mathbf{z}_{j}\}_{j=1}^{N}\), we calculate the _continuous_ hidden representation \(\mathbf{h}(t_{j})\) for any arbitrary \(j\), where \(t_{1}\leq t_{j}\), based on the neural CDE framework as follows: \[\mathbf{h}(t_{j})=\mathbf{h}(t_{1})+\int_{t_{1}}^{t_{j}}f(\mathbf{h}(t);\theta _{f})\frac{dZ(t)}{dt}dt, \tag{5}\] where \(Z(t)\) is a continuous path created by an interpolation algorithm from \(\{(\mathbf{z}_{j},t_{j})\}_{j=1}^{N}\). The well-posedness2 of neural CDEs is proved in [13, Theorem 1.3] under the Lipschitz continuity requirement (cf. Appendix B). 
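As an illustration of the path construction (Step 3 of the workflow) and of Eq. (5), the following minimal sketch builds a piecewise-linear control path \(Z(t)\) from the discrete representations \(\{(\mathbf{z}_{j},t_{j})\}\) and evolves the hidden state with an explicit Euler discretisation. The names `linear_path`, `evolve_hidden`, and `cde_func` are illustrative; in practice an off-the-shelf ODE solver replaces the Euler loop, and `cde_func` stands in for the learned CDE function \(f\), assumed here to return a (hidden \(\times\) input)-shaped matrix as in the standard neural CDE formulation.

```python
import torch

def linear_path(ts, zs):
    # Piecewise-linear Z(t) and its derivative dZ/dt from {(z_j, t_j)}
    def Z(t):
        j = torch.searchsorted(ts, torch.as_tensor(t)).clamp(1, len(ts) - 1)
        w = (t - ts[j - 1]) / (ts[j] - ts[j - 1])
        return (1 - w) * zs[j - 1] + w * zs[j]

    def dZ(t):
        j = torch.searchsorted(ts, torch.as_tensor(t)).clamp(1, len(ts) - 1)
        return (zs[j] - zs[j - 1]) / (ts[j] - ts[j - 1])

    return Z, dZ

def evolve_hidden(h0, cde_func, dZ, t_grid):
    # Eq. (5) with an explicit Euler step: h <- h + dt * f(h) dZ/dt
    h = h0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = h + (t1 - t0) * cde_func(h) @ dZ(t0)
    return h
```

Linear interpolation is the variant used in this paper; natural cubic splines are a drop-in alternative for constructing \(Z(t)\).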
Neural CDE layer is able to generate the continuous hidden representation \(\mathbf{h}(t_{j})\), where \(t_{1}\leq t_{j}\), even when the sequence \(\{(\mathbf{z}_{j},t_{j})\}_{j=1}^{N}\) is an irregular time-series, i.e., the inter-arrival time varies from one case to another. Footnote 2: The well-posedness of an initial value problem means that i) its unique solution, given an initial value, exists, and ii) its solutions continuously change as initial values change. This continuous property enables our model to exactly solve the integral problem of the non-event log-likelihood. That is, the non-event log-likelihood can be re-written as the following ODE form: \[\mathbf{a}(t_{N})=\int_{t_{1}}^{t_{N}}\lambda^{*}(t)dt, \tag{6}\] where the conditional intensity function of Eqs. (2) and (3) is, in our case, the sum of the conditional intensity functions of all event types as follows: \[\lambda^{*}(t)=\sum_{k=1}^{K}\lambda_{k}^{*}(t),\quad\lambda_{k}^{*}(t)=\phi_{k }(\mathbf{W}_{k}^{\text{intst}\top}\mathbf{h}(t_{j})), \tag{7}\] where \(\mathbf{W}_{k}^{\text{intst}}\) is a weight matrix of intensity about type \(k\), and therefore, \(\mathbf{W}_{k}^{\text{intst}\top}\mathbf{h}(t_{j})\) is a linear projected representation which has the history of events before time \(t_{j}\). \(\phi_{k}(x):=\beta_{k}\log(1+\exp(x/\beta_{k}))\) is the softplus function with a parameter \(\beta_{k}\) to be learned. The softplus function is used to restrict the intensity function to have only positive values. Therefore, the log-probability of HP-CDE for any event sequence \(X\) is redefined from Eq. (4) as: \[\log p(X)=\sum_{j=1}^{N}\log\lambda^{*}(t_{j})-\mathbf{a}(t_{N}). \tag{8}\] As a result, we can naturally define the following augmented ODE, where \(\mathbf{h}(t)\) and \(\mathbf{a}(t)\) are combined: \[\frac{d}{dt}\begin{bmatrix}\mathbf{h}(t)\\ \mathbf{a}(t)\end{bmatrix}=\begin{bmatrix}f(\mathbf{h}(t);\theta_{f})\frac{dZ(t )}{dt}\\ \lambda^{*}(t)\end{bmatrix} \tag{9}\] and \[\begin{bmatrix}\mathbf{h}(t_{1})\\ \mathbf{a}(t_{1})\end{bmatrix}=\begin{bmatrix}\pi(\mathbf{z}(t_{1});\theta_{ \tau})\\ 0\end{bmatrix}, \tag{10}\] where \(\pi\) is a fully connected layer. The neural network \(f\) is defined as follows: \[f(\mathbf{h}(t))=\text{Tanh}(\pi_{M}(\text{ELU}(\cdots(\text{ELU}(\pi_{1}( \mathbf{h}(t))))))), \tag{11}\] which consists of fully connected layers with the ELU or the hyperbolic tangent activation. The number of layers \(M\) is a hyperparameter. In Zuo et al. [20], the generated hidden representations from the self-attention module of their transformer have discrete time stamps, and therefore, its associated intensity function definition is inevitably discrete. For that reason, they rely on a heuristic method, e.g., Monte Carlo method, to calculate the non-event log-likelihood. In our case, however, the physical time is modeled in a continuous manner and therefore, the exact non-event log-likelihood can be calculated as in Eq. (6). ### Prediction Layer Our model has three prediction layers as in other Hawkes process models: i) next event type, ii) next event time, and iii) the event log-likelihood (cf. Figure 3). We use Eq. (7) to calculate the event log-likelihood. For the event type and time predictions, we predict \(\{\hat{t}_{j}\}_{j=2}^{N+1}\) and \(\{\hat{k}_{j}\}_{j=2}^{N+1}\) after reading \(X=\{(k_{j},t_{j})\}_{j=1}^{N}\). 
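Before detailing each head, the shared occurrence dynamics of Eqs. (6), (7), and (9) can be summarised by the following sketch, where an explicit Euler step again stands in for the ODE solver and `cde_func`, `W_intst`, and `beta` are illustrative placeholders for the learned parameters.

```python
import torch

def intensity(h, W_intst, beta):
    # Eq. (7): lambda*_k(t) = beta_k * log(1 + exp(w_k^T h(t) / beta_k)), summed over the K types
    lam_k = beta * torch.log1p(torch.exp((W_intst @ h) / beta))  # shape (K,)
    return lam_k.sum(), lam_k

def augmented_step(h, a, t0, t1, cde_func, dZ, W_intst, beta):
    # One Euler step of the augmented ODE in Eq. (9):
    #   dh/dt = f(h) dZ/dt   (hidden state),   da/dt = lambda*(t)   (Eq. (6))
    # Initial condition, Eq. (10): h(t_1) = pi(z_1), a(t_1) = 0.
    lam, _ = intensity(h, W_intst, beta)
    dt = t1 - t0
    h_new = h + dt * cde_func(h) @ dZ(t0)
    a_new = a + dt * lam
    return h_new, a_new

# After integrating from t_1 to t_N, Eq. (8) gives
#   log p(X) = sum_j log lambda*(t_j) - a(t_N)
```
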
For the event type prediction layer, we use the following method: \[\hat{\mathbf{p}}_{j+1} =\text{Softmax}(\mathbf{W}^{\text{type}}\mathbf{h}(t_{j})), \tag{12}\] \[\hat{k}_{j+1} =\arg\max_{k}\hat{\mathbf{p}}_{j+1}(k),\] where \(\mathbf{W}^{\text{type}}\) is a trainable parameter and \(\hat{\mathbf{p}}_{j+1}(k)\) is the probability of type \(k\) at time \(t_{j+1}\). For the event time prediction layer, we use the following definition: \[\hat{t}_{j+1}=\mathbf{W}^{\text{time}}\mathbf{h}(t_{j}), \tag{13}\] where \(\mathbf{W}^{\text{time}}\) is a trainable parameter. ### Training Algorithm Our loss definition consists of three parts. The first part is the following MLE loss, i.e. maximizing the log-likelihood (cf. Eq. (8)): \[\max\sum_{i=1}^{S}\log p(X_{i}), \tag{14}\] where \(S\) is the number of training samples. While training, the log-intensity of each observed event increases and the non-event log-likelihood decreases in the whole interval \([t_{1},t_{N}]\). The second loss is the event type loss function which is basically a cross-entropy term as follows: \[\mathcal{L}_{\text{type}}(X)=\sum_{j=2}^{N+1}-\mathbf{k}_{j}^{\top}\log(\hat{ \mathbf{p}}_{j}), \tag{15}\] where \(\mathbf{k}_{j}\) is a one-hot vector for the event type \(k_{j}\). In the case of the event time loss, we use the inter-arrival time \(\tau_{i}=t_{i}-t_{i-1}\) to compute the loss as follows: \[\mathcal{L}_{\text{time}}(X)=\sum_{j=2}^{N+1}(\tau_{j}-\hat{\tau}_{j})^{2}. \tag{16}\] Therefore, the overall objective function of HP-CDE can be written as follows: \[\min\sum_{i=1}^{S}-\alpha_{1}\log p(X_{i})+\mathcal{L}_{\text{ type}}(X_{i})+\alpha_{2}\mathcal{L}_{\text{time}}(X_{i}), \tag{17}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are hyperparameters. In Alg. (1), we show the training algorithm. We first initialize all the parameters. From our training data, we randomly build a mini-batch \(\{X_{i}\}_{i=1}^{S}\) in Line 4 -- the optimal mini-batch size varies from one dataset to another. After feeding the constructed mini-batch into our model, we calculate the discrete and continuous hidden representations in Lines 6 and 7. With the loss in Eq. (17), we train our model. We repeat the steps \(max\_iter\) times. ``` 1:Initialize all the parameters of the embedding and the neural CDE layer 2:\(iter\gets 0\) 3:while\(iter<max\_iter\)do 4: Sample a mini-batch \(\{X_{i}\}_{i=1}^{S}\in\mathcal{D}_{train}\) 5: Calculate the embedding vectors, i.e, \(\mathbf{E}_{\mathbf{e}}(k_{j})\), and \(\mathbf{E}_{\mathbf{p}}(t_{j})\) 6: Calculate the discrete hidden representation \(\mathbf{z}_{j},\forall j\) 7: Calculate the continuous hidden representation \(\mathbf{h}(t)\) using neural CDE and compute the non-event log-likelihood using ODE solver with Eq. (6) over time 8: Update the parameters with Eq. (17) 9:if the loss does not decrease for \(\delta\) iterations then 10: exit 11:endif 12:endwhile 13:return the trained parameters ``` **Algorithm 1** How to train HP-CDE ## 4 Experiments ### Experimental Environments #### Experimental Settings In this section, we compare the model performance of HP-CDE with 4 state-of-the-art baselines on 4 datasets. Each dataset is split into the training set and the testing set. The training set is used to tune the hyperparameters and the testing set is used to measure the model performance. We evaluate the models with three metrics: i) log-likelihood (LL) of \(X=\{(k_{j},t_{j})\}_{j=1}^{N}\), ii) accuracy (ACC) on the event type prediction, and iii) root mean square error (RMSE) on the event time prediction. 
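For reference, the per-sequence objective of Eq. (17), which is optimised in line 8 of Algorithm 1, can be written compactly as in the sketch below. Tensor shapes and the weights `W_type` and `W_time` are illustrative placeholders; Eq. (13) is stated for the absolute time \(\hat{t}_{j+1}\), while the regression target here is the inter-arrival time used by the loss in Eq. (16).

```python
import torch
import torch.nn.functional as F

def sequence_loss(h, log_lam_events, a_tN, types, taus,
                  W_type, W_time, alpha1=1.0, alpha2=1.0):
    # h:              (N, hidden) hidden states h(t_j) from the neural CDE layer
    # log_lam_events: (N,) values of log lambda*(t_j) at the observed events
    # a_tN:           scalar non-event term a(t_N) from Eq. (6)
    # types, taus:    ground-truth event types (N,) and inter-arrival times (N,)
    log_p = log_lam_events.sum() - a_tN                          # Eq. (8)
    logits = h @ W_type.T                                        # Eq. (12)
    type_loss = F.cross_entropy(logits, types, reduction="sum")  # Eq. (15)
    tau_hat = (h @ W_time.T).squeeze(-1)                         # Eq. (13), on inter-arrival times
    time_loss = ((taus - tau_hat) ** 2).sum()                    # Eq. (16)
    return -alpha1 * log_p + type_loss + alpha2 * time_loss      # Eq. (17), minimised
```
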
We train each model 100 epochs and report the mean and standard deviation of the evaluation metrics of five trials with different random seeds. We compare our model with various baselines (cf. Section 2.2): Recurrent Marked Temporal Point Process (RMTPP)3, Neural Hawkes Process (NHP)4, Self-Attentive Hawkes Process (SAHP)5, and Transformer Hawkes Process (THP)6. More details including hyperparameter configurations are in Appendix C. Figure 3: Prediction layer of HP-CDE **Datasets** To show the efficacy and applicability of our model, we evaluate using various real-world data. MemeTracker [10], Retweet [22], and StackOverFlow [10], are collected from Stackoverflow, web articles, and Twitter, respectively. We also use a medical dataset, called MIMIC [1]. We deliberately choose the datasets with various average sequence lengths and event type numbers \(K\) to show the general efficacy of our model. The average sequence length ranges from 3 to 109, and the number of event types \(K\) ranges from 3 to 5000 (cf. Table 3). That is, we cover not only from simple to complicated ones, but also from short-term to long-term sequences. Details of datasets are in Appendix C.3 ### Experimental Results We show the experimental results of each model on MIMIC, MemeTracker, Retweet, and StackOverFlow in Table 2. We analyze the results in three aspects: i) the event prediction, ii) the log-likelihood, and iii) the model complexity. Ablation and sensitivity analyses are in Appendix D and E. **Event Prediction** HP-CDE outperforms other baselines with regards to both the event type and the event time prediction in most cases as reported in Table 2. To be specific, in terms of accuracy, HP-CDE shows the best performance in every dataset. These results imply that processing data in a continuous manner is important when it is in a continuous time domain. Even though HP-CDE only shows the lowest RMSE on datasets with short sequence length, MIMIC and MemeTracker, we provide the solution to lower RMSE of HP-CDE when using datasets with long sequence length in Section 4.3. For the imbalanced datasets of MIMIC and MemeTracker, where only 20% of types occupy 90% and 70% of events each, we do the following additional analyses. Notably, HP-CDE attains an accuracy of 0.151 in MemeTracker, which is up to 243% higher than those of baselines, and an RMSE of 0.726 in MIMIC, about 15% lower. Furthermore, we use the macro F1 score to measure the quality of type predictions. As shown in Table 4, our model shows the best F1 score in both of the imbalanced datasets. Especially for MemeTracker, models with attention modules have relatively low F1 scores, indicating that when there exist too many classes and if they are imbalanced, attentions are overfitted to several frequently occurring classes. This phenomenon is also observed in Figure 4. In Figure 4, HP-CDE shows the most diverse predictions in terms of the number of predicted classes. 
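For reference, the macro F1 score used here averages per-class F1 values with equal weight, so rare event types count as much as frequent ones; a minimal sketch with illustrative labels:

```python
from sklearn.metrics import f1_score

# Illustrative ground-truth and predicted event types, flattened over the test sequences
y_true = [0, 2, 1, 2, 0, 1]
y_pred = [0, 2, 2, 2, 0, 0]
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
```
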
\begin{table} \begin{tabular}{c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Dataset} \\ \cline{2-3} & MIMIC & MemeTracker \\ \hline RMTPP & 0.385\(\pm\)0.037 & 0.000\(\pm\)0.000 \\ NHP & 0.126\(\pm\)0.018 & 0.011\(\pm\)0.002 \\ SAHP & 0.108\(\pm\)0.112 & 0.000\(\pm\)0.000 \\ THP & 0.162\(\pm\)0.016 & 0.000\(\pm\)0.000 \\ \hline HP-CDE & **0.452\(\pm\)0.035** & **0.069\(\pm\)0.004** \\ \hline \hline \end{tabular} \end{table} Table 4: F1 score (\(\uparrow\)) for imbalanced datasets \begin{table} \begin{tabular}{c|c|c c c|c c} \hline \hline Dataset & Model & \multicolumn{2}{c}{LL \(\uparrow\)} & ACC \(\uparrow\) & RMSE \(\downarrow\) & \begin{tabular}{c} Memory \\ usage(MB) \\ \end{tabular} & \begin{tabular}{c} Training \\ time(m) \\ \end{tabular} \\ \hline \multirow{4}{*}{MIMIC} & RMTPP & -1.222\(\pm\)0.080 & 0.823\(\pm\)0.014 & 1.035\(\pm\)0.023 & 3 & 0.004 \\ & NHP & -0.647\(\pm\)0.051 & 0.534\(\pm\)0.015 & 0.976\(\pm\)0.020 & 13 & 0.045 \\ & SAHP & -0.859\(\pm\)0.328 & 0.555\(\pm\)0.171 & 1.138\(\pm\)0.059 & 34 & 0.037 \\ & THP & -0.233\(\pm\)0.012 & 0.741\(\pm\)0.021 & 0.856\(\pm\)0.040 & 9 & 0.012 \\ \cline{2-6} & HP-CDE & **2.573\(\pm\)0.201** & **0.847\(\pm\)0.007** & **0.726\(\pm\)0.042** & 58 & 0.058 \\ \hline \multirow{4}{*}{MemeTracker} & RMTPP & NaN & 0.006\(\pm\)0.000 & NaN & 1,708 & 0.425 \\ & NHP & -9.395\(\pm\)2.814 & 0.044\(\pm\)0.003 & 441.293\(\pm\)0.233 & 5,096 & 12.263 \\ & SAHP & 2.160\(\pm\)0.324 & 0.009\(\pm\)0.000 & 521.672\(\pm\)4.071 & 32,894 & 6.642 \\ & THP & -5.717\(\pm\)0.649 & 0.015\(\pm\)0.000 & 446.477\(\pm\)2.665 & 891 & 2.610 \\ \cline{2-6} & HP-CDE & **3.846\(\pm\)0.626** & **0.151\(\pm\)0.005** & **441.223\(\pm\)3.480** & 3,669 & 3.817 \\ \hline \multirow{4}{*}{Retweet} & RMTPP & NaN & 0.490\(\pm\)0.000 & NaN & 210 & 0.044 \\ & NHP & -9.082\(\pm\)0.125 & 0.547\(\pm\)0.010 & 16,630.956\(\pm\)0.217 & 750 & 17.820 \\ & SAHP & 1.904\(\pm\)0.566 & 0.505\(\pm\)0.067 & 16,648.339\(\pm\)1.436 & 13,276 & 0.197 \\ & THP & -7.347\(\pm\)0.268 & 0.499\(\pm\)0.013 & **15,050.470\(\pm\)26.712** & 1,582 & 0.142 \\ \cline{2-6} & HP-CDE & **6.844\(\pm\)0.539** & **0.552\(\pm\)0.009** & 15,849.218\(\pm\)26.9068 & 197 & 6.236 \\ \hline \multirow{4}{*}{StackOverFlow} & RMTPP & -1.894\(\pm\)0.002 & 0.429\(\pm\)0.000 & 1.321\(\pm\)0.002 & 27 & 0.040 \\ & NHP & -7.726\(\pm\)0.581 & 0.434\(\pm\)0.015 & 1.027\(\pm\)0.027 & 449 & 3.556 \\ \cline{1-1} \cline{2-6} & SAHP & -0.431\(\pm\)0.225 & 0.244\(\pm\)0.002 & 4.525\(\pm\)1.098 & 11,080 & 0.147 \\ \cline{1-1} & THP & -0.554\(\pm\)0.001 & 0.449\(\pm\)0.001 & **0.973\(\pm\)0.001** & 4,585 & 0.169 \\ \cline{1-1} \cline{2-6} & HP-CDE & **7.348\(\pm\)0.466** & **0.452\(\pm\)0.001** & 0.996\(\pm\)0.017 & 44 & 6.878 \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results.\(\uparrow\) (resp. \(\downarrow\)) denotes that the higher (resp. lower) the better, and we use boldface to denote the best score. 
\begin{table} \begin{tabular}{c|c|c c c|c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\(K\)} & \multicolumn{2}{c|}{Sequence length} & \multirow{2}{*}{\# Events} \\ \cline{2-4} \cline{6-6} & & Min & Average & Max & \\ \hline MIMIC & 75 & 2 & 4 & 26 & 1,930 \\ MemeTracker & 5000 & 1 & 3 & 31 & 123,639 \\ \cline{2-6} & 3 & 50 & 109 & 264 & 2,173,533 \\ \cline{2-6} & StackOverFlow & 22 & 41 & 72 & 720 & 345,116 \\ \hline \hline \end{tabular} \end{table} Table 3: Characteristics of datasets used in experiments Particularly, in Figure 4 (b), HP-CDE successfully predicts for 1,164 classes among 2,604 classes, which is almost 50% of the classes in test data, whereas NHP, SAHP, and THP predict only for 217, 4, and 7 classes, respectively. Regardless of the characteristics of datasets, e.g., the number of types, the degree of imbalance, and so on, our model shows outstanding prediction results, which prove the importance of continuous processing and computing the exact log-likelihood leading to more accurate learning of dynamics. **Log-likelihood Calculation** As shown in Table 2, our models always show the best log-likelihood, outperforming others by large margins, on every dataset. One remarkable point is that our log-likelihood is always positive, while baselines show negative values in many cases. That is, in HP-CDE, the event log-likelihood exceeds the non-event log-likelihood at all times. Figure 5 shows the training curves of models fitted on Retweet and MemeTracker in a log-scale. First of all, HP-CDE show the best log-likelihood at every training epoch. Overall, except THP, the log-likelihood of MemeTracker tends to be more unstable than that of Retweet, since MemeTracker has about 1,700 times more event types than Retweet. **Memory Usage** Table 2 also recaps the model complexity. Exactly calculating the non-event log-likelihood using ODE solvers incurs additional memory usage, so that the model uses bigger memory than those of other sampling methods such as Monte Carlo sampling. Especially when the number of event types \(K\) is large, i.e., MIMIC and MemeTracker, the complexity of HP-CDEs increases as we exactly compute the non-event log-likelihood for every event type. However, when \(K\) is relatively small, owing to the adjoint sensitivity method [2, 1], HP-CDE's memory footprint notably decreases. For example, when using Retweet with \(K=3\), the space complexity of HP-CDE is almost 1% of that of THP. ### Additional Study on the Long Sequence Length While HP-CDE shows a good performance on the datasets with relatively short sequence lengths, i.e., MIMIC and MemeTracker, its RMSE results on others with longer sequence lengths, i.e., Retweet and StackOverFlow, are slightly larger than those of THP's. Therefore, to effectively deal with long sequence datasets, we put the self-attention part of transformer [20] right before the neural CDE layer and name the model HP-CDE-AT. Experimental results of HP-CDE-AT in comparison with HP-CDE and THP, which shows the highest score among baselines, are summarized in Figure 6. According to Figure 6 (a), HP-CDE-AT achieves the smallest RMSE, improving the performance of the original HP-CDE model. Remarkably, in Figure 6 (b), HP-CDE-AT even shows the best performance on StackOverFlow in both metrics, accuracy and RMSE. In conclusion, since HP-CDE-AT attains overall best results on longer datasets, HP-CDE-AT is one good option for long sequence datasets (cf. Appendix F). 
## 5 Conclusions Temporal point processes are frequently used in real-world applications to model occurrence dynamics in various fields. In particular, deep learning-based Hawkes process models have been extensively studied. However, we identified the two possible enhancements from the literature and presented HP-CDE to overcome the limitations. First, we use neural CDEs to model occurrence dynamics since one of their main application areas is to model uncertainties in human behaviors. Second, we exactly calculate the non-event log-likelihood which is one important part of the training objective. Existing work uses heuristic methods for it, which makes the training process unstable sometimes. In our experiments, consequently, our presented method significantly outperforms them and shows the most diverse predictions, i.e., the least overfitting. Figure 4: The number of classes in test data vs. the number of classes in correct event type predictions, i.e., hits. HP-CDE provides not only accurate but also diverse predictions. Figure 5: Training curves on Retweet and MemeTracker. HP-CDE shows the highest log-likelihood with the fastest convergence speed. Figure 6: Additional study on long-sequence datasets, comparing accuracy and RMSE of HP-CDE-AT to HP-CDE and THP. ## Acknowledgements Noseong Park is the corresponding author. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program at Yonsei University, 10%), and (2022-0-01032, Development of Collective Collaboration Intelligence Framework for Internet of Autonomous Things, 45%) and (No.2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework, 45%). ## Ethical Statement MIMIC contains much personal health information. However, it was released after removing observations, such as diagnostic reports and physician notes, using a rigorously evaluated deidentification system to protect the privacy of the patients who have contributed their information. Therefore, our work does not have any related ethical concerns.
2305.08302
t-RAIN: Robust generalization under weather-aliasing label shift attacks
In the classical supervised learning settings, classifiers are fit with the assumption of balanced label distributions and produce remarkable results on the same. In the real world, however, these assumptions often bend and in turn adversely impact model performance. Identifying bad learners in skewed target distributions is even more challenging. Thus achieving model robustness under these "label shift" settings is an important task in autonomous perception. In this paper, we analyze the impact of label shift on the task of multi-weather classification for autonomous vehicles. We use this information as a prior to better assess pedestrian detection in adverse weather. We model the classification performance as an indicator of robustness under 4 label shift scenarios and study the behavior of multiple classes of models. We propose t-RAIN a similarity mapping technique for synthetic data augmentation using large scale generative models and evaluate the performance on DAWN dataset. This mapping boosts model test accuracy by 2.1, 4.4, 1.9, 2.7 % in no-shift, fog, snow, dust shifts respectively. We present state-of-the-art pedestrian detection results on real and synthetic weather domains with best performing 82.69 AP (snow) and 62.31 AP (fog) respectively.
Aboli Marathe, Sanjana Prabhu
2023-05-15T02:05:56Z
http://arxiv.org/abs/2305.08302v1
# t-RAIN: Robust generalization under weather-aliasing label shift attacks ###### Abstract In the classical supervised learning settings, classifiers are fit with the assumption of balanced label distributions and produce remarkable results on the same. In the real world, however, these assumptions often bend and in turn adversely impact model performance. Identifying bad learners in skewed target distributions is even more challenging. Thus achieving model robustness under these "label shift" settings is an important task in autonomous perception. In this paper, we analyze the impact of label shift on the task of multi-weather classification for autonomous vehicles. We use this information as a prior to better assess pedestrian detection in adverse weather. We model the classification performance as an indicator of robustness under 4 label shift scenarios and study the behavior of multiple classes of models. We propose t-RAIN a similarity mapping technique for synthetic data augmentation using large scale generative models and evaluate the performance on DAWN dataset. This mapping boosts model test accuracy by 2.1, 4.4, 1.9, 2.7 % in no-shift, fog, snow, dust shifts respectively. We present state-of-the-art pedestrian detection results on real and synthetic weather domains with best performing 82.69 AP (snow) and 62.31 AP (fog) respectively. ## 1 Introduction Autonomous perception is notoriously vulnerable to out-of-distribution settings like adverse weather and imagery corruptions. As data from sensors is both limited and often corrupted by natural phenomena, for practical purposes, in-built model robustness is essential for efficient computation. Given the dynamic surroundings and terrains present in everyday driving scenes, building robustness to out-of-distributions settings is an essential feature for vehicular safety and trust. However, modern classifiers are mostly trained on good-weather data due to the abundance and ease of classification, making them vulnerable to adversarial weather attacks like sand, dust, mist, snow, droplets, fog and rain. In this work, we treat multi-weather robustness as a supervised learning problem in the standard settings and optimize for best performance. Then we perturb the target distribution to simulate label shift and test this robustness. The main goal is pedestrian detection under adversarial weather conditions and study of the underlying performance shifts. Figure 1: **Sim2Real Detection:** DAWN-WEDGE (Real-Synthetic) Data Samples Depicting Adversarial Weather Conditions Including Dust (Tormado, Sandstorms), Fog (Mist, Haze, Fog), Rain and Snow in Autonomous Driving Scenes. Our main contributions include: 1. **Benchmark.** Multi-weather classification benchmark on DAWN dataset. Analysis of model behaviour under limited settings. 2. **Label Shift.** Simulation of label shift settings for multi-weather classification. Proposal of t-RAIN algorithm for synthetic data augmentation using VLM prompting. 3. **Pedestrian Detection.** We conduct experiments to link the multi-weather classification behaviour by considering the task of pedestrian detection in synthetic and real settings. ## 2 Background ### Label Shift Tackling image corruptions for improved perception has been a long-standing challenge in the field of computer vision [1, 2, 26]. As newer datasets have introduced weather-based corruptions [3, 17, 41] for improving robustness, the awareness of this subject is on the rise. 
Recently, multi-weather robustness has been the focus of several works which proposed ideas like stacking [27], ensembles [39] and image restoration [25], the performance of classic benchmark models still fail on extreme weather conditions. In the history of label shift methods, several works study correcting label shift and generalization in general, especially in unsupervised settings [10, 11, 12]. The progress in this field is rapid due to the parallel development of autonomous vehicles and need for explainability for trust-worthy AI systems for the future. ### Adversarial Weather Robustness The DAWN dataset [17] and WEDGE dataset [23] present interesting adversarial weather conditions including fog, rain, snow, dust as visible in Figure 1. The most recent benchmark on these datasets [23, 25] presents state-of-the-art results and demonstrates effectiveness of using synthetic data augmentation in the task of overall object detection. ### Sim2Real Gap The recent development in large vision-language models [28, 29] and refined generative techniques, have led to creation of more realistic synthetic images. The natural advancement would be adoption of synthetic images to augment limited real-world datasets. However, this adoption is limited by the cost, realism, availability and usability of synthetic images. By incorporating synthetic images [23] in this work, we demonstrate one positive use case of such images. ### Pedestrian Detection Finding people in imagery has been a long-standing challenge in computer vision, with several decades of prior work [6, 37, 44] setting up the foundation for examination \begin{table} \begin{tabular}{l|c c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**Real Data (DAWN Dataset)**} & \multicolumn{3}{c}{**Synthetic Data (WEDGE Dataset)**} \\ \cline{2-13} & **car** & **person** & **bus** & **truck** & **T-4 AP** & **mc** & **bicycle** & **mAP** & **car** & **person** & **bus** & **truck** & **van** & **mAP** \\ \hline \hline **Prior Art** & & & & & & & & & & & & & \\ \hline Multi-weather city [27] & - & - & - & - & 21.20 (39,19) & - & - & (39,19) & - & - & - & - & - & - \\ Rolli. [31] & - & - & - & - & - & - & - & 28.80 & - & - & - & - & - & - \\ Transfer Learning [25] & 7.00 & 8.00 & 7.00 & - & 5.50 & - & 0.00 & - & - & - & - & - & - & - \\ Data Augmentation [25] & 6.00 & 4.00 & 3.00 & 0.00 & 26.25 & - & **92.00** & - & - & - & - & - & - & - \\ Weather. 
& 48.00 & 0.00 & 0.00 & 0.00 & 12.00 & - & - & - & - & - & - & - & - & - \\ Night GAN [22] & 52.56 & 52.34 & 21.73 & 13.71 & 35.08 & 35.51 & 23.29 & 32.75 & - & - & - & - & - & - \\ \hline \hline **Evaluation on DAWN-All** & & & & & & & & & & & & & & \\ \hline **Trained on Good Weather Data (COCO [19])** & & & & & & & & & & & & & & \\ \hline FasterRCNN & & & & & & & & & & & & & & & \\ MobileNet & 37.56 & 34.93 & 20.90 & 12.91 & 26.57 & 23.15 & 18.95 & 24.73 & 34.10 & 36.26 & 39.35 & 16.05 & 0.00 & 25.15 \\ Large 320 [15, 30] & & & & & & & & & & & & & \\ FasterRCNN & & & & & & & & & & & & & & \\ MobileNet & 60.64 & 55.96 & 32.78 & 23.66 & 43.26 & 38.55 & 28.75 & 40.05 & 35.34 & 39.52 & 35.83 & 25.43 & 0.00 & 27.22 \\ Large [15, 30] & & & & & & & & & & & & & & \\ FasterRCNN ResNet 50 [30] & **69.13** & **70.31** & **38.64** & 30.54 & **52.15** & **52.17** & **30.56** & **48.55** & 31.41 & 33.54 & 30.19 & 18.75 & 0.00 & 22.78 \\ \hline **Fine-Tuning on WEDGE** & & & & & & & & & & & & & \\ \hline FasterRCNN & & & & & & & & & & & & & & & \\ MobileNet & **39.52** & 23.97 & 7.81 & **22.08** & 23.34 & 0.00 & 0.00 & 15.56 & 40.40 & 43.01 & 49.88 & 31.41 & 10.19 & 34.98 \\ Large 320 [15, 30] & & & & & & & & & & & & & \\ FasterRCNN & & & & & & & & & & & & & & \\ MobileNet & 59.81 & 34.61 & 14.06 & **30.67** & 34.78 & 0.00 & 0.00 & 23.19 & 52.52 & **54.79** & **51.23** & 50.01 & 7.95 & 43.30 \\ Large [15, 30] & & & & & & & & & & & & & \\ FasterRCNN ResNet 50 [30] & 68.09 & 54.29 & 27.48 & **35.02** & 46.22 & 0.00 & 0.00 & 30.81 & **57.48** & 54.71 & 46.92 & **57.43** & **10.49** & **45.41** \\ \hline \hline \end{tabular} \end{table} Table 1: **Object Detection Benchmarks on DAWN and WEDGE datasets.** The latest work [23] in this domain presents state-of-the-art benchmark (models trained on Good-weather data and fine-tuned on WEDGE) on DAWN and WEDGE datasets. The pedestrian detection benchmark is **70.31 AP** on DAWN and **54.79 AP** on WEDGE. (54.29 and 54.71 is best WEDGE fine-tuned model benchmark) of finer problems in modern computer vision. The creation of high-quality datasets [6, 9, 13, 42] was a contributing factor to rapid development of powerful algorithms capable of detection in challenging conditions. Classical algorithms combined with novel architectures for robust detection were the focus of many works in vision encompassing multi-scale detection, occlusion invariance and cascaded rejection classifiers [4, 21, 24, 32, 38, 40, 43].More recently, tracking and detection of pedestrians in the real-world has been solved using a variety of deep networks, algorithmic strategies and forecasting approaches [7, 8, 18, 33]. ## 3 Methodology ### Datasets We employ the DAWN Dataset [17] and the WEDGE dataset [23] to test the efficacy of our strategy. The DAWN dataset is a 1000 image object detection dataset that includes traffic imagery in bad weather like rain,fog, dust and snow. WEDGE is a synthetic dataset that employs the DALL-E 2 model [28, 29] with prompts encompassing 16 weather and season conditions with a focus on autonomous vehicle scenarios. It features images captured during severe weather, such as rain, snow, fog, and sandstorms (Refer Figure 1). ### Proposed t-RAIN Algorithm In limited data settings, generalization capabilities are naturally limited by the number of examples seen by the classifier. However, with the development of large-scale vision-language models (stable diffusion, DALL-E), access to unlimited synthetic datasets has become much easier. 
We propose an algorithm that can leverage classical classifiers with synthetically generated data to provide generalization capabilities even in limited data settings (\(<160\) images). \[cos(x,y)=\frac{x\cdot y}{||x||\cdot||y||} \tag{1}\] The algorithm works by mapping the similarity between the source distribution classes and the target synthetic classes to sample relevant images and up-sample each class. The technique currently performs uniform weighting but can be extended to weighted sampling in future work. Once a source class and a target class are compared (this can also be done between a class and prompt keywords), we filter the images with the greatest similarity (currently the cosine similarity of Eq. (1)). We return this set through the oracle until all classes are mapped and sufficient samples (determined by the hyper-parameter \(\beta\)) from the target dataset (of size \(\eta\)) have been generated. The filtered target samples from the class with the closest similarity to the source sample \(X_{i}\) form the augmentation set.
```
Require: randomly synthesized unlabelled dataset Q_t; labelled real training dataset Q with C classes
for i = 1 to train-set size (or B iterations) do
    Sample a data point X_i with class C_i at random
    phi_i <- Oracle(X_i, C_i)
    Q_{C_i} <- phi_i        # augment class C_i of Q with the mapped synthetic samples
end for
return Q
```
_Sub-Program: Oracle for Mapping Sim2Real Samples_
```
Require: sample X_i with class C_i; labelled synthetic dataset S with C classes (extracted from prompts)
Ensure: C_i in C
for j = 1 to eta do
    Sample a synthetic data point Q_j at random
    Sc_j <- Cosine_Similarity(Class(Q_j), C_i)
    psi_j <- Q_j
end for
Sort psi by Sc_j
phi_i <- psi[eta - beta : eta]    # keep the beta = 210 most similar samples here
return phi_i
```
**Algorithm 1** t-RAIN Algorithm ## 4 Experiments The experimentation procedure was carried out in the following steps: 1. Training of a set of classifiers on the DAWN dataset, targeting optimal classification accuracy. 2. Performance evaluation on smaller training sets (80-50-20 splits) and robustness evaluation.
Xception model which had fit very well on snow class, performed better when the snow class was positively biased in the target distribution. Similarly it per \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{**Split**} & \multirow{2}{*}{**Model**} & **Train Acc.** & **Test Acc.** & \multirow{2}{*}{**F1-Score**} & \multirow{2}{*}{**Recall**} & **F1-Score** & **F1-Score** & **F1-Score** & **F1-Score** \\ & & & **Acc.** & & & & **Rain** & **Snow** & **Dust** & **Fog** \\ \hline \multirow{8}{*}{80} & Xception & 99.69 & 71.88 & 0.73 & 0.72 & 0.72 & 0.62 & **0.93** & 0.77 & 0.56 \\ & VGG-16 & 99.69 & **78.75** & **0.79** & **0.79** & **0.79** & 0.68 & 0.9 & **0.88** & **0.71** \\ & VGG-19 & 99.53 & 70.63 & 0.7 & 0.71 & 0.7 & 0.55 & 0.82 & 0.79 & 0.63 \\ & ResNet50 & 41.56 & 45.63 & 0.5 & 0.49 & 0.45 & 0.45 & 0.73 & 0.34 & 0.28 \\ & MobileNet & 99.69 & 78.12 & **0.79** & 0.78 & 0.78 & **0.73** & 0.91 & 0.87 & 0.63 \\ & DenseNet & 99.22 & 77.50 & 0.78 & 0.76 & 0.77 & 0.73 & 0.88 & 0.83 & 0.63 \\ & InceptionV3 & 99.69 & 70.63 & 0.72 & 0.71 & 0.71 & 0.68 & 0.89 & 0.72 & 0.55 \\ & MobileNetV2 & 99.69 & 73.12 & 0.73 & 0.73 & 0.72 & 0.63 & 0.86 & 0.79 & 0.62 \\ & EfficientNetV2S & 75.63 & 48.12 & 0.49 & 0.49 & 0.46 & 0.32 & 0.72 & 0.26 & 0.55 \\ & ConvNeXtSmall & 61.56 & 51.25 & 0.41 & 0.41 & 0.4 & 0.31 & 0.59 & 0.3 & 0.42 \\ \hline \multirow{8}{*}{50} & Xception & 99.75 & 72.50 & 0.73 & 0.72 & 0.72 & 0.68 & 0.87 & 0.73 & 0.61 \\ & VGG-16 & 99.75 & 69.38 & 0.7 & 0.69 & 0.69 & 0.61 & 0.88 & 0.78 & 0.5 \\ & VGG-19 & 99.75 & 70.00 & 0.7 & 0.7 & 0.59 & 0.84 & 0.78 & 0.58 \\ & ResNet50 & 58.75 & 55.62 & 0.51 & 0.51 & 0.49 & 0.53 & 0.7 & 0.25 & 0.46 \\ & MobileNet & 99.75 & 73.12 & 0.74 & 0.73 & 0.73 & 0.65 & 0.87 & 0.8 & 0.61 \\ & DenseNet & 99.75 & 74.37 & 0.75 & 0.74 & 0.74 & 0.67 & 0.89 & 0.81 & 0.6 \\ & InceptionV3 & 99.75 & 63.75 & 0.62 & 0.63 & 0.62 & 0.63 & 0.77 & 0.63 & 0.42 \\ & MobileNetV2 & 99.75 & 70.63 & 0.7 & 0.71 & 0.7 & 0.65 & 0.85 & 0.7 & 0.59 \\ & EfficientNetV2S & 74.50 & 47.50 & 0.46 & 0.47 & 0.45 & 0.49 & 0.73 & 0.18 & 0.41 \\ & ConvNeXtSmall & 65.25 & 50.63 & 0.46 & 0.47 & 0.45 & 0.49 & 0.73 & 0.18 & 0.41 \\ \hline \multirow{8}{*}{20} & Xception & **100.00** & 67.50 & 0.67 & 0.68 & 0.67 & 0.62 & 0.82 & 0.7 & 0.54 \\ & VGG-16 & **100.00** & 65.00 & 0.66 & 0.66 & 0.66 & 0.76 & 0.68 & 0.6 \\ & VGG-19 & **100.00** & 62.50 & 0.62 & 0.62 & 0.62 & 0.54 & 0.73 & 0.64 & 0.58 \\ & ResNet50 & 61.87 & 48.75 & 0.5 & 0.47 & 0.47 & 0.52 & 0.57 & 0.26 & 0.52 \\ & MobileNet & **100.00** & 73.12 & 0.74 & 0.73 & 0.73 & 0.67 & 0.86 & 0.78 & 0.6 \\ & DenseNet & **100.00** & 71.88 & 0.71 & 0.71 & 0.71 & 0.65 & 0.89 & 0.76 & 0.54 \\ & InceptionV3 & **100.00** & 56.88 & 0.62 & 0.57 & 0.53 & 0.56 & 0.76 & 0.62 & 0.17 \\ & MobileNetV2 & **100.00** & 63.13 & 0.66 & 0.63 & 0.62 & 0.55 & 0.84 & 0.62 & 0.45 \\ & EfficientNetV2S & 85.62 & 48.12 & 0.45 & 0.45 & 0.42 & 0.52 & 0.58 & 0.29 & 0.31 \\ & ConvNeXtSmall & 68.75 & 44.37 & 0.59 & 0.42 & 0.37 & 0.49 & 0.52 & 0.05 & 0.44 \\ \hline \hline \end{tabular} \end{table} Table 2: **Weather Classification Benchmark** (Learning with Limited Data) with Benchmark Models (Xception [5], VGG16 [34], VGG19 [34], ResNet50 [14], MobileNet [15], DenseNet [16], InceptionV3 [35], MobileNetV2 [15], EfficientNetV2S [36], ConvNeXtSmall [20]). As expected, the models trained with 80-20 split (greatest training set) deliver the best results. forms worse when exposed to other shifts which it had learned poorly. 
Upon applying t-RAIN it shows 3 % increase in accuracy for no-shift and fog shift conditions. It shows 5% increase in accuracy for dust-shift attacks. 2. VGG-16 is one of the stronger learning models with higher performance values. It performs similar on all classes and attains high performance under all shifts, Figure 3: **How well do models differenciale between weather?** Evaluation of class-level performance of models under 80-20, 50-50 and 20-80 train-test splits using F1-Score Metric on DAWN dataset [17]. \begin{table} \begin{tabular}{l l l l l l l l l l l l} \hline **Split** & **Shift** & **M1** & **M2** & **M3** & **M4** & **M5** & **M6** & **M7** & **M8** & **M9** & **M10** \\ \hline \multirow{8}{*}{80} & 1 & 71 & 78 & 70 & 45 & 78 & 77 & 70 & 73 & 48 & 51 \\ & 2 & 70 & 77 & 68 & 56 & 81 & 79 & **73** & 72 & 42 & 49 \\ & 3 & 71 & 81 & 71 & 38 & 79 & 75 & 66 & 73 & **56** & 54 \\ & 4 & 75 & **83** & **76** & 53 & **81** & **81** & 73 & 78 & 53 & **57** \\ & 5 & 67 & 80 & 69 & 35 & 78 & 74 & 69 & 75 & 41 & 44 \\ \hline \multirow{8}{*}{50} & 1 & 72 & 69 & 70 & 55 & 73 & 74 & 63 & 70 & 47 & 50 \\ & 2 & **77** & 75 & 74 & 59 & 76 & 76 & 70 & 76 & 46 & 50 \\ & 3 & 73 & 68 & 70 & 52 & 71 & 74 & 59 & 69 & 43 & 52 \\ & 4 & 75 & 77 & 76 & 59 & 77 & 81 & 71 & **79** & 48 & 53 \\ & 5 & 67 & 68 & 66 & 46 & 69 & 71 & 59 & 69 & 37 & 43 \\ \hline \multirow{8}{*}{20} & 1 & 67 & 65 & 62 & 48 & 73 & 71 & 56 & 63 & 48 & 44 \\ & 2 & 70 & 68 & 65 & 51 & 77 & 74 & 68 & 66 & 53 & 47 \\ & 3 & 67 & 68 & 64 & 48 & 68 & 70 & 44 & 61 & 43 & 35 \\ & 4 & 74 & 69 & 67 & 45 & 77 & 77 & 58 & 73 & 50 & 35 \\ & 5 & 61 & 66 & 64 & 43 & 67 & 69 & 50 & 63 & 35 & 36 \\ \hline \multirow{8}{*}{t-RAIN} & 1 & 70 & 68 & 65 & 55 & 71 & 74 & 64 & 66 & 43 & 42 \\ & 2 & 70 & 71 & 62 & **61** & 74 & 76 & 66 & 71 & 42 & 38 \\ \cline{1-1} & 3 & 70 & 69 & 65 & 50 & 72 & 72 & 60 & 64 & 47 & 43 \\ \cline{1-1} & 4 & 72 & 73 & 69 & 56 & 80 & 78 & 68 & 67 & 43 & 38 \\ \cline{1-1} & 5 & 66 & 67 & 62 & 50 & 68 & 70 & 58 & 63 & 38 & 39 \\ \hline \end{tabular} \end{table} Table 3: **Weather Classification Benchmark:** Test Accuracy of Benchmark Models M1 to M10 from Left to Right (Xception [5], VGG16 [34], VGG19 [34], ResNet50 [14], MobileNet [15], DenseNet [16], InceptionV3 [35], MobileNetV2 [15], EfficientNetV2S [36], ConvNeXtSmall [20]) After Label Shift Shift 1 : None, Shift 2: Rain, Shift 3: Fog, Shift 4: Snow, Shift 5: Dust on DAWN dataset [17]. especially snow. When synthetic data is sampled via t-RAIN, it consistently outperforms baseline VGG-16 with 1-4% increases. This may point towards better generalization of stronger models due to efficient learning of complex representations from the extended input space. 3. VGG-19 shows similar trends to VGG-16 but with 3% under-performing margin for rain and dust shifts. 4. ResNet50 is one of the weakest learners with second-last performance in most cases. It generalizes better in limited data due to under-fitting and shows mediocre results which improve marginally for snow shifts. t-RAIN dramatically improves the ResNet performance on all shifts by 2-11 % accuracy. This is an important result, as we observe that variability in data can boost both strong and weak learners, with greater effects on weak learners. 5. EfficientNetV2S is a significantly weak learner with worst performance on dust shift. t-RAIN is only able to boost performance by 3-4 % on dust and fog shifts. 6. 
ConvNeXtSmall is also one of the more poorly performing models, with almost zero robustness to weather conditions like dust shift. Although t-RAIN improves robustness under all conditions except rain and no-shift by 3-8%, the model suffers under limited data constraints and plummets to the bottom rank. 7. MobileNet features good performance and fast training. It is not robust to fog corruptions but otherwise provides reasonable results, with 1-4% boosts for the majority of classes and worse performance under rain and no-shift. 8. DenseNet features similar trends, with baseline-level performance mainly for snow shifts. It features a consistent boost of 1-3% over all shifts uniformly. 9. MobileNetV2 features good performance and fast training. The model improved with t-RAIN under rain and fog shifts by 5% and 3%, respectively. 10. InceptionV3 is an average learner but suffers adversely from fog shift. Adding t-RAIN to such models significantly improves the performance: there is an 8% and 16% increase in test accuracy after using t-RAIN to improve no-shift and fog robustness, respectively. The 16% gain under fog shift is the most dramatic improvement among all the results, which is remarkable. Some general observations include snow being one of the easiest weather classes to recognize due to significant distinguishing characteristics. Model evaluation therefore becomes artificially easy when snow is included as one of the evaluation classes. For true robustness evaluation, such classes should be held out and only measured as sanity checks, not as robustness measures. As visible in Figure 6, the t-RAIN algorithm is able to improve generalization for all learners, from strong learners like VGG-16 to weak learners like EfficientNetV2S, and outperforms the limited-data performance with synthetic augmentation. The averaged improvements across all 5 shifts are 2.1, -0.8, 4.4, 1.9, and 2.7%, respectively. One interesting result raises the question of why t-RAIN boosts performance for specific class shifts in spite of the underlying uniform distribution and uniform augmentation. This could potentially be attributed to special variations and robustness introduced by the synthetic data, which helped gain generalization capabilities beyond the source distribution. \begin{table} \begin{tabular}{l l l} \hline **Weather/Data** & **DAWN** & **WEDGE** \\ \hline Rain & 73.5 & 29.7 \\ Snow & **82.69** & 22.34 \\ Dust & 59.66 & 60.89 \\ Fog & 73.32 & **62.31** \\ \hline \end{tabular} \end{table} Table 4: **Pedestrian detection in adverse weather**: We observe that the best detection performance is under Real Snow Conditions with **82.69 AP** and Synthetic Fog Conditions with **62.31 AP** when evaluated on the DAWN and WEDGE datasets. The number of images used for evaluation was 766 and 810, respectively. The model for detection is FasterRCNN with a ResNet-50 backbone pre-trained on COCO images [19]. Figure 4: **Size matters**: Effect of training set size on model performance; S-1, S-2, S-3 represent the 80-20 (red), 50-50 (blue) and 20-80 (green) train-test split variations of the trained models on the DAWN dataset [17]. ### Pedestrian Detection We use the earlier analysis as a prior to analyze the performance of models in detecting pedestrians under anomalous weather conditions. As visible in Table 4 and Figure 8, the best detection performance is under Real Snow Conditions with **82.69 AP** and Synthetic Fog Conditions with **62.31 AP** when evaluated on the DAWN and WEDGE datasets.
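As a reference for this setup, the following is a minimal sketch of how a COCO-pretrained FasterRCNN with a ResNet-50 backbone from torchvision can be applied to a single image to obtain pedestrian detections; the image path and score threshold are illustrative assumptions, computing the reported AP values additionally requires ground-truth boxes and a COCO-style evaluator, and older torchvision versions expect pretrained=True instead of weights="DEFAULT".

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained detector with a ResNet-50 backbone, as used for Table 4.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical DAWN/WEDGE image path; any RGB image works for this sketch.
image = to_tensor(Image.open("dawn/snow/snow_0001.jpg").convert("RGB"))

with torch.no_grad():
    pred = model([image])[0]   # dict with 'boxes', 'labels', 'scores'

# Keep confident pedestrian detections; label 1 is 'person' in the COCO label map.
keep = (pred["labels"] == 1) & (pred["scores"] >= 0.5)
person_boxes = pred["boxes"][keep]
print(len(person_boxes), "pedestrians detected")
```

Running this per image and feeding the kept boxes, together with the ground-truth pedestrian annotations, into a COCO-style evaluator would reproduce the kind of AP numbers reported in Table 4.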
Intuitively, another direction we explore is the adversarial difficulty of each weather condition for detecting pedestrians. In real data, dust appears to obstruct vision for pedestrian detectors the most, whereas in synthetic data, snow appears to obstruct vision the most. This is a surprising result, as snow is the easiest weather class in real data but the opposite holds in synthetic data. Due to this adversary, we can now understand why t-RAIN improves the generalization capability of 7/10 models under snow shift with a maximum increase of 10% accuracy. Conversely, ease of detection, as under fog conditions, does not imply ineffectiveness of t-RAIN: t-RAIN improves the generalization capability of 10/10 models under fog shift with a maximum increase of 16% accuracy. Further analysis can be done in the direction of discovering the distribution shift induced by synthetic generation. This analysis was not performed in the scope of this study, but there can be multiple possible underlying factors for the results reported in Table 4. Firstly, we can see that generated humans in synthetic data appear out-of-distribution due to the trust and security layers implemented for privacy concerns and obscuration. Thus, evaluation of pedestrian detection across Sim2Real data is not actually an informative indicator of generative accuracy but may hint towards undiscovered generative anomalies. Models that work well on real data and poorly on synthetic data could be suffering due to (a) the Real2Sim gap, (b) beneficial adversarial robustness, or (c) harmful generative anomalies that do not help real-world models. Identifying the exact cause for the performance shift is an interesting challenge that we propose for future works. Figure 5: **Understanding multi-weather robustness:** Relative performance of classification models under different label shift and training data distributions on the DAWN dataset [17]. Figure 6: **Contribution of t-RAIN to Model Generalization:** We demonstrate model performance under 5 shift conditions: No shift, Rain, Fog, Snow, Dust in the above 5 figures from left to right. The black arrows indicate the test accuracy of the t-RAIN algorithm and in-line coloured circles represent individual models. Whenever the black arrow appears above the circle, t-RAIN outperforms the limited data benchmark on the DAWN dataset [17]. ## 6 Conclusion Better overall test performance may not always signify better multi-weather generalization, but could be attributed to underlying factors like unseen target distribution shifts. Models may have possibly rote-learned specific classes and still go unseen as bad learners due to convenient boosts in model performance from label shifts. Weak learners also show pseudo-generalization capabilities which are usually too small in magnitude and too misleading to be considered significant. Leveraging weaker learners through ensemble methods can be explored in the future scope of this study. Through this small-scale study, we were able to uncover many insights on the fundamental problems of multi-weather robustness as an extension of the label shift and generalization problems of benchmark classification models. The applications of this study mainly concern autonomous perception in unsupervised settings, where model robustness is difficult to evaluate and target distributions are often skewed. They can extend to all real-world scenarios, like medical image analysis, species classification, etc., that showcase out-of-distribution examples and variable label distributions.
Given unlabelled target data, one can attain reasonable results if the model is robust to label shift on a uniform source label distribution. One might also attempt to predict the unlabelled target weather distribution up to a certain confidence using a well-trained model from this work. Another application can include using weather-classification labels as a prior for downstream computer vision tasks, like specialized image denoising specific to the weather condition. We consider the integration of large-scale generative models into our study as an example of improving on classical data collection methods with novel architectures for better generalization. In future work, we would like to put forward better methods for overcoming label shift vulnerability and weather-specific methods for robust all-weather vision extended to unsupervised settings. Figure 8: **Finding people in all seasons:** Pedestrian detection in real (DAWN) and synthetic (WEDGE) imagery (left to right). Figure 7: **Performance Evaluation**: The lines demonstrate model-wise performance differences as measured by the relative accuracy between the limited data benchmarks and the proposed t-RAIN algorithm under all 5 shift conditions. Fog shift appears to have the most dramatic improvement in test-time performance on the DAWN dataset [17].
2309.01795
Composite federated learning with heterogeneous data
We propose a novel algorithm for solving the composite Federated Learning (FL) problem. This algorithm manages non-smooth regularization by strategically decoupling the proximal operator and communication, and addresses client drift without any assumptions about data similarity. Moreover, each worker uses local updates to reduce the communication frequency with the server and transmits only a $d$-dimensional vector per communication round. We prove that our algorithm converges linearly to a neighborhood of the optimal solution and demonstrate the superiority of our algorithm over state-of-the-art methods in numerical experiments.
Jiaojiao Zhang, Jiang Hu, Mikael Johansson
2023-09-04T20:22:57Z
http://arxiv.org/abs/2309.01795v1
# Composite Federated Learning with Heterogeneous Data ###### Abstract We propose a novel algorithm for solving the composite Federated Learning (FL) problem. This algorithm manages non-smooth regularization by strategically decoupling the proximal operator and communication, and addresses client drift without any assumptions about data similarity. Moreover, each worker uses local updates to reduce the communication frequency with the server and transmits only a \(d\)-dimensional vector per communication round. We prove that our algorithm converges linearly to a neighborhood of the optimal solution and demonstrate the superiority of our algorithm over state-of-the-art methods in numerical experiments. Jiaojiao Zhang\({}^{\star}\), Jiang Hu\({}^{\dagger}\), Mikael Johansson\({}^{\star}\) \({}^{\star}\)School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology \({}^{\dagger}\) Massachusetts General Hospital and Harvard Medical School, Harvard University Footnote †: J. Zhang and M. Johansson are with the School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden. Email: {jiaoz,mikaelj}@kth.se. J. Hu is with the Massachusetts General Hospital and Harvard Medical School, Harvard University, Boston, MA 02114 ([email protected]). This work is supported in part by funding from Digital Futures and VR under the contract 2019-05319. Composite federated learning, heterogeneous data, local update. ## 1 Introduction Federated Learning (FL) is a popular machine learning framework where a server coordinates a large number of workers to train a joint model without any sharing of the local data [1]. The combination of distributed computations and the potential for privacy protection makes FL an attractive approach in various applications, such as machine learning [1], wireless networks [2], and the Internet of Things [3], to mention a few. Compared to conventional distributed learning, FL suffers from a communication bottleneck at the server and is more sensitive to data heterogeneity among workers [1, 4]. To improve communication efficiency, McMahan et al. introduced Federated Averaging (FedAvg) [5], where the workers perform multiple local updates before updating the server with their new local states. When data among workers is homogeneous, the use of local updates is a practical approach for improving communication efficiency [6]. However, when the data distribution becomes more heterogeneous, FedAvg begins to suffer from client drift. Numerous solutions have been proposed to overcome these challenges [7, 8, 1, 9]. Most existing FL algorithms focus on smooth problems. However, real-world applications often call for non-smooth objective functions, for example, when we want to find a solution within a restricted domain or encourage specific solution properties such as sparsity or low-rank [10]. This motivates us to address composite FL problems of the form \[\operatorname*{minimize}_{x\in\mathbb{R}^{d}}\,f(x)+g(x). \tag{1}\] Here, \(x\in\mathbb{R}^{d}\) is the decision vector (model parameters in a machine learning application), \(f(x):=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x)\) is the average data loss of the \(n\) workers, and \(g\) is a convex but possibly non-smooth regularizer.
To make the data dependence explicit, we let \(\mathrm{D}_{i}=\bigcup_{l=1}^{m_{i}}\mathrm{D}_{il}\) with \(\mathrm{D}_{i1},\ldots,\mathrm{D}_{im_{i}}\) being the \(m_{i}\) data points of worker \(i\), \(f_{il}(x;\mathrm{D}_{il})\) be the sample loss of worker \(i\) associated with the data point \(\mathrm{D}_{il}\), and \(f_{i}(x):=\frac{1}{m_{i}}\sum_{\mathrm{D}_{il}\in\mathrm{D}_{i}}f_{il}(x; \mathrm{D}_{il})\). Note that we do not make any assumptions about similarity between the datasets \(\mathrm{D}_{i}\). Solving (1) in the context of FL presents several challenges. Federated Mirror Descent (Fed-Mid), which is a natural extension of FedAvg that replaces the local stochastic gradient descent (SGD) steps in FedAvg with proximal SGD [11], faces the "curse of primal averaging" [10]. To illustrate this effect, consider the case when \(g\) is the \(\ell_{1}\)-norm. Although each worker generates a sparse local model after its local updates, averaging the local models at the server typically results in a solution that is no longer sparse. Another difficulty of solving (1) in the FL setting arises due to the coupling between the proximal operator and communication. If the server averages local models that have been updated using proximal operators, it is no longer possible to directly obtain the average of the gradients across all the workers due to the nonlinearity of the general proximal operators. This makes both the algorithm design and the analysis more challenging. ### Contribution We propose a novel algorithm for solving the composite FL problem. A key innovation of our algorithm is that it decouples the proximal operator evaluation and the communication to efficiently handle non-smooth regularization. Moreover, each worker uses local updates to reduce the communication frequency with the server and sends only a single \(d\)-dimensional vector per communication round, while addressing the issue of client drift efficiently. Without any assumptions on data similarity, we prove that our algorithm converges linearly up to a neighborhood of the optimal solution. ### Related Work **Smooth FL Problems.** FedAvg was originally proposed in [5], and a general analysis for the homogeneous data case was carried out in [6]. However, when data is heterogeneous, the use of local updates in FedAvg introduces client drift, which limits its practical usefulness. The client drift was analyzed theoretically under a notion of bounded heterogeneity in [12, 13], and several variants of FedAvg have been proposed to reduce or eliminate the drift [1, 7, 8, 9]. For example, SCAFFOLD [7] and MIME [9] tackle client drift by designing control variates to correct the local direction during the local updates. A drawback of these approaches is their need to communicate also the control variates, which increases the overall communication cost. Fedsplit [8], on the other hand, adopts the Peaceman-Rachford splitting scheme [14] to address the client drift through a consensus reformulation of the original problem where each worker only exchanges one local model per communication round. However, none of the mentioned algorithms can handle the composite FL problem. **Composite FL Problems.** Compared to the abundance of studies on smooth FL problems, there are few studies for general composite problems. One attempt to address this gap is the Federated Dual Averaging (FedDA) introduced in [10]. 
In this method, each worker performs dual averaging [15] during the local updates, while the server averages the local models in the dual space and applies a proximal step. Convergence is established for general loss functions by assuming bounded gradients. However, under data heterogeneity, the convergence analysis is limited to quadratic loss functions. The Fast Federated Dual Averaging (Fast-FedDA) algorithm [16] uses weighted summations of both past gradient information and past model information during the local updates. However, it comes with an additional communication overhead. While the convergence of Fast-FedDA is established for general losses, it still requires the assumption of bounded heterogeneity. The work [17] introduces Federated Douglas-Rachford (FedDR), which avoids the assumption of bounded heterogeneity. A follow-up of FedDR is FedADMM, proposed in [18], which uses FedDR to solve the dual problem of (1). In both FedDR and FedADMM, the local updates implement an inexact evaluation of the proximal operator of the smooth loss with adaptive accuracy. However, to ensure convergence, the accuracy needs to increase by iteration, resulting in an impractically large number of local updates. **Notation.** We let \(\|\cdot\|\) be \(\ell_{2}\)-norm and \(\|\cdot\|_{1}\) be \(\ell_{1}\)-norm. For positive integers \(d\) and \(n\), we let \(I_{d}\) be the \(d\times d\) identity matrix, \(1_{n}\) be the all-one \(n\)-dimensional column vector, and \([n]=\{1,\ldots,n\}\). We use \(\otimes\) to denote the Kronecker product. For a set B, we use \(|\mathrm{B}|\) to denote the cardinality. For a set \(\mathcal{C}\), we use \(I_{\mathcal{C}}(x)\) to denote the indicator function, where \(I_{\mathcal{C}}(x)=0\) if \(x\in\mathcal{C}\) and \(I_{\mathcal{C}}(x)=\infty\) otherwise. For a convex function \(g\), we use \(\partial g\) to denote the subdifferential. For a random variable \(v\), we use \(\mathbb{E}[v]\) to denote the expectation and \(\mathbb{E}[v|\mathcal{F}]\) to denote the expectation given event \(\mathcal{F}\). For vectors \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\), we let \(\mathrm{col}\{x_{i}\}_{i=1}^{n}=[x_{1};\ldots;x_{n}]\in\mathbb{R}^{nd}\). Specifically, for a vector \(\overline{x}\in\mathbb{R}^{d}\), we let \(\mathrm{col}\{\overline{x}\}_{i=1}^{n}=[\overline{x};\ldots;\overline{x}]\in \mathbb{R}^{nd}\). For a vector \(\omega\) and a positive scalar \(\tilde{\eta}\), we let \(P_{\tilde{\eta}g}(\omega)=\arg\min_{u\in\mathbb{R}^{d}}\tilde{\eta}g(u)+ \frac{1}{2}\|\omega-u\|^{2}\). Specifically, for \(\mathrm{col}\{\omega_{i}\}_{i=1}^{n}\), we let \(P_{\tilde{\eta}g}(\mathrm{col}\{\omega_{i}\}_{i=1}^{n})=\mathrm{col}\{P_{ \tilde{\eta}g}(\omega_{i})\}_{i=1}^{n}\). ## 2 Proposed Algorithm The per-worker implementation of the proposed algorithm is given in Algorithm 1. In general, our algorithm involves communication rounds indexed by \(r\) and local updates indexed by \(t\). In every round \(r\), workers perform \(\tau\) local update steps before updating the server. During the local updates, each worker \(i\) maintains the local model state before and after the application of the proximal mapping. We call these models pre-proximal, denoted \(\widehat{z}_{i,t}^{r}\), and post-proximal, denoted \(z_{i,t}^{r}\). The local mini-batch stochastic gradient with size \(b\) is computed at the post-proximal local model \(z_{i,t}^{r}\). 
Following this, a client-drift correction term \(c_{i}^{r}=\frac{1}{\eta_{g}\eta\tau}\big(P_{\tilde{\eta}g}(\overline{x}^{r-1})-\overline{x}^{r}\big)-\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla f_{i}(z_{i,t}^{r-1};\mathrm{B}_{i,t}^{r-1})\) is added to the update direction for the pre-proximal local model \(\widehat{z}_{i,t}^{r}\). At the end of the round, the final pre-proximal model, \(\widehat{z}_{i,\tau}^{r}\), is transmitted to the server. At the \(r\)-th communication round, the server also manipulates two models: a pre-proximal global model \(\overline{x}^{r}\) and a post-proximal global model \(P_{\tilde{\eta}g}(\overline{x}^{r})\). The server calculates the average of the pre-proximal local models \(\widehat{z}_{i,\tau}^{r}\) and uses the average information to update the post-proximal global model \(P_{\tilde{\eta}g}(\overline{x}^{r})\), ensuring that the server-side algorithm behaves similarly to a centralized proximal SGD approach. Finally, the server broadcasts the pre-proximal global model \(\overline{x}^{r+1}\) to all workers, which use it to update their correction terms \(c_{i}^{r+1}\). As shown in Appendix 7.1, the proposed algorithm can be described mathematically by the following iterations \[\left\{\begin{aligned} &\mathbf{\widehat{z}}_{t+1}^{r}=\mathbf{\widehat{z}}_{t}^{r}-\eta\Big(\nabla\mathbf{f}\left(\mathbf{z}_{t}^{r};\mathrm{B}_{t}^{r}\right)+\frac{1}{\tau}\sum_{t=0}^{\tau-1}\overline{\nabla\mathbf{f}}\left(\mathbf{z}_{t}^{r-1};\mathrm{B}_{t}^{r-1}\right)\\ &\qquad-\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla\mathbf{f}\left(\mathbf{z}_{t}^{r-1};\mathrm{B}_{t}^{r-1}\right)\Big),\;\forall t\in[\tau]-1,\\ &\mathbf{z}_{t+1}^{r}=P_{(t+1)\eta g}\left(\mathbf{\widehat{z}}_{t+1}^{r}\right),\;\forall t\in[\tau]-1,\\ &\mathbf{\overline{x}}^{r+1}=P_{\tilde{\eta}g}(\mathbf{\overline{x}}^{r})-\eta_{g}\eta\sum_{t=0}^{\tau-1}\overline{\nabla\mathbf{f}}\left(\mathbf{z}_{t}^{r};\mathrm{B}_{t}^{r}\right),\end{aligned}\right. \tag{2}\] where \(\mathbf{z}_{t}^{r}=\mathrm{col}\{z_{i,t}^{r}\}_{i=1}^{n}\), \(\mathbf{\widehat{z}}_{t}^{r}=\mathrm{col}\{\widehat{z}_{i,t}^{r}\}_{i=1}^{n}\), \(\mathbf{\overline{x}}^{r}=\mathrm{col}\{\overline{x}^{r}\}_{i=1}^{n}\), \(\nabla\mathbf{f}(\mathbf{z}_{t}^{r};\mathrm{B}_{t}^{r})=\mathrm{col}\{\nabla f_{i}(z_{i,t}^{r};\mathrm{B}_{i,t}^{r})\}_{i=1}^{n}\), and \(\overline{\nabla\mathbf{f}}(\mathbf{z}_{t}^{r};\mathrm{B}_{t}^{r})=\mathrm{col}\left\{\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(z_{i,t}^{r};\mathrm{B}_{i,t}^{r})\right\}_{i=1}^{n}\). When \(r=1\), we set \(\nabla f_{i}\left(z_{i,t}^{0};\mathrm{B}_{i,t}^{0}\right)=0_{d}\) for all \(t\in[\tau]-1\), which implies that \(\frac{1}{\tau}\sum_{t=0}^{\tau-1}\overline{\nabla\mathbf{f}}\left(\mathbf{z}_{t}^{0};\mathrm{B}_{t}^{0}\right)-\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla\mathbf{f}\left(\mathbf{z}_{t}^{0};\mathrm{B}_{t}^{0}\right)=0_{nd}\). Note that the updates of the post-proximal local models \(\mathbf{z}_{t+1}^{r}\) use the parameter \((t+1)\eta\) for computing the proximal operator \(P_{(t+1)\eta g}\big(\mathbf{\widehat{z}}_{t+1}^{r}\big)\). This is similar to using a decaying step size in stochastic gradient methods and significantly improves the practical performance of Algorithm 1, as we will demonstrate in numerical experiments. We give a more detailed motivation for this update in Appendix 7.2. A minimal numerical sketch of iterations (2) is given below; beyond this, our algorithm has the following additional features.
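To make the order of the updates in (2) concrete, the following is a small simulation sketch in numpy for a toy composite problem with \(g(x)=\vartheta\|x\|_{1}\), so that \(P_{\lambda g}\) is soft-thresholding, and with full local gradients. The synthetic data, the step sizes, and the per-round initialization \(\widehat{z}_{i,0}^{r}=z_{i,0}^{r}=P_{\tilde{\eta}g}(\overline{x}^{r})\) are illustrative assumptions made for this sketch and are not prescribed by Algorithm 1.

```python
import numpy as np

def prox_l1(w, lam):
    """Soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

rng = np.random.default_rng(0)
n, d, tau, R = 5, 20, 5, 300                 # workers, dimension, local steps, rounds
eta, eta_g, theta = 0.02, np.sqrt(5), 0.01   # local/server step sizes (assumed), l1 weight
A = [rng.normal(size=(40, d)) for _ in range(n)]                        # heterogeneous local data
y = [Ai @ rng.normal(size=d) + 0.1 * rng.normal(size=40) for Ai in A]
grad = lambda i, x: A[i].T @ (A[i] @ x - y[i]) / A[i].shape[0]          # gradient of local loss

tilde_eta = eta * eta_g * tau            # server proximal parameter
x_bar = np.zeros(d)                      # pre-proximal global model
c = [np.zeros(d) for _ in range(n)]      # correction terms (zero in round 1)

for r in range(R):
    prox_x = prox_l1(x_bar, tilde_eta * theta)       # post-proximal global model
    g_sum = [np.zeros(d) for _ in range(n)]          # accumulated local gradients
    for i in range(n):
        z_hat, z = prox_x.copy(), prox_x.copy()      # assumed per-round initialization
        for t in range(tau):
            gi = grad(i, z)
            g_sum[i] += gi
            z_hat = z_hat - eta * (gi + c[i])            # first line of (2)
            z = prox_l1(z_hat, (t + 1) * eta * theta)    # second line of (2)
        # In the federated setting, worker i now transmits z_hat to the server.
    # Server update, third line of (2); equals prox_x - eta_g*eta*sum of average gradients.
    avg_grad_sum = sum(g_sum) / n
    x_bar_next = prox_x - eta_g * eta * avg_grad_sum
    # Workers refresh their correction terms for the next round.
    c = [(prox_x - x_bar_next) / (eta_g * eta * tau) - g_sum[i] / tau for i in range(n)]
    x_bar = x_bar_next

post = prox_l1(x_bar, tilde_eta * theta)
loss = np.mean([np.sum((A[i] @ post - y[i]) ** 2) / (2 * A[i].shape[0]) for i in range(n)])
print("final composite objective:", loss + theta * np.sum(np.abs(post)))
```

In the actual federated implementation the server never forms the average gradient directly; it recovers the same quantity by averaging the transmitted \(\widehat{z}_{i,\tau}^{r}\), which is exactly the decoupling property discussed next.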
**Decoupling proximal operator evaluation and communication.** Each worker \(i\) manipulates a pre-proximal local model \(\widehat{z}^{r}_{i,t}\) during the local updates and sends \(\widehat{z}^{r}_{i,\tau}\) to the server after \(\tau\) local updates. The algorithm decouples proximal operator evaluation and communication in the sense that the server, by averaging \(\widehat{z}^{r}_{i,\tau}\), can directly obtain the average of the local gradients across the workers, \(\sum_{t=0}^{\tau-1}\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(z^{r}_{i,t};\mathrm{B}^{r}_{i,t})\); cf. the last step of (2). This is confirmed in the first step of (2), indicating that the average correction term among all workers is zero, i.e., \(\left(\frac{1_{n}1_{n}^{\top}}{n}\otimes I_{d}\right)\left(\frac{1}{\tau}\sum_{t=0}^{\tau-1}\overline{\nabla\mathbf{f}}\left(\mathbf{z}^{r-1}_{t};\mathrm{B}^{r-1}_{t}\right)-\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla\mathbf{f}\left(\mathbf{z}^{r-1}_{t};\mathrm{B}^{r-1}_{t}\right)\right)=0_{nd}\) for all \(r\in[R]\). In comparison, if each worker \(i\) naively uses proximal SGD with client-drift correction during the local updates, i.e., \(z^{r}_{i,t+1}=P_{(t+1)\eta g}\big(z^{r}_{i,t}-\eta\big(\nabla f_{i}(z^{r}_{i,t};\mathrm{B}^{r}_{i,t})+\frac{1}{\eta_{g}\eta\tau}(P_{\tilde{\eta}g}(\overline{x}^{r-1})-\overline{x}^{r})-\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla f_{i}(z^{r-1}_{i,t};\mathrm{B}^{r-1}_{i,t})\big)\big)\), and sends \(z^{r}_{i,\tau}\) to the server after \(\tau\) local updates, then the server can no longer extract the average gradient due to the nonlinearity of general proximal operators, and the resulting scheme becomes much more difficult to analyze. **Overcoming client drift.** From (2), we can better understand the role of the correction term. In fact, each worker \(i\) utilizes \(\frac{1}{\eta_{g}\eta\tau}\left(P_{\tilde{\eta}g}(\overline{x}^{r-1})-\overline{x}^{r}\right)\), which is equal to \(\frac{1}{n\tau}\sum_{t=0}^{\tau-1}\sum_{i=1}^{n}\nabla f_{i}(z^{r-1}_{i,t};\mathrm{B}^{r-1}_{i,t})\), to introduce the local gradient information of the other workers and then replaces the previous \(\frac{1}{\tau}\sum_{t=0}^{\tau-1}\nabla f_{i}\left(z^{r-1}_{i,t};\mathrm{B}^{r-1}_{i,t}\right)\) with the new \(\nabla f_{i}(z^{r}_{i,t};\mathrm{B}^{r}_{i,t})\). Intuitively, during the local updates, each worker \(i\) approximately minimizes \(\frac{1}{n}\sum_{j=1}^{n}f_{j}+g\) rather than \(f_{i}+g\) itself, which is crucial for overcoming client drift. Notably, only a \(d\)-dimensional vector is exchanged per communication round per worker, making the communication lightweight. ## 3 Analysis In this section, we prove the convergence of Algorithm 1. All the proofs can be found in Appendix 7. To facilitate the analysis, we impose the following assumptions on \(f_{i}\) and \(g\) [19, Theorem 5.8 and Theorem 5.24]. **Assumption 3.1**.: _The loss function \(f_{i}:\mathbb{R}^{d}\mapsto\mathbb{R}\) is both \(\mu\)-strongly convex and \(L\)-smooth._ **Assumption 3.2**.: _The function \(g:\mathbb{R}^{d}\mapsto\mathbb{R}\cup\{\infty\}\) is proper closed convex, but not necessarily smooth. 
In addition, we assume \(g\) satisfies one of the following conditions:_ * _for any_ \(x\in\mathbb{R}^{d}\) _and any_ \(\widetilde{\nabla}g(x)\in\partial g(x)\)_, there exists a constant_ \(0<B_{g}<\infty\) _such that_ \(\|\widetilde{\nabla}g(x)\|\leq B_{g}\)_;_ * \(g\) _is an indicator function of a compact convex set._ To handle the stochasticity caused by the random sampling \(\mathrm{B}^{r}_{i,t}\), we denote by \(\mathcal{F}^{r}_{t}\) the event generated by \(\{\mathrm{B}^{\tilde{r}}_{i,\tilde{t}}\ |\ i\in[n];\ \tilde{r}\in[r];\ \tilde{t}\in[t]-1\}\). We make the following assumptions regarding the stochastic gradients. **Assumption 3.3**.: _The stochastic gradients of each worker \(i\) satisfy_ \[\mathbb{E}\left[\nabla f_{i}(z^{r}_{i,t};\mathrm{B}^{r}_{i,t})|\mathcal{F}^{r}_{t}\right]=\nabla f_{i}(z^{r}_{i,t}), \tag{3}\] \[\mathbb{E}\left[\left\|\nabla f_{i}(z^{r}_{i,t};\mathrm{B}^{r}_{i,t})-\nabla f_{i}(z^{r}_{i,t})\right\|^{2}|\mathcal{F}^{r}_{t}\right]\leq\sigma^{2}/b.\] To measure the optimality, we define the Lyapunov function \[\Omega^{r}:=\|P_{\tilde{\eta}g}(\overline{x}^{r})-x^{\star}\|^{2}+\|\Lambda^{r}-\overline{\Lambda}^{r}\|^{2}/n, \tag{4}\] where \(\Lambda^{r}:=\eta(\tau\nabla\mathbf{f}(P_{\tilde{\eta}g}(\overline{x}^{r}))+\sum_{t=0}^{\tau-1}\overline{\nabla\mathbf{f}}(\mathbf{z}^{r-1}_{t};\mathrm{B}^{r-1}_{t})-\sum_{t=0}^{\tau-1}\nabla\mathbf{f}(\mathbf{z}^{r-1}_{t};\mathrm{B}^{r-1}_{t}))\), \(\overline{\Lambda}^{r}:=\mathrm{col}\left\{\frac{1}{n}\sum_{i=1}^{n}\Lambda^{r}_{i}\right\}_{i=1}^{n}\), and \(x^{\star}\) is the optimal solution to (1). The first component \(\|P_{\tilde{\eta}g}(\overline{x}^{r})-x^{\star}\|^{2}\) in the Lyapunov function \(\Omega^{r}\) serves to bound the optimality of the global model \(P_{\tilde{\eta}g}(\overline{x}^{r})\). The second component is used to bound the client-drift error, which measures how far the local models \(\{z^{r}_{i,\tau}\}_{i}\) are from the common initial point \(P_{\tilde{\eta}g}(\overline{x}^{r})\) after the local updates. This drift error can be controlled by the inconsistency of the local directions accumulated through the local updates, as characterized by \(\|\Lambda^{r}-\overline{\Lambda}^{r}\|^{2}/n\). We derive the following theorem. **Theorem 3.4**.: _Under Assumptions 3.1, 3.2, and 3.3, if the step sizes satisfy_ \[\tilde{\eta}:=\eta\eta_{g}\tau\leq\mu/(150L^{2}),\ \eta_{g}=\sqrt{n}, \tag{5}\] _then the sequence \(\{\Omega^{r}\}_{r}\) generated by Algorithm 1 satisfies_ \[\mathbb{E}\left[\Omega^{R+1}\right]\leq\left(1-\frac{\mu\tilde{\eta}}{3}\right)^{R}\mathbb{E}[\Omega^{1}]+\frac{30\eta\eta_{g}}{\mu}\frac{\sigma^{2}}{nb}+\frac{21\tau\eta\eta_{g}}{\mu n}B_{g}^{2}.\] Theorem 3.4 shows that \(\mathbb{E}\left[\Omega^{R+1}\right]\) converges linearly to a residual error of order \(\mathcal{O}(\eta\eta_{g}\sigma^{2}/(\mu nb)+\tau\eta\eta_{g}B_{g}^{2}/(\mu n))\). The first term in the residual is controlled by the stochastic gradient variance, while the second term in the residual is due to the bound on the subgradient \(\partial g\). Notably, in the special case when \(g(x)=I_{\mathcal{C}}(x)\) and \(\mathcal{C}\) is a convex compact set, we can get rid of \(B_{g}\) in the residual under the following assumption. **Assumption 3.5**.: _When \(g(x)=I_{\mathcal{C}}(x)\), for the optimal solution \(x^{\star}\), it holds that \(\nabla f(x^{\star})=0\)._ Assumption 3.5 is, for example, satisfied when the optimal solution \(x^{\star}\) is in the interior of the convex set \(\mathcal{C}\). 
**Corollary 3.6**.: _Under Assumptions 3.1-3.3, and 3.5, if the step sizes satisfy (5), then the sequence \(\{\Omega^{r}\}_{r}\) generated by Algorithm 1 satisfies_ \[\mathbb{E}\left[\Omega^{R+1}\right]\leq\left(1-\frac{\mu\tilde{\eta}}{3} \right)^{R}\mathbb{E}[\Omega^{1}]+\frac{30\eta\eta_{g}}{\mu}\frac{\sigma^{2}} {nb}.\] We will verify the theoretical results with numerical experiments in the next section. ## 4 Numerical Experiments Consider the sparse logistic regression problem \[\underset{x\in\mathbb{R}^{d}}{\operatorname{minimize}}\frac{1}{n}\sum_{i=1}^ {n}f_{i}(x)+\frac{\vartheta_{2}}{2}\|x\|^{2}+\vartheta_{1}\|x\|_{1}, \tag{6}\] where \(f_{i}(x)=\frac{1}{m}\sum_{l=1}^{m}\ln\left(1+\exp\left(-\left(\mathbf{a}_{il}^ {T}x\right)b_{il}\right)\right)\), \((\mathbf{a}_{il},b_{il})\)\(\in\mathbb{R}^{d}\times\{-1,+1\}\) is a feature-label pair for the \(l\)-th sample on worker \(i\), and \(\vartheta_{1}\) and \(\vartheta_{2}\) are the regularization parameters. The optimal solution \(x^{\star}\) of (6) is computed in advance and the performance is measured by the optimality defined as \(\text{optimality}:=\|P_{\eta g}(\overline{x}^{r})-x^{\star}\|/\|x^{\star}\|\). To generate data, we use the method in [1] which allows to control the degree of heterogeneity by two parameters \((\alpha,\beta)\). In the first set of experiments, we compare our algorithm with existing algorithms, namely FedMid [10], FedDA [10], and Fast-FedDA [16], which all use a fixed number of local updates to solve the composite FL problem. We set \((\alpha,\beta)=(10,10)\), \(n=30\), \(m=2000\), \(\vartheta_{2}=0.01\), \(\vartheta_{1}=0.0001\), and \(\tau=5\). For the proposed algorithm, we use hand-tuned step sizes \(\eta=1\) and \(\eta_{g}=1\). For FedMid and FedDA, we use the same step sizes \(\eta=1\) and \(\eta_{g}=1\). For Fast-FedDA, we use the adaptive step sizes as specified in [16], which are decaying step sizes. We evaluate the algorithms under both full gradients and stochastic gradients with \(b=20\). As shown in Fig. 1, when using the full gradients, our algorithm achieves exact convergence. Although Theorem 3.4 suggests the existence of a residual determined by \(B_{g}\) (the subgradient bound), our experimental results show better performance than the theoretical results, indicating that there is the possibility to improve the analysis. Due to client drift, FedMid and FedDA only converge to a neighborhood of the optimal solution. FedDA performs better than FedMid because it overcomes the curse of primal averaging. Fast-FedDA converges slowly due to its decaying step sizes. When we use stochastic gradients, our algorithm also converges to a neighborhood. The other algorithms still perform worse due to client drift or the use of decaying step sizes. In the second set of experiments, we examine the impact of the step size \(\eta\) and the number of local updates \(\tau\) on our algorithm. For the impact of \(\eta\), we fix \(b=50\), \(\eta_{g}=1\), and \(\tau=10\) and consider \(\eta\in\{0.02,0.2,1\}\). For the impact of \(\tau\), on the other hand, we fix \(b=50\), \(\eta_{g}=1\), and \(\eta=0.2\) and study \(\tau\in\{2,5,10\}\). As shown in Fig. 2, smaller step sizes \(\eta\) lead to slower convergence but higher accuracy. In addition, a larger number of local updates \(\tau\) leads to faster convergence while maintaining the same level of accuracy. ## 5 Conclusion We have proposed an innovative algorithm for federated learning with composite objectives. 
By decoupling the proximal operator evaluation and communication, we are able to handle non-smooth regularizers in an efficient manner. The algorithm reduces the communication frequency through local updates, exchanges only a \(d\)-dimensional vector per communication round per worker, and addresses client drift. We prove linear convergence up to a neighborhood of the optimal solution and show the advantages of the proposed algorithm compared to existing methods in numerical experiments. Figure 1: Comparison with existing methods using full gradients (left) and stochastic gradients (right), respectively. Figure 2: Impact of \(\eta\) (left) and \(\tau\) (right) on Algorithm 1.
2307.06556
Metal Oxide-based Gas Sensor Array for the VOCs Analysis in Complex Mixtures using Machine Learning
Detection of Volatile Organic Compounds (VOCs) from the breath is becoming a viable route for the early detection of diseases non-invasively. This paper presents a sensor array with three metal oxide electrodes that can use machine learning methods to identify four distinct VOCs in a mixture. The metal oxide sensor array was subjected to various VOC concentrations, including ethanol, acetone, toluene and chloroform. The dataset obtained from individual gases and their mixtures were analyzed using multiple machine learning algorithms, such as Random Forest (RF), K-Nearest Neighbor (KNN), Decision Tree, Linear Regression, Logistic Regression, Naive Bayes, Linear Discriminant Analysis, Artificial Neural Network, and Support Vector Machine. KNN and RF have shown more than 99% accuracy in classifying different varying chemicals in the gas mixtures. In regression analysis, KNN has delivered the best results with R2 value of more than 0.99 and LOD of 0.012, 0.015, 0.014 and 0.025 PPM for predicting the concentrations of varying chemicals Acetone, Toluene, Ethanol, and Chloroform, respectively in complex mixtures. Therefore, it is demonstrated that the array utilizing the provided algorithms can classify and predict the concentrations of the four gases simultaneously for disease diagnosis and treatment monitoring.
Shivam Singh, Sajana S, Poornima, Gajje Sreelekha, Chandranath Adak, Rajendra P. Shukla, Vinayak Kamble
2023-07-13T04:52:18Z
http://arxiv.org/abs/2307.06556v2
## Metal Oxide-based Gas Sensor Array for the VOCs Analysis in Complex Mixtures using Machine Learning ## Abstract Detection of Volatile Organic Compounds (VOCs) from the breath is becoming a viable route for the early detection of diseases non-invasively. This paper presents a sensor array with three metal oxide electrodes that can use machine learning methods to identify four distinct VOCs in a mixture. The metal oxide sensor array was subjected to various VOC concentrations, including ethanol, acetone, toluene and chloroform. The dataset obtained from individual gases and their mixtures were analyzed using multiple machine learning algorithms, such as Random Forest (RF), K-Nearest Neighbor (KNN), Decision Tree, Linear Regression, Logistic Regression, Naive Bayes, Linear Discriminant Analysis, Artificial Neural Network, and Support Vector Machine. KNN and RF have shown more than 99% accuracy in classifying different varying chemicals in the gas mixtures. In regression analysis, KNN has delivered the best results with R\({}^{2}\) value of more than 0.99 and LOD of 0.012, 0.015, 0.014 and 0.025 PPM for predicting the concentrations of varying chemicals Acetone, Toluene, Ethanol, and Chloroform, respectively in complex mixtures. Therefore, it is demonstrated that the array utilizing the provided algorithms can classify and predict the concentrations of the four gases simultaneously for disease diagnosis and treatment monitoring. Gas sensor array, Metal oxide, Volatile Organic Compound, Complex mixture, Machine learning. ## 1 Introduction Modern technology is becoming even more essential for applications relating to healthcare. Consequently, there is much interest in reducing surgical involvement and enhancing the early identification of illness. Since it is quicker, less intrusive, and more accessible than a traditional clinical assessment, identifying certain illnesses from human exhaled air has garnered great interest[1-3]. In this context, exhaled breath is the ideal non-invasive approach since it accurately captures the metabolic processes occurring within the human body[4, 5]. Compared to standard urine or serum tests, disease identification utilizing expiratory VOCs has emerged as the preferable approach for early screening. It is also highly relevant for continuous breath monitoring, which can reveal health anomalies that appear transiently or periodically[6]. Breath monitoring has several benefits, the most significant being a simple, quick, and straightforward sample collection method provided by its non-invasive nature[7]. Many volatile chemical compounds (on the order of hundreds) are found in an individual's breath[4]. Some Volatile Organic Compounds (VOCs), notably isoprene (heart disease), acetone (diabetes), toluene (lung cancer), nitrogen monoxide (asthma), pentane (heart disease) and ammonia (kidney dysfunction), are established indicators that anticipate underlying disorders[8-10]. However, as shown in Fig. 1(a), several variables affect the constitution of exhaled breath and can be broadly classified as lifestyle-based, health-based and environment-based. The usual range for toxicants in a person's exhaled breath is between parts per billion (PPB) and parts per trillion (PPT)[11]. The number of VOCs and their relative proportions are specific to the health of individuals, or unexpected VOCs may be released by irregular metabolic reactions[5].
Therefore, breath evaluation is often used to identify various diseases, including renal dysfunction, prostate cancer, and other types of cancers[1, 5, 12]. Identifying various indicators for each ailment makes it possible to distinguish between healthy people and those with illnesses using a sensor array. It is also possible to continually monitor those using wearable technology[6, 7, 13]. The brief involvement of these VOCs in various diseases through exhalation and their severe effects on the human body are expressed in Fig. 1(b). We have identified common VOCs like ethanol, toluene, acetone, and chloroform, among the biomarkers routinely used to analyze the response. Fig. 1: (a) Variables affecting the exhaled breath composition. (b) Cumulative breath biomarkers linked to cystic fibrosis, lung cancer, heart failure and diabetes (adapted from [14]). (c) The schematic of breath sampling with possible VOCs and the sensor array response to a mixture of gases, together with the AI route to deconvolute the complexity of the data and make composition predictions. Numerous potential uses, including exhaled-breath monitoring for the detection of small doses of ethanol, have lately gained considerable attention. Breath ethanol levels in a healthy individual are typically below 380 parts per billion. Nevertheless, this might increase to 2300 ppb in cases of alcoholism and a history of fatty liver[15, 16]. Exhaled breath includes several volatile chemicals, most present in minimal ppb concentrations. In such circumstances, it is thought that a person who has elevated breath acetone (T2DM \(>\) 1.71 ppm, T1DM \(>\) 2.19 ppm) may have diabetes. Type 1 diabetes mellitus (T1DM), an asymptomatic disease, is caused by the body's antibodies attacking and killing the beta cells that produce insulin in the pancreas. As a result, little or no insulin is created, which causes blood sugar levels to increase. It often affects children and young people, and insulin therapy is always required. In type 2 diabetes mellitus (T2DM), however, the body generates inadequate quantities of insulin or becomes resistant, making it difficult to maintain normal blood sugar levels; it often develops in adults and correlates with lifestyle factors, including obesity and inactivity. Breath acetone may reach approximately 21 ppm[17, 18], which would ring alarm bells for diabetes. Diabetes mellitus, a biochemical disorder linked to such occurrences, affects over 400 million people worldwide[19]. Diabetes and disorders with overlapping biomarkers are shown graphically in Fig. 1(b). In humans, breathing indoor air or consuming substantial quantities of chloroform-containing liquids such as chlorinated water may result in the presence of chloroform in breath[20]. Besides, chloroform has been reported to affect the liver, kidneys, and the neurological network in general (the brain)[21]. Current state-of-the-art technologies use gas chromatography followed by mass spectrometry to analyze breath samples to investigate specific VOCs in patient samples. Although these are precise, such techniques require a sophisticated setup and trained individuals, increasing the analysis cost. Moreover, it is also a time-consuming process to get the analysis report from centralized laboratories. These techniques also use labeling or pretreatment of the samples, which may affect the exact levels of the VOCs in complex media.
Recently, gas sensors have been explored to analyze VOCs in breath samples due to their simple design, high sensitivity, fast response time and cost-effectiveness[22, 23, 24, 25]. These sensors can be employed at the point-of-care for VOCs analysis. Among these, metal-oxide-based gas sensors have gained significant interest due to their small size, ease of operation, inexpensiveness, excellent sensing performance and low maintenance. However, despite the high sensitivity and fast response time, these sensors have yet to reach clinical studies due to the presence of interfering species generating overlapping and masking gas-sensing signals[26]. In the electrical signals generated by a gas sensor in a multi-component mixture, it is difficult to differentiate between the signals of the target analyte and those of interfering species. In the recent past, another approach, called the "electronic nose", has been adopted, where a gas sensor array is utilized in place of a single sensor to record the response in a multi-component mixture, and the data are analyzed using Machine Learning (ML) algorithms[27, 28]. The gas sensor array consists of non-specific sensors and records the fingerprints of the multi-component mixture. This approach reduced the effect of interfering species and required no pretreatment of the breath samples, thereby shifting the challenges of gas sensing from the physical to the digital domain. In this study, we employed a gas sensor array based on CuO, NiO and ZnO metal-oxide films deposited from metal targets sputtered with a Direct Current (DC) source. The responses to various analytes passing over the array were recorded for four volatile compounds: ethanol, toluene, acetone, and chloroform. The initial phase was mixing a single analyte with synthetic air and recording the response (resistance vs. time) for each electrode at 200 \({}^{\circ}\)C. One electrode was utilized at a time. After that, an experiment was conducted using two gases simultaneously, with one analyte kept constant at a specific concentration while the other was changed. There were twelve different potential combinations for the 200 \({}^{\circ}\)C parallel readings. Similarly, the response was measured while carefully purging three gases, with two held constant and the third varying. The gas sensor array's electrical response was examined using ML techniques. We have used different ML algorithms to analyze the data from the Metal Oxide Semiconductor (MOS) sensor array and compared their performances for the simultaneous detection of four VOCs. The ML algorithms were used to perform two types of analysis: (i) _classification_ to categorize the varying gas/chemical and (ii) _regression_ analysis to predict the concentration of the gas. Therefore, not only qualitative but also quantitative detection of four VOCs simultaneously allows the detection of multiple diseases and monitoring of the health of individuals. The proof-of-concept demonstration using a sensor array combined with ML algorithms can potentially analyze individual VOCs in breath samples to provide diagnostic and therapeutic information for diseases such as lung cancer, heart disease, diabetes and fibrosis. Further miniaturization and its application to point-of-care testing devices can improve diagnostics and treatment monitoring of diseases (e.g., cancer).
## 2 Experimental Details In this section, we discuss the fabrication of the MOS gas sensor array, followed by the experimental setups for ML-based gaseous chemical classification and regression analysis. ### 2.1 Fabricating the metal oxide (MOS) gas sensor array #### 2.1.1 Thin film deposition using DC-RF magnetron sputtering DC reactive magnetron sputtering was used to create thin films of CuO, NiO and ZnO on both glass and alumina substrates. For the DC magnetron sputtering, metal (copper, nickel, and zinc) targets (99.99%) of 1 inch in diameter and a few millimeters in thickness were employed. The sputter gas was pure argon (99.9997%), while the reactive gas was pure oxygen (99.9997%). Mass flow regulators controlled both gas flows independently. The sputtering chamber was evacuated to a base pressure of about \(10^{-6}\) mbar with the help of a turbo molecular vacuum pump and a rotary mechanical backing pump before the thin oxide films were deposited. Different voltage and current input parameters were used for the three targets. The argon flow rate was kept constant at 30 SCCM. Pre-sputtering was carried out for 10 minutes to ensure the target surface was thoroughly scrubbed. Following the pre-sputtering step, 10 SCCM of oxygen was added into the reaction chamber while the deposition pressure was maintained at a constant \(\sim\) 10\({}^{-2}\) mbar. Thin film deposition on the substrates began once the shutter was opened. The optimum deposition time (t\({}_{\text{d}}\)) was different for the three oxides, while the optimum substrate temperature (Ts) was 300 K. The best results were obtained by rotating the substrates while maintaining a distance of 6-8 cm from the target. Table S1 shows the variation of the deposition parameters for all three oxide films. Digital photographs of the sputtered samples are shown in Fig. S1 (inset). #### 2.1.2 Material characterization of MOS gas sensor array Powder X-ray diffraction (XRD) was used to examine the microstructure and crystallinity of the materials using a Bruker powder XRD device utilizing Cu K\(\alpha\) radiation (\(\lambda\) = 1.5418 Å) and a nickel filter. Data were gathered at a scan rate of 2 data points per minute, with the scattering angle 2\(\theta\) ranging from 10 to 80 degrees. The films' surface morphology was captured using a Nova NANOSEM 450 equipped with WDS and EDS. We used the secondary electron mode with an operating voltage of 10 kV for image acquisition. EDS was used to verify the elemental composition. Raman spectroscopy was performed with a Horiba Scientific Xplora Plus spectrometer using a 514-nanometer-wavelength argon laser. The samples' thickness was determined using a KLA Tencor D600 stylus surface profiler equipped with a step height measuring system. #### 2.1.3 Gas sensing studies using MOS sensor array: Experimental setup Gas sensing experiments were carried out by observing how the thin films' electrical resistance changed in response to various VOCs at fixed operating temperatures. The sample gases were introduced under dynamic flow conditions fixed by mass flow controllers (Alicat, United States) with varying capacities. In order to create the test gas vapors, synthetic air was bubbled over the volatile organic compounds, which were maintained at a constant temperature of zero degrees Celsius, with the carrier gas flow controlled by an MFC. The vapor concentrations were calculated using the Antoine equation. The dilution factors could be adjusted by combining the test-gas-saturated carrier flow with a synthetic air flow sustained by the MFC. Fig. S2 depicts the gas detection apparatus used in this investigation.
The films were mounted on a brass sample holder and placed in a sensing chamber that could reach 400 \({}^{\circ}\)C using a calibrated smart thermostat (Excel Instruments) to evaluate the gas sensing characteristics. A class-K thermocouple was inserted into the film frame and connected to the temperature readout to measure the sensor's temperature. An alumina substrate with interdigitated gold electrodes was utilized for measuring the sensor resistance. A Keithley 6517B electrometer linked to a workstation was used to measure the sensor resistance by applying a constant bias voltage of 10 V across two probes. With a resolution of 1 fA, it is a high-resistance analyzer that can measure resistances up to \(10^{15}\) ohms. The sensor response for ethanol and the other volatile organic chemicals was tested by exposing the deposited films to the appropriate vapours diluted in air. The % response (S) was calculated using eq (1) as below. \[\%\ Response,\ S=\frac{R_{a}-R_{g}}{R_{a}}\times 100 \tag{1}\] where \(R_{a}\) is the sensor resistance in airflow and \(R_{g}\) is the sensor resistance when the test gas is present. It should be noted that, for a given gas, the sign of the response is opposite for n-type and p-type devices. The sensor resistance decreases when an n-type material is exposed to a reducing gas because the gas injects excess carriers into the material; since the resistance decreases, the magnitude of the relative change grows monotonically but cannot exceed 100%. However, if the resistance increases due to gas exposure, as in the case of a p-type material subjected to a reducing gas, the relative change of the resistance can be greater than 100%, i.e., the resistance can grow to more than double its original value. The chemiresistive array sensing tests were done by passing a set amount of target gas mixed with a predefined proportion of air, determined by the equalization method, at fixed intervals. Both with and without the analyte, the total flow was maintained at 500 SCCM, and the two-probe mode was used to collect the sensor's resistance data. The sensors were tested by being exposed to ethanol concentrations of 100-2400 ppm at 200 \({}^{\circ}\)C. Individual response studies using toluene, chloroform, and acetone were also carried out under identical conditions. By cooling the liquids in the tube to the same temperature and using the same MFC dispersion ratios, similar studies were conducted at 200 \({}^{\circ}\)C to examine ethanol's cross-sensitivity to other gases such as toluene, chloroform, and acetone. ### 2.2 Gaseous chemical classification and regression analysis using machine learning models The dataset used in this study consists of gas sensor data comprising three different mixtures. Each mixture represents a distinct scenario based on the number of gases present, namely 1. _1-gas:_ a single gaseous chemical, 2. _2-gases:_ a mixture with one constant chemical and one varying chemical, 3. _3-gases:_ a mixture with two constant chemicals and one varying chemical. The chemicals involved in these mixtures are Acetone, Toluene, Chloroform, and Ethanol. On these 1-gas, 2-gases and 3-gases datasets, we performed the analysis. All possible combinations of the four biomarkers were employed to record the readings for the mixtures with two and three gases, as listed in Table S2. The gaseous chemical that needs to be classified, or whose concentration needs to be predicted, is kept variable in the datasets with mixtures of gases.
Three sensing components (CuO, NiO, and ZnO) are used in the dataset to record the measurements. The objective is to classify the varying gas (Acetone, Toluene, Chloroform, or Ethanol) or to predict its concentration. The datasets contain 2241436 sample rows for 1-gas, 227617 rows for the mixture of 2-gases, and 131120 rows for the mixture of 3-gases. There are 6, 8, and 10 columns in the abovementioned datasets, respectively. The correlation matrices for the three datasets are shown in Fig. S3 in the supplementary information. We performed two types of analysis: (i) _classification_ to categorize the varying gas/chemical and (ii) _regression_ analysis to predict the concentration of the gas. For the classification task with the 1-gas dataset, we used five features, i.e., resistance, time, concentration in terms of parts per million (PPM), temperature and electrode, to categorize the varying chemical. For the classification task with the 2-gases dataset, we used seven features, i.e., time, ZnO_resistance, NiO_resistance, CuO_resistance, constant_chemical (CC), CC_PPM and varying_chemical_PPM (VC_PPM), to classify the varying_chemical (VC). For the 3-gases dataset, we used nine features, i.e., time, ZnO_resistance, NiO_resistance, CuO_resistance, constant_chemical_1 (CC_1), CC_1_PPM, CC_2, CC_2_PPM, and varying_chemical_PPM, to classify the varying_chemical (VC). For the regression task with 1-gas, we predicted the gas concentration in PPM; for both the 2-gases and 3-gases datasets, we predicted VC_PPM. The rest of the column values were used as features. For the experimental analysis, the dataset was divided into training, validation and testing sets with a ratio of 56:14:30. To assess the performance of the classification analysis, the accuracy metric was used, and for the regression analysis, the mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), normalized RMSE (NRMSE), coefficient of determination (R\({}^{2}\)), Limit of Detection (LoD), and Limit of Quantification (LoQ) were employed[29-31]. Here, we present the results on the test dataset. We observed significant outliers in the dataset. Outliers in the input data may distort and deceive ML models during training, leading to longer training times, less accurate models and ultimately worse outcomes. Therefore, the outliers were eliminated using the data quantile information[31], defining an upper and a lower limit. A data value was eliminated from our primary data frame if it exceeded the upper limit or fell below the lower limit. The datasets underwent preprocessing steps to conduct a comprehensive analysis, including outlier detection and removal, min-max scaling to handle variations in feature values, and label encoding to address categorical features[32]. Categorical data[33] was encoded using label encoding, as there were only eight distinct values in the categorical column. It is crucial to convert categorical data into a numerical format to enable processing by ML models. Other approaches for handling categorical data include one-hot encoding and vectorization. Upon completing the dataset preprocessing, models were built using the selected algorithms. Rigorous hyperparameter tuning was performed for all the algorithms employed in this gas sensor dataset analysis. Grid search cross-validation was utilized for hyperparameter tuning[34].
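As an illustration of this preprocessing and tuning chain, the following is a minimal scikit-learn sketch for the 2-gases mixture dataset; the file name, exact column spellings, quantile limits, and hyperparameter grid are assumptions made for the example rather than the exact settings used in this work.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("two_gas_mixture.csv")  # hypothetical file name
resistance_cols = ["ZnO_resistance", "NiO_resistance", "CuO_resistance"]

# Quantile-based outlier removal on the resistance channels (limits are assumptions).
for col in resistance_cols:
    lo, hi = df[col].quantile(0.01), df[col].quantile(0.99)
    df = df[(df[col] >= lo) & (df[col] <= hi)]

# Label-encode the categorical constant chemical and the classification target.
df["constant_chemical"] = LabelEncoder().fit_transform(df["constant_chemical"])
y = LabelEncoder().fit_transform(df["varying_chemical"])
X = df[["time"] + resistance_cols + ["constant_chemical", "CC_PPM", "VC_PPM"]].to_numpy()

# 56:14:30 train/validation/test split (70/30 first, then 80/20 within the 70%).
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.20, stratify=y_tmp, random_state=42)

# Min-max scaling fitted on the training data only.
scaler = MinMaxScaler().fit(X_train)
X_train, X_val, X_test = scaler.transform(X_train), scaler.transform(X_val), scaler.transform(X_test)

# Grid-search cross-validation for KNN hyperparameters.
search = GridSearchCV(KNeighborsClassifier(),
                      {"n_neighbors": [3, 5, 7, 11], "weights": ["uniform", "distance"]},
                      cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print("validation accuracy:", search.best_estimator_.score(X_val, y_val))
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```

The same chain applies to the regression task by swapping in a KNeighborsRegressor and an R\({}^{2}\)- or RMSE-based scorer.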
_Machine configuration_: All the ML based analysis were performed on the TensorFlow-2 framework having Python 3.7.13 over a computer with Intel(R) Xeon(R) CPU @ 2.00GHz having 52 GB RAM and Tesla T4 16 GB GPU. ## 3 Results This section discusses the fabrication of devices, characterization of gas sensor array, ML-based classification and regression of gaseous chemicals. ### Device fabrication, characterization of the gas sensor array and sensing studies In order to create our device, we used a DC reactive magnetron sputtering technique to deposit copper, nickel, and zinc oxide on an alumina substrate having interdigitated gold electrodes with the corresponding metal targets. Table S1 represents the sputtering parameter to ensure the deposition process is accurate. Pre-deposition of 10 minutes was done to ensure that the surface was thoroughly scrubbed and no contamination was left. Fig. S1 (inset) represents the schematic diagram of our fabricated device. The gold electrodes were vital because they helped the device detect resistance changes when exposed to different gases. #### 3.1.1 Material characterization of MOS gas sensor array Although the samples used in gas sensing are deposited on Alumina substrates, the XRD of those films was primarily dominated by highly crystalline Alumina substrate peaks. Therefore, to confirm the crystallinity of each synthesized sensor film, the same were also deposited on glass and investigated using XRD. The corresponding XRD patterns of CuO, NiO, and ZnO are displayed in Fig. 2(a). The bottommost XRD data in Fig. 2(a) illustrates the characteristic patterns for the (002), (-111) and (111) reflections of a monoclinic CuO layer with lattice constants of 5.13, 3.42, and 4.68 A (JCPDS: 01-073-6023). Similarly, the (111) and (200) considered as the top of NiO (JCPDS: 00-047-1049) were matched by the middle XRD pattern, which corresponds to a cubic arrangement with lattice constants of \(\mathbf{a}=\mathbf{b}=\mathbf{c}=4.17\) A. The top-most XRD pattern in Fig. 2(a) shows a single peak that matches the ZnO wurtzite structure containing positions (100), (002) and (101) (JCPDS: 00-036-1451), and it possesses a hexagonal structure with cell parameters (\(\mathbf{a}=\mathbf{b}=3.25\) and \(\mathbf{c}=5.21\) A). No extra peaks corresponding to any impurity are seen to the best of the resolution in any of the XRD patterns. From all these XRD data, it may be inferred that each of the oxide layers is formed albeit with a minimal thickness which results in the poor intensity of peaks and primarily ZnO and NiO show a very strong texture in crystallinity marked by a single diffraction peak. This implies that the films are preferentially oriented (except CuO) along a certain direction[22]. This happens due to homogeneous nucleation of the oxide crystals, which grow along the crystal's energetically most favorable (lowest formation energy) planes. Moreover, the broadening of the peaks reflects a smaller crystallite size, possibly due to a lack of energy for long-range growth as the deposition is carried out at room temperature. Nevertheless, such small crystallite size and low thickness are favorable for gas sensing as the sensing response is dramatically improved if the dimensions are of the order of space charge region[35]. Because of the low thickness of the films, the XRD pattern is not significant in analyzing the crystallinity of the films. Therefore, Raman spectra have been investigated for all three samples at room temperature. 
Here the signal is collected from the tiny focus of the laser beam on the film surface in back-reflection geometry. Therefore, it is far more sensitive to the surface film than to the substrate that lies below. The Raman spectra identify the vibrational properties of each material at ambient temperature and are found to give peaks that are unique to each material. The copper oxide Raman spectrum on an alumina substrate is shown in Fig. 2(b), depicting Raman modes at 294.38, 343.3, and 628.8 cm\({}^{-1}\). The peak positions for this specimen are in close vicinity of the corresponding reported CuO values[36, 37]. Several factors, such as poor crystallinity, an accumulation of structural faults in the crystalline lattice, and fluorescence of the incident radiation, may be responsible for the broad baseline around 400 and 600 cm\({}^{-1}\) seen in this spectral region.

Figure 2: The (a) XRD patterns and Raman spectra of (b) CuO, (c) NiO and (d) ZnO of the thin films at room temperature (for XRD, the samples were also deposited on glass substrates). Scanning electron micrographs of CuO, NiO and ZnO at low (e-g) and high (h-j) magnifications, respectively. EDS spectra of (k) CuO, (l) NiO and (m) ZnO.

Fig. 2(c) depicts the Raman spectra recorded for NiO thin films that were deposited on a glass substrate for 18 minutes. As per the identification in ref. [38], the observed peaks may be ascribed to the one-phonon (1P) TO and LO modes (at 570 cm\({}^{-1}\)), the two-phonon (2P) 2TO modes (at 730 cm\({}^{-1}\)), and the 2LO modes (at 1090 cm\({}^{-1}\)), respectively, thereby confirming the phase. The moderate 2TO peak (at 730 cm\({}^{-1}\)) corresponds to a two-phonon (2P) transverse sequence, while the LO peak at 570 cm\({}^{-1}\) and the 2LO peak at 1090 cm\({}^{-1}\) relate to longitudinal optical (LO) phonon modes of first and second order, respectively. The considerable breadth of the LO peak (570 cm\({}^{-1}\)) indicates both the Ni-O bond's stretching mode and defects[39]. The Raman modes A1(TO) positioned at 380 cm\({}^{-1}\), E2(H) at 435 cm\({}^{-1}\) and ELO at 583 cm\({}^{-1}\) constitute the vibrational configurations corresponding to the hexagonal wurtzite geometry of ZnO[40, 35] on an alumina surface, as shown in Fig. 2(d). The peak at 325 cm\({}^{-1}\) matches the second-order vibration mode originating from the zone-boundary phonons [E2(high)-E2(low)] of hexagonal ZnO[40], while the 2LA peak (536.4 cm\({}^{-1}\)) corresponds to second-order longitudinal acoustic (LA) phonon vibration. The sharp peak at 418 cm\({}^{-1}\) is attributed to ZnO's E1(TO) state because of the oxygen deficiencies or zinc interstitials. According to Ristic et al.[41], this peak is often found in bulk ZnO grains. The peak at 510.75 cm\({}^{-1}\) corresponds to the primary Raman signal of the ZnO A1L weak mode in the wurtzite structure. Overall, the three samples' Raman spectra show very low intensities and significant peak broadening. Like XRD, this broadening results from the sufficiently small size of the crystallites. Therefore, these results are in good agreement with the XRD results of the films; however, they provide better confirmation of the single-phase nature of the oxide films and their nanocrystallinity. Along with the crystalline structure, the morphology (shape, grain size, porosity, etc.) of the sensor films significantly affects the sensing attributes of the chemiresistive sensors. Therefore, the microstructure and morphology of the films are examined using scanning electron microscopy along with microscopic composition analysis using energy dispersive X-ray spectroscopy.
The same for all three films is shown in Fig. 2(e)-2(j) at low and high magnifications. It should be noted that the films are deposited on a polycrystalline alumina substrate that has a distinct grain structure. The same is seen in the SEM images of all the films. However, the sensing oxide film deposited on its top takes an almost conformal shape of the alumina substrate grains. It is therefore not easily seen at low magnification (Fig. 2(e, f, g)). Upon close inspection at high magnifications, the smaller crystallites of the sensing oxide are seen in all three films (Fig. 2(h, i, j)). It may be seen that the particles are clustered, making it exceedingly difficult to determine their form. The granules develop in tiers, and the texture appears rough. Nevertheless, such high surface roughness, and thereby high surface area, is beneficial for gas sensing devices. The ZnO films particularly show a 2D flake-like morphology. Typically, ZnO sheets, when formed, have (002) orientation due to the low free energy of formation[42]. The presence of sheets, along with a single (002) peak in XRD, points to the same. The local chemical composition of the films is examined through EDS spectroscopy. It may be seen from Fig. 2(k, l, m) that, along with the Al from the substrate, only a single metal (Cu, Ni, or Zn) is seen in the spectrum of each film, while the oxygen peak may arise from either the sensor film or the substrate, as both are oxides.

#### 3.1.2 Electrical Measurement of the MOS gas sensor array

The I-V characteristics of the oxide thin films deposited on the alumina substrate with gold IDEs were investigated from room temperature to 300 \({}^{\circ}\)C. The bias voltage was swept between -10 V and +10 V for each sensor, demonstrating ohmic contact throughout the entire temperature range. The resistance values at each temperature were calculated from the I-V slopes. Fig. S4(a, d), S4(b, e) and S4(c, f) show the I-V plots for CuO, NiO and ZnO in linear and logarithmic scales. The resistance values so deduced were plotted as a function of temperature, and all the samples demonstrated typical insulating/semiconducting nature (see Fig. S5 in the supporting information section). The typical value of resistance was about 500 k\(\Omega\), 20 M\(\Omega\) and 100 M\(\Omega\) for CuO, NiO and ZnO, respectively, at room temperature. These dropped to 423 \(\Omega\), 14 k\(\Omega\), and 684 k\(\Omega\) at 300 \({}^{\circ}\)C for CuO, NiO, and ZnO, respectively. Here, CuO and NiO are p-type semiconductors, while ZnO is an n-type semiconductor. The typical carrier type in these binary oxides arises because of particular defect chemistry[43, 44, 45, 46]. The p-type oxides have metal vacancies, whereas n-type oxides have oxygen vacancies as the dominant type of defect. These give rise to acceptor and donor levels within the forbidden gap, respectively. In this case, the thin film fabrication was done under significant oxygen partial pressures (30:10 SCCM of Ar and O\({}_{2}\) ratio). It ensures a high lattice oxygen content in the films, increasing metal vacancies for p-type and reducing oxygen vacancies for n-type. Therefore, the p-type films are more conducting than the n-type oxides under oxygen-rich deposition conditions[46].

#### 3.1.3 Gas Sensing Measurements and data curation

In order to generate the response dataset for the gas sensor array with the selected gases, a large number of experiments were performed.
Here, sensor temperature, gas concentration and gas type have been identified as primary parameters for the sensor output. As seen in Fig. 3(a-c), the gas sensor's response was calculated and plotted for each gas at different concentrations. Overall, NiO showed a highly selective response to ethanol while also showing a high response to all the gases. At the same time, ZnO had a consistently low response yet was selective to ethanol (see Fig. 3(d)). The actual data sets are shown in the Supporting information section, Fig. S6.

Fig. 3: The individual gas sensing results for four test gases, Toluene, ethanol, acetone and chloroform, of the three samples (a) ZnO, (b) NiO and (c) CuO at 200 \({}^{\circ}\)C. (d) The comparison of the response for 1000 ppm of each gas for each of the sensing electrodes, showing a preferred selectivity for ethanol in NiO and ZnO, whereas the CuO sensor does not show any preferred selectivity.

The consistently low and high responses of ZnO and NiO may be attributed to their commensurate (low and high) defect concentrations, respectively, as defects provide an active site for surface oxygen adsorption[47, 48]. The single gas experiment results shown in Fig. S6 are straightforward and are similar to how traditional gas sensors are reported. However, as mentioned earlier, detecting test gases becomes challenging in the presence of other potentially interfering gases. The experiments were designed such that a predetermined concentration of the interfering species is first supplied as a background flow in the chamber, followed by the introduction of the test gas (2-gases), in order to assess the impact of the interfering species (other gas) on the primary analyte (test gas ethanol). Calculations were made using the response values after varying the test gas concentration. The two interfering gases were maintained constant in the next series of trials (3-gases) while the test gas concentration was altered. The representative data for ethanol response in chloroform (2-gases) and in Toluene + chloroform (3-gases) have been shown in Fig. 4(a and b). The other data sets have been shown in the Supporting information section, Fig. S7 & Fig. S8, for 2-gases and 3-gases, respectively. The response values calculated here for 2-gases and 3-gases show that the presence of any other VOC led to a drastic reduction in response. The representative data for NiO response in the absence & presence of a single interfering gas and a double interfering gas are shown in Fig. 4(c and d), respectively. Similar results are obtained for the other sensors and/or permutation-combinations of the gases. The supplementary section Fig. S9 contains the response vs concentration data for CuO and ZnO. Therefore, analyzing complex mixtures of gases requires non-linear data processing. Hence, we have employed ML algorithms for classification and regression analysis. The results are presented in subsequent sections.

Figure 4: The response to one gas present alongside another was investigated for all possible combinations. Fig. 4(a) illustrates ethanol sensing in chloroform and 4(b) represents ethanol sensing in chloroform & toluene, at 200 \({}^{\circ}\)C. Fig. 4(c) and 4(d) show the computed response values for NiO in the absence & presence of a single interfering gas and a double interfering gas, respectively.

### Classification and regression analysis of gases using machine learning

#### 3.2.1 Gas classification

To reduce the complexity of the data while preserving trends and patterns, we used Principal Component Analysis (PCA)[49] on the sensor signal response.
The variances of the first 5 principal components (PC1, PC2, PC3, PC4, and PC5) are shown in Table S3 for the 1-gas, 2-gases and 3-gases datasets. A pictorial representation of the variability of the first 5 PCs is shown in Fig. S10. Here, we formulated the task as a classification problem to classify the gaseous chemicals, i.e., Acetone, Toluene, Chloroform, and Ethanol. The classification models were developed using supervised learning techniques, e.g., Logistic Regression[47], K-Nearest Neighbor (KNN)[48], Naive Bayes (NB)[50], Random Forest (RF)[51] and Linear Discriminant Analysis (LDA)[52], based on the PCA results for the gas classification. The plotted points are dispersed depending on the type of chemical used, as shown in Figs. S11-S12 and Fig. 5. By taking into account PC1 and PC2, we obtained the 2D plots of Figs. S11-S12 and Fig. 5 over the three datasets; PC1, PC2, and PC3 were also employed to produce 3D graphs. In logistic regression[47], the training procedure employed the one-vs-rest scheme since our task involves multiple classes. We used cross-entropy loss and L2 regularization here[53]. In KNN[48], the number of nearest neighbors was empirically set to five, and the distance metric was chosen as Euclidean. NB[50] is a supervised learning technique based on Bayes' theorem, which assumes that every pair of features is conditionally independent given the class variable value. In order to classify our data, we employed the Gaussian Naive Bayes method. We also employed the RF and Extra-Trees methods, two averaging algorithms based on randomized decision trees[51]. Each algorithm uses a perturb-and-combine method tailored for trees: adding randomization to the classifier design creates a diverse group of classifiers, and the average forecast of the individual classifiers represents the ensemble prediction. Using Bayes' rule and fitting conditional class densities to the data, LDA[52] produces a linear decision boundary for classification. The model fits a Gaussian density to each class and assumes that all classes share the same covariance matrix. Figs. S11-S12 and Fig. 5 display the 2D and 3D plots of the three datasets obtained after classification using the above-employed methods. In Table 1, we present the accuracies obtained by the employed models. Here, KNN and random forest attained good accuracies for all three datasets, in contrast to ML models like logistic regression, NB and LDA. For the 1-gas and 3-gases datasets, KNN performed the best, while random forest attained the best result for the 2-gases dataset, despite their otherwise similar performances. In Figs. S11-S12 and Fig. 5, we can also observe the misclassifications produced by logistic regression, NB, and RF; for example, in Fig. 5 (bottom-left), it can be seen that part of the Ethanol cluster has been misclassified as Acetone.
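To make the classification workflow concrete, the following is a minimal sketch of the PCA + supervised classification flow described above; it assumes the preprocessed feature matrices X_train/X_test and encoded labels y_train/y_test from the earlier preprocessing sketch, and the listed settings (5 components, 5 neighbors, etc.) mirror the ones stated in the text.

```python
# Minimal sketch: PCA on the sensor signals followed by the five classifiers used in the study.
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

# Keep the first 5 principal components (PC1-PC5), as in Table S3.
pca = PCA(n_components=5).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

classifiers = {
    "Logistic Regression": LogisticRegression(multi_class="ovr", penalty="l2", max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}

for name, clf in classifiers.items():
    clf.fit(Z_train, y_train)
    acc = accuracy_score(y_test, clf.predict(Z_test))
    print(f"{name}: {100 * acc:.5f}% accuracy")
```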
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
**Dataset** & **Logistic Regression** & **KNN** & **Naïve Bayes** & **Random Forest** & **LDA** \\
\hline
**1-gas** & 65.56543 & **99.99802** & 71.75669 & 99.99679 & 60.06396 \\
\hline
**2-gases** & 42.21952 & 99.81154 & 60.93418 & **99.82108** & 39.78625 \\
\hline
**3-gases** & 38.73490 & **99.03290** & 51.83471 & 98.70436 & 39.76216 \\
\hline
\end{tabular}
\end{table} Table 1: Model performances (accuracy in %) over various gas mixture datasets

#### 3.2.2 Regression analysis: quantification of gases in different mixtures

In this analysis, we found that the KNN-based regression[54] significantly exceeded the other algorithms in terms of performance when compared with some other contemporary models, such as Artificial Neural Network (ANN), RF, Decision Tree, and Linear Regression[51, 53-56]. The performance of the KNN relies on various parameters, such as the distance metric used to evaluate similar data points, the number of neighbors taken into consideration, and the weighting method used to aggregate their values. In this study, we attempted to enhance the effectiveness of the KNN in estimating the gas concentration in mixtures. In order to decrease MSE and increase the R\({}^{2}\), which gauges how much variance can be explained by the model, we set out to identify the optimal set of parameters. To fine-tune the model, we experimented with various distance metrics, such as Euclidean, Manhattan and Minkowski, with p=3 and p=4[34]. We used two weighting schemes, uniform and distance (wherein closer neighbors have a higher weight), and we adjusted the number of neighbors taken into consideration, ranging from 1 to 10. The model's performance was checked by applying cross-validation on the training and validation sets, and the optimum set of parameters was selected based on the lowest MSE and the best R\({}^{2}\). During the hyperparameter tuning procedure for the KNN regression, the best parameter choices for each gas mixture were identified. For all the datasets, i.e., 1-gas, 2-gases, and 3-gases, the Euclidean distance metric, five nearest neighbors, and distance weighting were the most effective choices. Encouraging results were obtained when analyzing the algorithm's performance with these optimal parameter configurations.
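The following is a minimal sketch of the kind of KNN-regression hyperparameter search described above, using scikit-learn's grid search; X_train and y_train are assumed to hold the scaled features and the VC_PPM target from the earlier preprocessing sketch, and the exact scoring/refit choices are an illustrative assumption.

```python
# Minimal sketch: grid search over distance metric, number of neighbors, and weighting for KNN regression.
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

param_grid = [
    {"n_neighbors": range(1, 11), "weights": ["uniform", "distance"],
     "metric": ["euclidean", "manhattan"]},
    {"n_neighbors": range(1, 11), "weights": ["uniform", "distance"],
     "metric": ["minkowski"], "p": [3, 4]},
]

# Select the configuration with the lowest cross-validated MSE; R^2 is tracked as a secondary score.
search = GridSearchCV(KNeighborsRegressor(), param_grid,
                      scoring={"mse": "neg_mean_squared_error", "r2": "r2"},
                      refit="mse", cv=5)
search.fit(X_train, y_train)
print(search.best_params_)  # e.g., {'metric': 'euclidean', 'n_neighbors': 5, 'weights': 'distance'}
```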
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
**Dataset** & **Gas Name** & **RMSE** & **MSE** & **MAE** & **NRMSE** & **R\({}^{2}\)** & **LoD** & **LoQ** \\
\hline
1-gas & Acetone & 0.00086 & 7.43\(\times\)10\({}^{-7}\) & 0.00001 & 0.00114 & 0.99997 & 0.00344 & \\
 & Toluene & 0.00082 & 6.77\(\times\)10\({}^{-7}\) & 0.00001 & 0.00109 & 0.99997 & 0.00328 & 0.01095 \\
 & Ethanol & 0.00076 & 5.82\(\times\)10\({}^{-7}\) & 0.00001 & 0.00101 & 0.99997 & 0.0030 & 0.01015 \\
 & Chloroform & 0.00153 & 2.35\(\times\)10\({}^{-6}\) & 0.00004 & 0.00203 & 0.99990 & 0.0061 & 0.02039 \\
\hline
2-gases & Acetone & 0.00131 & 1.72\(\times\)10\({}^{-6}\) & 0.00002 & 0.00319 & 0.99996 & 0.00957 & \\
 & Toluene & 0.00094 & 8.98\(\times\)10\({}^{-7}\) & 0.00001 & 0.00226 & 0.99998 & 0.00678 & \\
 & Ethanol & 0.00095 & 9.21\(\times\)10\({}^{-7}\) & 0.00001 & 0.00230 & 0.99998 & 0.00692 & \\
 & Chloroform & 0.00194 & 3.79\(\times\)10\({}^{-6}\) & 0.00006 & 0.00466 & 0.99992 & 0.01400 & 0.04669 \\
\hline
3-gases & Acetone & 0.00163 & 2.67\(\times\)10\({}^{-6}\) & 0.00005 & 0.00393 & 0.99994 & 0.01179 & \\
 & Toluene & 0.00204 & 4.19\(\times\)10\({}^{-6}\) & 0.00006 & 0.00496 & 0.99991 & 0.01488 & \\
 & Ethanol & 0.00196 & 3.87\(\times\)10\({}^{-6}\) & 0.00005 & 0.00474 & & 0.01422 & \\
\hline
\end{tabular}
\end{table} Table 2: Prediction performance of KNN regression on 1-gas, 2-gases, and 3-gases datasets.

Table 2 presents the prediction performances of KNN regression on the 1-gas, 2-gases, and 3-gases datasets, respectively, in terms of RMSE, MSE, MAE, NRMSE, R\({}^{2}\), LoD, and LoQ. The model successfully predicted the target variable for the 1-gas mixture with an R\({}^{2}\) of more than 0.99, showing its high prediction performance. The corresponding errors (RMSE, MSE, MAE, and NRMSE) were also determined to be very low. The model likewise obtained an outstanding R\({}^{2}\), i.e., greater than 0.99, for the 2-gases and 3-gases mixtures, implying a strong connection between observed and predicted values. The errors were also near zero, implying comparatively small prediction mistakes. The model also excelled in other performance metrics, e.g., LoD and LoQ, when examined on the instances of the 1-gas, 2-gases and 3-gases datasets. In Fig. 6, we present the regression plots obtained using KNN regression, where the x and y axes denote the expected and obtained chemical concentrations, respectively, shown separately for Acetone, Toluene, Ethanol and Chloroform over the 1-gas, 2-gases, and 3-gases datasets.

Fig. 6: Prediction plots of KNN regression: 1st Column: 1-gas dataset, 2nd Column: 2-gases dataset and 3rd Column: 3-gases dataset. Row-wise, the prediction of Acetone, Toluene, Ethanol and Chloroform, respectively.

As mentioned earlier, we have used ANN, Random Forest, Decision Tree, and Linear Regression for comparative prediction analysis. The ANN can learn and adapt to new data, making it a powerful tool for solving complex problems. However, an ANN requires a lot of data and computational power to train and optimize, and its results may not always be interpretable. Here, in the ANN model, we had one neuron on the output layer that matched the concentration of the varying gas. The model comprised six hidden layers containing 128, 256, 512, 64, and 32 neurons. All hidden layers employed the ReLU (Rectified Linear Unit) activation function to capture non-linearity[57]. We utilized a linear activation function in the output layer. The learning parameters for the ANN model were optimized on the training set using the Adam optimization function. Here, the training effectiveness was assessed using the MSE loss function.
The following hyperparameters were empirically fixed on the validation set: learning rate \(=10^{-3}\), Adam's first and second moment decay rates of 0.9 and 0.999, and epsilon (to avoid division by zero) \(=10^{-7}\). In linear regression[55], we model the relationship between the dependent variable and one or more independent variables. Here, we identify the line of best fit that minimizes the sum of squared errors between the predicted and actual values. In decision tree regression[56], we use a tree-like model of decisions and their possible consequences for prediction. However, decision trees can be prone to overfitting and may not be accurate in certain situations. Random forest[51] ensembles multiple decision trees to improve performance and reduce overfitting. It randomly selects a subset of features and data samples for each tree to make it robust to noise and outliers. It also offers feature importance ranking and can handle missing data. However, it may perform poorly on imbalanced datasets and can be computationally expensive for large datasets. In Tables S4, S5 and S6, we compare the experimental results obtained on the 1-gas, 2-gases and 3-gases datasets using the KNN regression, ANN, random forest, decision tree, and linear regression models. The evaluation results regarding the metrics RMSE, MSE, MAE, NRMSE, R\({}^{2}\), LoD and LoQ are shown there for predicting Acetone, Toluene, Ethanol, and Chloroform gases. Overall, it can be observed from these tables that KNN regression outperformed the other models over all the datasets. For better visibility, we summarize Tables S4, S5 and S6 and compare the results concerning only R\({}^{2}\) in Table 3. The KNN-based regression technique achieved exceptional performance across all three datasets, achieving an R\({}^{2}\) of more than 0.99, in stark contrast to the contemporary regression models, such as ANN, random forest, decision tree, and linear regression. Only for chloroform prediction in the 2-gases dataset did random forest perform slightly better than KNN regression, and even there the performance of the two models was quite similar.
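For reference, the following is a minimal Keras sketch of the ANN regressor described above. The paper mentions six hidden layers but lists five widths (128, 256, 512, 64, 32); the sketch simply uses the listed widths, ReLU activations, a single linear output neuron, and the stated Adam/MSE settings, so it should be read as an illustration rather than the exact architecture.

```python
# Minimal sketch of the ANN regressor (TensorFlow/Keras), using the hyperparameters stated in the text.
import tensorflow as tf

def build_ann(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(n_features,))])
    for width in (128, 256, 512, 64, 32):          # hidden layer widths listed in the text
        model.add(tf.keras.layers.Dense(width, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="linear"))  # predicted varying-gas concentration
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9,
                                           beta_2=0.999, epsilon=1e-7),
        loss="mse")
    return model

# Usage (X_train/X_val and the PPM targets are assumed from the earlier preprocessing sketch):
# model = build_ann(X_train.shape[1])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=256)
```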
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
**Dataset** & **Gas** & **KNN Regression** & **ANN** & **Random Forest** & **Decision Tree** & **Linear Regression** \\
\hline
**1-gas** & Acetone & **0.99997** & 0.43819 & 0.99412 & 0.71512 & 0.82034 \\
 & Toluene & **0.99997** & 0.00000 & 0.99348 & 0.71391 & 0.81937 \\
 & Ethanol & **0.99997** & 0.00000 & 0.99426 & 0.71547 & 0.81945 \\
 & Chloroform & **0.99990** & 0.99435 & 0.99428 & 0.71526 & 0.82019 \\
\hline
**2-gases** & Acetone & **0.99996** & 0.24191 & 0.99992 & 0.88192 & 0.97698 \\
 & Toluene & **0.99998** & 0.39794 & 0.99993 & 0.88135 & 0.97672 \\
 & Ethanol & **0.99998** & 0.21728 & 0.99997 & 0.87986 & 0.97710 \\
 & Chloroform & 0.99992 & 0.999656 & **0.99993** & 0.88072 & 0.97695 \\
\hline
**3-gases** & Acetone & **0.99994** & 0.85374 & 0.99972 & 0.92822 & 0.97455 \\
 & Toluene & **0.99991** & 0.48192 & 0.99984 & 0.92787 & 0.97506 \\
\hline
\end{tabular}
\end{table} Table 3: Comparison of R\({}^{2}\) obtained by employed ML-based regression architectures

## 4 Discussion

Although metal oxide thin films are the most successful sensor materials, the major limitation of these materials is their lack of selectivity. The systematic way of characterizing gas sensor devices involves one-by-one exposure to each gas and characterizing the sensitivity, as shown in Fig. 1. In such cases, the sensor may show a significantly preferred sensitivity, called selectivity, towards a particular gas (as ZnO and NiO show for ethanol in the 1-gas case). However, detection gets challenging when another potentially interfering gas exists in the atmosphere. Although the interfering gas may not itself produce a high response, it adversely affects the response of the otherwise preferred (selective) detection, as seen in Fig. 3. When the ethanol gas response is studied in the presence of one or two other gases, the response is substantially reduced (by an order of magnitude). Therefore, using conventional analysis methods, gas mixtures are challenging to analyze with a single sensor or even with an array of sensors. Nevertheless, the sensors utilized in the study are robust and sensitive and show the good microstructural traits required of an ideal metal oxide material for high responsivity[46, 58]. We employed ML-based methods to analyze the sensor array response of such a complex mixture, where there is maximum cross-reactivity for one sensor (CuO) while the other two (NiO and ZnO) show some preferred selectivity towards ethanol. Our analysis involved ML algorithms like RF, KNN, Decision Tree, Linear Regression, Logistic Regression, Naive Bayes, LDA, ANN, and SVM. Among these, RF and KNN gave the best results with extraordinary accuracy of more than 99%. The algorithms could classify and identify the gas type and reasonably estimate the gas concentration of the varying chemicals for the 1-gas, 2-gases and 3-gases datasets.
The complexity of the data and the resources used in this study, such as the number of sensors in the array, the number of gases studied, and the models used, have been compared with those of other studies reported in the literature. For instance, Djedidi O. _et al._[59] created a method to use a single temperature-modulated MOS sensor and a data-driven model to detect and identify various gas species and their mixtures. By taking the characteristics from dynamic curves and introducing a four-sensor array, Chu J. _et al._[59] could distinguish between 11 different NO\({}_{2}\) and CO mixtures and identify different target gases using BPNN. The categorization of VOC species and concentrations using a 108-device graphene-based sensor array swept at high speeds has been shown in the study conducted by Capman N. S. S. _et al._[60]. To increase selectivity, the array was functionalized with 36 different chemical receptors. All devices were virtually probed simultaneously to gather a cross-reactive data set for ML algorithms. To discriminate between 5 distinct reducing gases, two multi-sensor chips made of SnO\({}_{2}\) NWs covered with Ag and Pt NPs were combined by Thai N. X. _et al._[61]. The "brain" of the system (based on the SVM) is trained using a first dataset of 4D points, and the sensor performance is tested using any subsequent point. With practical machine learning algorithms and MDS (Molecular Dynamics Simulations), Huang S. _et al._[62] have shown an ultrasensitive, highly discriminative graphene nanosensing platform for detecting and identifying NH\({}_{3}\) and PH\({}_{3}\) at room temperature. Kanaparthi _et al._[63] have developed an analytical technique that uses a single chemiresistive ZnO gas sensor to detect NH\({}_{3}\), CO\({}_{2}\) and H\({}_{2}\)S gases selectively at significantly low power consumption. In order to anticipate the gas present in the air, ML techniques including NB, LR, SVM and RF were applied to data comprising sensor responses and ternary logic. Over a single chemiresistive sensor, Acharya S. and coworkers[64] used signal transform methods combined with ML technologies, which allowed for accurate quantification and selective identification of the tested VOCs. The feature extraction technique suggested in the study by Xu Y. _et al._[65] is based on KPCA. Qualitative identification of mixed gas is made possible by the binary mixed-gas identification model of the KNN classification method. A regression approach based on MVRVM was suggested to obtain quantitative gas concentration detection following the qualitative identification. Sett A. _et al._[66] used ZnO nanorods to create a sensitive, stable, and reliable VOC sensor. In reaction to three VOCs, the sensor showed high responsiveness and stability. Features were extracted and supplied to PCA as input. Ref. [67] shows that applying statistical shape space pre-processing to the signal of temperature-modulated metal oxide gas sensors improves the selectivity of gas identification with an ANN-based ML algorithm compared to other signal processing methods like PCA, DWT, polynomial curve fitting, and data normalization. Intrinsic CuO and ZnO heterostructures with different weight percentages of CuO-ZnO were made and used as resistance sensors to detect four volatile organic compounds; the SVM algorithm with stacked k-fold cross-validation was used for classification, and the MLR method was used for measurement[68].
On the other hand, in this work, we have used only three sensors that operate at the same temperature and show a distinct mix of selective (NiO and ZnO) and non-selective (CuO) sensors for ethanol vapors. Using two algorithms, we obtained the best possible classification (qualitative) and regression (quantitative) identification of the gases. Moreover, the gases identified in the study are highly likely to indicate underlying physiological conditions in several diseases. Therefore, such sensor and analysis studies have high significance for biomedical diagnostics and point-of-care devices. In Table 4, we present a brief comparison with some state-of-the-art methods.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline
**No. of sensors** & **No. of gases together** & **Complexity** & **Models Used** & **Ref.** \\
\hline
**1** (WO\({}_{3}\)) & **3** (CO, O\({}_{3}\), NO\({}_{2}\)) & Medium & SVM & 55 \\
\hline
**4** (Commercial MOS Sensors TGS 2600, TGS2602, TGS 2610, TGS 2620) & **2** (NO\({}_{2}\), CO) & Medium & BPNN + CNN & 56 \\
\hline
**1** (Graphene) & **36** VOC Receptors & High & PCA + RF & 57 \\
\hline
**1** (SnO\({}_{2}\) Nanowires) & **5** (Acetone, Ammonia, H\({}_{2}\), H\({}_{2}\)S, Ethanol) & Medium & SVM & 58 \\
\hline
**1** (Graphene) & **2** (NH\({}_{3}\), PH\({}_{3}\)) & Low & PCA + LDA & 59 \\
\hline
**1** (ZnO) & **3** (separate) (H\({}_{2}\)S, NH\({}_{3}\), CO\({}_{2}\)) & Low & NB + LR + SVM + RF & 60 \\
\hline
**1** (SnO\({}_{2}\)) & **4** (separate) (Formaldehyde, Methanol, Propanol, Toluene) & Low & FFT + DWT & 61 \\
\hline
**5** (Commercial MOS Sensors TGS2600, TGS2610, TGS2611, TGS2602, TGS2620) & **2** (CH\({}_{4}\), CO) & Medium & KPCA + KNN & \\
\hline
**3** (SnO\({}_{2}\), Au/SnO\({}_{2}\), AuPd/SnO\({}_{2}\)) & **7** (separate) (Toluene, Acetone, NH\({}_{3}\), Ethanol, 2-Propanol, Formaldehyde, Methanol) & & & \\
\hline
**3** (CuO, ZnO, CuO-ZnO) & **4** (Methanol, Acetonitrile, Isopropanol, Toluene) & & & \\
\hline
**3** (ZnO, NiO, CuO) & **4** (Ethanol, Acetone, Toluene, Chloroform) & & & \\
\hline
\end{tabular}
\end{table} Table 4: Comparative analysis with some state-of-the-art studies.

**SVM:** _Support Vector Machine_, **BPNN:** _Back Propagation Neural Network_, **CNN:** _Convolutional Neural Network_, **PCA:** _Principal Component Analysis_, **RF:** _Random Forest_, **LDA:** _Linear Discriminant Analysis_, **NB:** _Naive Bayes_, **LR:** _Logistic Regression_, **FFT:** _Fast Fourier Transform_, **DWT:** _Discrete Wavelet Transform_

## 5 Conclusion

In this study, we fabricated a gas sensor array consisting of three metal oxides, i.e., ZnO, NiO and CuO. Each sensor in the array was extensively characterized using state-of-the-art surface and material characterization techniques (e.g., SEM and XRD). Each of these materials is highly responsive to a large number of gases, generating cross-reactive and complex chemiresistive signals. This is a boon as well as a bane: the array can be used to detect many gases, but the individual responses lack conclusiveness. To handle such a complex data set, ML algorithms have been used to classify and predict the levels of individual gases in mixtures. To get the best out of the several algorithms that we tried, their parameters were extensively optimized for the classification and prediction of the different analyte gases. We anticipate that the proposed sensor array can be used for the analysis of different VOCs in complex mixtures (e.g., breath) for non-invasive diagnosis of disease and its monitoring at the point of care.
In our future studies, we plan to miniaturize the proposed sensor array and modify the sensor surface with different nanomaterial-based coatings to enhance the signal-to-noise ratio, and to generate a varied data set from complex mixtures that will further be analyzed using advanced ML algorithms to classify and predict the individual gas levels. The developed sensor array will be used to diagnose different diseases non-invasively at the point of need, which can improve individuals' quality of life by reducing the cost of diagnostics and treatment monitoring for certain diseases, and by reducing the time to diagnosis through on-site diagnostic capabilities.

## Supplementary Material

The Supplementary Material includes the sample fabrication details, the device design, the I-V characteristics, the gas sensing data, as well as the different ML model parameters, etc.

## Acknowledgment

The authors are thankful to the SERB core research grant (CRG/2022/006973), Govt. of India, for the funding support received. The Central Instrumentation Facility of IISER Thiruvananthapuram is also acknowledged for the XRD and SEM facilities.

## Data availability statement

The data is available with the corresponding author upon reasonable request.

## Conflict of Interest

The authors declare no conflict of interest.
2305.17222
Karma: Resource Allocation for Dynamic Demands
We consider the problem of fair resource allocation in a system where user demands are dynamic, that is, where user demands vary over time. Our key observation is that the classical max-min fairness algorithm for resource allocation provides many desirable properties (e.g., Pareto efficiency, strategy-proofness, and fairness), but only under the strong assumption of user demands being static over time. For the realistic case of dynamic user demands, the max-min fairness algorithm loses one or more of these properties. We present Karma, a new resource allocation mechanism for dynamic user demands. The key technical contribution in Karma is a credit-based resource allocation algorithm: in each quantum, users donate their unused resources and are assigned credits when other users borrow these resources; Karma carefully orchestrates the exchange of credits across users (based on their instantaneous demands, donated resources and borrowed resources), and performs prioritized resource allocation based on users' credits. We theoretically establish Karma guarantees related to Pareto efficiency, strategy-proofness, and fairness for dynamic user demands. Empirical evaluations over production workloads show that these properties translate well into practice: Karma is able to reduce disparity in performance across users to a bare minimum while maintaining Pareto-optimal system-wide performance.
Midhul Vuppalapati, Giannis Fikioris, Rachit Agarwal, Asaf Cidon, Anurag Khandelwal, Eva Tardos
2023-05-26T19:30:48Z
http://arxiv.org/abs/2305.17222v2
# Karma: Resource Allocation for Dynamic Demands ###### Abstract We consider the problem of fair resource allocation in a system where user demands are dynamic, that is, where user demands vary over time. Our key observation is that the classical max-min fairness algorithm for resource allocation provides many desirable properties (_e.g._, Pareto efficiency, strategy-proofness, and fairness), but only under the strong assumption of user demands being static over time. For the realistic case of dynamic user demands, the max-min fairness algorithm loses one or more of these properties. We present Karma, a new resource allocation mechanism for dynamic user demands. The key technical contribution in Karma is a credit-based resource allocation algorithm: in each quantum, users donate their unused resources and are assigned credits when other users borrow these resources; Karma carefully orchestrates the exchange of credits across users (based on their instantaneous demands, donated resources and borrowed resources), and performs prioritized resource allocation based on users' credits. We theoretically establish Karma guarantees related to Pareto efficiency, strategy-proofness, and fairness for dynamic user demands. Empirical evaluations over production workloads show that these properties translate well into practice: Karma is able to reduce disparity in performance across users to a bare minimum while maintaining Pareto-optimal system-wide performance. ## 1 Introduction Resource allocation is a fundamental problem in computer systems, spanning private and public clouds, computer networks, hypervisors, etc. There is a large and active body of research on designing resource allocation mechanisms that achieve Pareto efficiency (high resource utilization) and strategy-proofness (selfish users should not be able to benefit by lying about their demands) while ensuring that resources are allocated fairly among users, _e.g._, [31, 33, 40, 58, 60, 67, 68]. For a system containing a single resource, the two most popular allocation mechanisms are strict partitioning [9, 72] and max-min fairness [31, 33, 37, 41, 50, 51, 58, 60, 67]. The former allocates the resource equally across all users ("fair share"), independent of their demands; this guarantees strategy-proofness and fairness, but not Pareto efficiency since resources can be underutilized when one or more users have demands lower than the fair share. Max-min fairness alleviates limitations of strict partitioning by taking user demands into account: it maximizes the minimum allocation across users while ensuring that each user's allocation is no more than their demand. A classical result shows that resource allocation based on max-min fairness guarantees each of the three desirable properties--Pareto efficiency, strategy-proofness, and fairness. These powerful properties have, over decades, motivated efforts in both systems and theory communities on generalizations of max-min fairness for allocating multiple resources [31, 32, 33], for incorporating application performance goals and deadlines [47, 48, 32, 40], and for new models of resource allocation [67, 22, 17, 25, 34, 60], to name a few. This paper explores a complementary problem--resource allocation of a single elastic resource in a system where user demands are dynamic, that is, vary over time. 
Dynamic user demands are the norm in most real-world deployments [12, 16, 42, 46, 61, 71, 72, 79]; for instance, analysis of production workloads in SS2 reveals that user demands vary by as much as \(17\times\) within minutes, with majority of users having demands with standard deviation \(0.5-43\times\) of the average over time. We show in SS2 that, for systems with such dynamic user demands, resource allocation based on the max-min fairness algorithm fails to guarantee one or more of its properties: (1) if the allocation is done based on demands at \(t=0\), Pareto efficiency and strategy-proofness are no longer guaranteed; and, (2) if the allocation is done periodically, _long-term_ fairness is no longer guaranteed--for \(n\) users with the same average demand, the max-min fairness algorithm may allocate some user as much as \(\Omega(n)\) more resources than other users over time. We present Karma, a new resource allocation mechanism for dynamic user demands. The key technical contribution of Karma is a credit-based resource allocation algorithm: in each quantum, users receive credits when they donate a part of their fair share of resources (_e.g._, if their demand is less than their fair share); users can use these credits to borrow resources in any future quantum when their demand is higher than their fair share. When the supply of resources from donors is equal to the demand from borrowers, it is easy to exchange resources and credits among users. The key algorithmic challenge that Karma resolves is when supply is not equal to demand--in such scenarios, Karma carefully orchestrates resources and credits between donors and borrowers: donors are prioritized so as to keep credits across users as balanced as possible, and borrowers are prioritized so as to keep the resource allocation as fair as possible. We theoretically establish Karma guarantees for dynamic user demands. Karma guarantees Pareto efficiency at all times: in each quantum, it allocates resources such that it is not possible to increase the allocation of a user without decreasing the allocation of at least another user. For strategy-proofness, Karma guarantees that a selfish user cannot increase their aggregate resource allocation by _over_-reporting their demands in any quantum. In addition, we show a new surprising phenomenon (that may primarily be of theoretical interest): if a user had perfect knowledge about the future demands of all other users, the user can increase its own aggregate allocation by a small constant factor by _under_-reporting its demand in some quanta; however, for \(n\) users, imprecision in this future knowledge could lead to the user losing \(\Omega(n)\) factor of their aggregate resource allocation by under-reporting their demand in any quantum. Put together, these results enable Karma to provide powerful guarantees related to strategy-proofness. Finally, for fairness, we prove that given a set of (past) allocations, Karma guarantees an optimally-fair resource allocation. We also establish that Karma guarantees similar properties even when multiple selfish users can collude, and even when different users have different fair shares. We have realized Karma on top of Jiffy [42], an open-sourced multi-tenant elastic memory system; an end-to-end implementation of Karma is available at [https://github.com/resource-disaggregation/karma](https://github.com/resource-disaggregation/karma). 
Evaluation of Karma over production workloads demonstrates that Karma's theoretical guarantees translate well into practice: it matches the max-min fairness algorithm in terms of resource utilization, while significantly improving the long-term fairness of resources allocated across users. Karma's fairer resource allocation directly translates to application-level performance; for instance, over evaluated workloads, Karma keeps the _average_ performance (across users) the same as the max-min fairness algorithm, while reducing performance _disparity_ across users by as much as \(\sim\)2.4\(\times\). Karma also incentivizes users to share resources: our evaluation shows that (1) Karma-conformant users achieve much more desirable allocation and performance compared to users who prefer a dedicated fair share of resources; and, (2) if users were to turn Karma-conformant, they can improve their performance by better matching their allocations with their demands over time. ## 2 Motivation We begin by outlining our motivating use cases, followed by an in-depth discussion on the limitations of the classic max-min fairness algorithm for dynamic user demands. **Motivating use cases.** Fair resource allocation is an important problem in private clouds where resources are shared by multiple users or teams within the same organization [12, 16, 17, 31, 32, 33, 34, 37, 40, 41, 46, 47, 60, 61, 71, 72, 79, 80]; our primary use cases are from such private clouds. Karma may also be useful for emerging use cases from multi-tenant public clouds where spare resources may be allocated to tenants while providing performance isolation [8, 14, 39, 42, 58, 64, 65, 67]. We discuss motivating scenarios in both contexts below. One scenario is shared analytics clusters. For instance, companies like Microsoft, Google, and Alibaba employ schedulers [33, 36, 70, 71, 80] that allocate resources across multiple internal teams that run long-running jobs (_e.g._, for data analytics [81, 23]) on a shared set of resources. Consider memory as a shared resource; in many of these frameworks, main memory is used to cache frequently accessed data from slower persistent storage and to store intermediate data generated during job execution. Indeed, increasing the allocated memory improves job performance; however, since memory is limited and is shared across multiple teams, ensuring resource allocation fairness is also a key requirement. Moreover, since these jobs are usually long-running, their performance depends on long-term memory allocations, rather than instantaneous allocations [16, 33, 46]. Another use case is shared caches: many companies (_e.g._ Facebook [9, 53, 12] and Twitter [79]) operate clusters of in-memory key-value caches, such as memcached or Redis, serving a wide array of internal applications. In this use case, the memory demand of each application may be computed as the amount of memory that would be required to fit hot objects within the cache [18, 19, 53, 79]. In such settings, efficient and fair sharing of caches is of utmost importance [9, 19, 53, 72]: to maintain service level agreements, it is important to have consistently good performance over long periods of time, rather than excellent performance at some times and very poor performance at other times (see [9, 19, 53, 72] for more discussion on the importance of long-term performance). Third, fair resource allocation while ensuring high utilization is also a goal in inter-datacenter bandwidth allocation [41, 37, 50]. 
Existing traffic engineering solutions used in production environments perform periodic max-min fair resource allocation to account for dynamic user demands [41, 50, 37]. Our work demonstrates that periodically performing max-min fair resource allocation over such dynamic demands leads to unfair resource allocation across users. Finally, an interesting use case in the public cloud context is that of burstable VMs [2, 4] that use virtual currency to enable resource allocation over dynamic user demands. These VMs share resources with VMs from other users and are charged on an instance-specific baseline. When resource utilization is below the baseline, users accumulate virtual currency that they can later use to gain resources beyond the baseline during periods of high demand. Given that Burstable VMs are primarily useful for dynamic user demands, they will likely need resource allocation mechanisms that guarantee high utilization, strategy-proofness, and fair resource allocation. **Dynamic user demands.** Increasingly many applications running data analytics or key-value caches operate on data collected from social media, application and network logs, mobile systems, etc. A unique characteristic of these data is that they are less controllable by the organization because they are generated by entities outside of the organization. As a result, applications can observe highly time-varying dynamic resource demands [12, 16, 42, 46, 61, 64, 71, 72, 79]. To build a deeper understanding of variation in user demands over time, we analyze two publicly-available production workloads: (1) Google [61] resource usage information across 8 clusters (\(1000-2000\) users per cluster) over a 30 day period; and, (2) Snowflake [72], a cloud-based database query engine that provides resource usage statistics for over 2000 users over a 14 day period. To characterize user demand variability over time, we compute--for each user--the ratio of the standard deviation and mean of their demands over the entire period. Figure 1 (left) shows that \(40-70\%\) of all users in both Google and Snowflake workloads have a standard deviation in CPU and memory demands at least \(0.5\times\) their mean, indicating high variability in demands for most users. Furthermore, the standard deviation in demands of as many as \(20\%\) of the users can be as high as their mean demand, with some users having extremely high variance in demands (standard deviations up to \(12-43\times\) the mean). Similar observations have been made for time-varying user demands in inter-datacenter networks; for instance, production studies [5] show that, on average, user demands vary by \(35\%\) within 5-minute intervals, with some demands varying by as much as \(45\%\) within a short period of time. Figure 1 (center) shows the CPU and memory demands for a randomly-sampled user from the Snowflake trace over a 15 minute window (we show only one user and only 15 minute window for clarity; analyzing a sample of 100 users, we find \(87\%\) of the users to have similar demand patterns). The figure shows that user demands can change dramatically over tens of seconds, by as much as \(6\times\) and \(2\times\) for compute and memory, respectively. Similarly, we see significant variation in demands even for a random user from the Google trace (shown in Figure 1 (right)). 
Figure 1: **Analysis of Google and Snowflake workloads suggests that a large fraction of users have dynamic demands (left) that can change dramatically over short timescales (center, right)** (Left) CDFs, across users, of the ratio of standard deviation and mean of each user's demand. (Center) For a randomly sampled user in the Snowflake trace, the variation in the user's CPU and memory demands (normalized by minimum demand) over a 15 minute period. (Right) For a randomly sampled user in the Google trace, the variation in the user's CPU and memory demands (normalized by minimum demand) over a 2 hour period.

**Max-min fairness guarantees fail for dynamic user demands.** The classical max-min fairness algorithm for resource allocation provides many desirable properties, _e.g._, Pareto efficiency, strategy-proofness, and fairness. However, buried under the proofs is the assumption that user demands are static over time, an assumption that does not hold in practice (as demonstrated in Figure 1). For the realistic case of dynamic user demands, max-min fairness can be applied in two ways, each of which leads to violating one or more of its properties. We will demonstrate this using the example in Figure 2; here, time is divided into five quanta and three users have demands varying across quanta. First, one can naively perform max-min fair allocation just once based on user demands at quantum \(t=0\). This results in max-min fairness losing both Pareto efficiency and strategy-proofness. In the example of Figure 2, since allocations will only be done based on the demands specified by the users at \(t=0\), if users were to specify their true demands, user C will obtain an allocation of 1 unit leading to a total useful allocation of 3 units over the entire duration (as shown in Figure 2 (middle, top)); if user C were to lie and over-report their demand at \(t=0\) as 2 units, then they can achieve a more desirable total useful allocation of 5 units (Figure 2 (middle, bottom)). This breaks strategy-proofness. In addition, max-min fairness is also not Pareto efficient: for many quanta, resources allocated to users will be underutilized as is evident in Figure 2 (middle). A better way to apply max-min fairness for dynamic user demands is to periodically reallocate resources based on users' instantaneous demands (_e.g._, every quantum of time periods, as in several operating systems and hypervisors [3, 73]). This trivially guarantees Pareto efficiency and strategy-proofness but results in extremely unfair allocation across users. Figure 2 (right, top) shows an example where max-min fairness can result in \(2\times\) disparity between resources allocated to users over the 5 quanta--user A receives a total allocation of 10 slices, while user C receives a total allocation of only 5 slices, despite them having the same average demand; this example can be easily extended to demonstrate that max-min fairness can, for \(n\) users, result in resource allocations where some user gets a factor of \(\Omega(n)\) larger amount of resources than other users (proof in §A.1). Such disparity in resource allocations also leads to disparity in application-level performance across users since, as discussed above in use cases, many applications require consistently good performance over long periods of time, rather than excellent performance at some times and very poor performance at other times [29, 33, 69, 22].
We will demonstrate, in the evaluation section, that users experience significant disparity in application-level performance due to such disparate resource allocations. For the rest of the paper, we focus on long-term fairness; informally, an allocation is considered fair if all users have the same aggregate resource allocation over time. Our goal is to design a resource allocation mechanism that, for dynamic user demands, guarantees Pareto efficiency, strategy-proofness, and fairness. ## 3 Karma Karma is a resource allocation mechanism for dynamic user demands. Karma uses _credits_ (SS3.1, SS3.2)--users receive credits when they donate a part of their fair share of resources (_e.g._, when their demand is less than their fair share), and can use these credits to borrow resources beyond their fair share during periods of high demand. Karma carefully orchestrates the exchange of resources and credits between donors and borrowers: donors are prioritized in a manner that ensures credit distribution across users remains as balanced as possible, and borrowers are prioritized in a manner that keeps the resource allocation as fair as possible. We will prove theoretically in SS3.3 that, while simple in hindsight, this allocation mechanism simultaneously achieves Pareto efficiency, strategy-proofness, and fairness for dynamic user demands. ### Preliminaries We consider the following setup for the problem: we have \(n\) users sharing a single resource (CPU, memory, GPUs, etc.); each user has a fair share of \(f\) resource units (each unit is referred to as a _slice_), and thus the pool has \(n\times f\) slices of the resource (as we discuss in SS3.4, all our results hold for users having different fair shares). Time is divided into quanta, users demand a certain number of resource slices every quantum, and Karma performs resource (re)allocation at the beginning of each quantum. While user demands during each quantum can be arbitrary, unsatisfied demands in one quantum do not carry over to the next. Similar to prior work [60, 67, 31, 58], we assume that users are not adversarial (that is, do not lie about their demands simply to hurt others' allocations), but are otherwise selfish and strategic (willing to misreport their demands to maximize their allocations). ### Karma design Let \(0\leq\alpha\leq 1\) be a parameter. Karma guarantees that each user is allocated an \(\alpha\) fraction of its fair share \((=\alpha\cdot f)\) in each quantum; we refer to this as the guaranteed share. Karma maintains a pool of resource slices--karmaPool--that, at any point in time, contains two types of slices: * **Shared slices** are the slices in the resource pool that are not guaranteed to any user. It is easy to see that the number of shared slices in the system is \(n\cdot f-n\cdot\alpha\cdot f=n\cdot(1-\alpha)\cdot f\). * **Donated slices**, that are donated by users whose demands are smaller than their guaranteed share. We use these two sets of slices in the following manner. In any given quantum, if a user has demand less than its guaranteed share, then the user is said to be "donating" as many slices as the difference between the user's guaranteed share and demand in that quantum. A user that has demand larger than its guaranteed share is said to be "borrowing" slices beyond its guaranteed share, which the system can potentially supply using either shared slices or donated slices. 
Figure 2: **Classical max-min fairness guarantees break for dynamic user demands.** Here, 6 units of a resource are shared by 3 users (fair share of 2). Discussion in §2. #### 3.2.1 Karma credits Karma allocates resources not just based on users' instantaneous demands, but also based on their past allocations. To maintain past user allocation information, Karma uses credits. Users earn credits in three ways. First, each user is bootstrapped with a fixed number of initial credits upon joining the system (we discuss the precise number once we have enough context, in SS3.4); second, each user is allocated \((1-\alpha)\cdot f\) free credits every quantum as compensation for contributing \((1-\alpha)\) fraction of its fair share to shared slices. Finally, users earn one credit when some other user borrows one of their _donated_ slices (one credit per quantum per slice). Unlike earning credits, there is only one way for any user to lose credits: for every slice borrowed from the karmaPool (donated or shared), the user loses one credit. #### 3.2.2 Prioritized resource allocation We now describe Karma's resource allocation algorithm, that orchestrates resources and credits across users (Algorithm 1). To make the discussion succinct, we refer to the sum of user demands beyond their guaranteed share as "borrower demand"; that is, to compute borrower demand for any given quantum, we take all users with demand greater than their guaranteed share and sum up the difference between their demand (in that quantum) and \(\alpha\cdot f\). In quanta when borrower demand is equal to the supply (number of slices in karmaPool), Karma's decision-making is trivial: simply allocate all slices in karmaPool to the borrowers, and update credits for all users as described in the previous subsection. The key algorithmic challenge that Karma resolves is when the supply is either more or less than the borrower demand. We describe Karma allocation mechanism for such scenarios next and then provide an illustrative example. **Orchestrating resources and credits when supply \(>\) borrower demand.** When supply is greater than borrower demand, there are enough slices in karmaPool to satisfy the demands of all borrowers. In such a case, Karma prioritizes the allocation of donated slices over shared slices (so that donors get credits), and across multiple donated slices, prioritizes the allocation of a slice from the donor that has the smallest number of credits--this allows "poorer" donors to earn more credits, and moves the system towards a more balanced distribution of credits across users. Intuitively, credits capture the allocation obtained by a user until the last quantum--users who obtained lower allocations in the past will have a higher than average (across users) number of credits, while those who received a surplus of allocations will have a below-average number of credits. Hence, balancing the number of credits across users over time allows Karma to move towards a more equitable set of total allocations across users. Once all donated slices are allocated, Karma allocates shared slices to satisfy the remaining borrower demands. **Orchestrating resources and credits when supply \(<\) borrower demand.** When supply is less than demand, karmaPool does not have enough slices to satisfy all borrower demands. In such a scenario, Karma prioritizes allocating slices to users with the maximum number of credits. 
This strategy essentially favors users that had fewer allocations in the past (and thus, a larger number of credits), hence moving the system towards a more balanced allocation of resources across users, promoting fairness. At the same time, reducing the credits for the users with the most credits also moves the system to a more balanced distribution of credits across users.

**Illustrative example.** We now illustrate through a concrete example. The running example in Figure 3 shows the execution of Karma's algorithm for the example from Figure 2 for \(\alpha=0.5\): that is, three users A, B, and C, each with a fair share of 2 slices (\(f=2\)) and a guaranteed share of 1 slice. Recall that, since \((1-\alpha)\cdot f=1\), each user receives 1 credit every quantum, and suppose all users are bootstrapped with 6 initial credits. In the first quantum, C's demand is equal to the guaranteed share, while A and B request 2 and 1 slices beyond the guaranteed share, respectively. Since supply (\(=3\) shared slices in karmaPool) is equal to borrower demand, Karma uses the shared slices to allocate slices beyond the guaranteed share for A and B and satisfies their demands. This results in a final allocation of 3 slices for \(A\), 2 slices for \(B\), and 1 slice for \(C\). \(A\) loses 2 credits, \(B\) loses 1 credit, and no one gains any credits. In the second quantum, A demands 3 slices, while B and C donate 1 slice each. The total supply (=5, with 2 donated slices and 3 shared slices) exceeds the borrower demand. A is allocated 3 slices and loses 2 credits (since its allocation is 2 slices above its guaranteed share). B and C receive 1 credit each since their donated slices are used. Similarly, in the third quantum, B demands 3 slices, while A and C donate 1 slice each. Since total supply exceeds borrower demand, B receives the 3 slices it asked for, and loses 2 credits; A and C gain 1 credit each. The fourth quantum is important: here, demand exceeds supply, and there are no donated slices. Now, unlike classic max-min fairness, Karma will prioritize the allocation of resources based on the credits of each user. Since at the start of this quantum, C has 11 credits, while A and B have only 6 and 7 credits respectively, C will be able to get 3 extra slices from the pool of shared slices by using 3 credits and achieve an allocation of 4. A and B will get their guaranteed allocation of 1 and do not gain or lose any credits. In the fifth quantum, once again, demand exceeds supply. C has 9 credits, B has 8 credits, and A has 7 credits. Karma first prioritizes allocating to C, giving it 1 extra slice, at which point both C and B have equal credits (8). Next, they both get 1 extra slice each, at which point the supply is exhausted. The final resulting allocation is 1 slice for A, 2 slices for B, and 3 slices for C. In the end, A, B, and C end up with the exact same total allocation (8 slices) and number of credits (unlike max-min fairness, where user allocations had a disparity of \(2\times\)).

### Karma Properties & Guarantees

In this section, we present a theoretical analysis of Karma. Recall from §3.1 that, similar to all prior works, users are considered selfish and strategic (that is, are willing to misreport their demands to maximize their allocations), but not adversarial (that is, do not lie about their demands simply to hurt others' allocations).
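To make the prioritized allocation concrete before the analysis, the sketch below replays the running example of Figure 3 under the rules of §3.2 (guaranteed shares, free credits, donor and borrower prioritization). It is our illustration, not the paper's Algorithm 1, and the per-quantum demand matrix is an assumption: Figure 2's exact demands are not reproduced here, so we use one pattern that is consistent with the narrative above. It yields the equal totals of 8 slices and the equal final credits described in the example.

```python
# Minimal replay of the Figure 3 running example (alpha = 0.5, f = 2, 3 users).
# Illustrative only: the `demands` matrix is assumed, and the allocation loop is a
# simplified rendering of the rules in Section 3.2, not the paper's Algorithm 1.

N_USERS, F, ALPHA = 3, 2, 0.5
G = int(ALPHA * F)                        # guaranteed share per user (1 slice)
SHARED = int(N_USERS * (1 - ALPHA) * F)   # shared slices per quantum (3 slices)
FREE = int((1 - ALPHA) * F)               # free credits per user per quantum (1)

demands = [          # users A, B, C (assumed; consistent with the narrative)
    [3, 2, 1],
    [3, 0, 0],
    [0, 3, 0],
    [2, 2, 6],
    [2, 3, 4],
]

credits = [6] * N_USERS                   # bootstrapping credits
totals = [0] * N_USERS

for d in demands:
    alloc = [min(d[i], G) for i in range(N_USERS)]
    donors = {i: G - d[i] for i in range(N_USERS) if d[i] < G}
    want = {i: d[i] - G for i in range(N_USERS) if d[i] > G}   # borrower demand
    supply = SHARED + sum(donors.values())
    if sum(want.values()) <= supply:
        # Enough slices: satisfy every borrower; donated slices are consumed first,
        # crediting the donors with the fewest credits first.
        borrowed = sum(want.values())
        for i, extra in want.items():
            alloc[i] += extra
            credits[i] -= extra
        for i in sorted(donors, key=lambda j: credits[j]):
            used = min(donors[i], borrowed)
            credits[i] += used
            borrowed -= used
    else:
        # Not enough slices: hand them out one at a time to the borrower
        # currently holding the most credits.
        for _ in range(supply):
            i = max(want, key=lambda j: credits[j])
            alloc[i] += 1
            credits[i] -= 1
            want[i] -= 1
            if want[i] == 0:
                del want[i]
    for i in range(N_USERS):
        credits[i] += FREE
        totals[i] += alloc[i]

print(totals)    # -> [8, 8, 8]
print(credits)   # -> [8, 8, 8]
```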
For the purpose of our theoretical analysis, we assume that Karma is initialized with a large enough number of initial credits so that users do not run out of credits during the execution of the algorithm (we discuss how to achieve this in practice in SS 3.4). All our results hold for \(\alpha=0\); extending our results to \(\alpha>0\) is an interesting open question. Finally, while we provide inline intuition for each of our results, full proofs are presented in SSA.2. We define Pareto efficiency on a per-quantum basis. An allocation is said to be Pareto efficient if it is not possible to increase the allocation of a user without decreasing the allocation of at least one other user by a similar total amount during that quantum. Note that, Pareto efficiency on a per-quantum basis implies Pareto efficiency over time. **Theorem 1**.: _Karma is Pareto efficient._ Karma's Pareto efficiency follows trivially from the observation that similar to max-min fairness, Karma allocation satisfies the two properties: (1) no user is allocated more resources than its demand, and (2) either all resources are allocated or all demands are satisfied. For strategy-proofness, we make two important notes. First, if one assumes that the system has a priori knowledge of all future user demands, the resource allocation problem can be solved trivially using dynamic programming; however, for many use cases, it is hard to have a priori knowledge of all future user demands. This leads to our second note: Karma is solving an "online" problem (that is, it does not assume a priori knowledge of future user demands), and thus, we prove online strategy-proofness [7] defined as follows: assume that all users are honest during quanta 0 to \(q-1\); then, a mechanism is said to be online strategy-proof if, for any quantum \(q\), a user cannot increase its allocation during quantum \(q\) by lying about its demand during quantum \(q\). **Theorem 2**.: _Karma is online strategy-proof._ To prove Theorem 2, we actually prove a stronger result stated below. Karma's online strategy-proofness trivially follows from this. Figure 3: **Karma resource allocation for the running example of Figure 2:** Recall that there are 6 resource slices, 3 users each with average demand and fair share equal to 2. We show the case of the guaranteed share being 1 (\(\alpha\!=\!0.5\)), with 6 bootstrapping (initial) credits for each user. Note that each user receives 1 free credit every quantum. Karma achieves significantly improved fair allocation than max-min fairness—it allocates each user an equal allocation of 8 resource slices over time. **Lemma 1**.: _A user cannot increase its useful resource allocation by specifying a demand higher than its real demand in any quantum._ The proof for the lemma is a bit involved, but intuitively, it shows the following. The immediate effect of a user specifying a demand higher than its actual demand is that if the user is allocated more resources than its actual demand, these extra resources do not contribute to its utility, but do put the user into a disadvantageous position: not only can this user lose credits (either because it's asking for resources beyond its guaranteed share, or because it could have gained credits if this extra resource could have been allocated to some borrower), but also because other users get fewer resources; this makes other users be favored by the allocation algorithm in the future while making the lying user less favored. 
Thus, the user cannot increase its long-term "useful" allocation by specifying a demand higher than the real demand in any quantum. Specifically, it is possible that when a user over-reports its demand during quantum \(q^{\prime}\), the user receives an increased instantaneous allocation during some future quantum \(q\!>\!q^{\prime}\); however, we are able to show that, in this case, the user will also receive reduced instantaneous allocation(s) during other quantum(s) in between \(q^{\prime}\) and \(q\), leading to either a lower or equal total allocation over the period between \(q^{\prime}\) and \(q\). The hardness in the proof stems from carefully analyzing such cascade effects: a small change in users' resource allocation in any quantum can result in complex changes in future allocations that may lead to higher instantaneous but equal or lower total allocations in future quanta. Once we prove this lemma, the proof for Karma's online strategy-proofness follows immediately. While analyzing Karma properties, we encountered a new, surprising, phenomenon that may be of further theoretical interest: we show that a user that _knows all future demands of all other users_ can report a demand that is lower than its actual demand in the current quantum to increase its allocation in future quanta by a small constant factor. However, any imprecision in the knowledge of all future demands of all other users could result in the user losing a factor of \(\Omega(n)\) of its total allocation. **Lemma 2**.: _A user cannot increase its total useful allocation by a factor more than \(1.5\times\) by specifying a demand less than its real demand in any quantum. Gaining this useful allocation requires the user to know the future demands of all users. If the user does not have a precise knowledge of all future demands of all users, it can lose its useful allocation by a factor of \(\frac{n+2}{2}\) (for \(n\!\geq\!3\)) by specifying a demand less than its real demand._ We provide intuition for this phenomenon using an example (Figure 4). In the left figure, user A is able to gain 1 extra slice in its overall allocation by under-reporting its demand (reporting 0 instead of 8) in the first quantum. By under-reporting, its allocation in the first quantum reduces, enabling it to get more resources during the second quantum when it competes with user C. In the third quantum, it is able to recover the resources it lost in the first quantum from user B, resulting in an overall gain. To see the flip-side, if the demands of other users had been as shown in Figure 4 (right), then user A sees a \(3\times\) degradation in overall allocation. To prove the first part of the Lemma 2, we consider an arbitrary user Alice and an arbitrary time period, and compare two scenarios--one where Alice is truthful (hereby called the truthful scenario) and one where Alice is deviating by under-reporting her demand during some quantum (hereby called the deviating scenario). Our key insight for the proof is that bounding the increase in total allocation of _all users_ is easier than reasoning about the increase in total allocation of an individual user (Alice) since even a small change in Alice's demand during one quantum can result in cascading effects on the total allocation of other users as well. To that end, we prove the following claim: the total amount of resources all the users have earned in excess in the deviating scenario compared to the truthful one can be at most as large as Alice's total allocation in the truthful scenario. 
We prove this claim based on the following observation: whenever Alice under-reports her demand she is effectively "donating" the allocation she would have gotten in the truthful scenario to the other users whose allocations in the deviating scenario increase. Since Karma is Pareto efficient, the total gain in allocation across users during this quantum is limited by the amount donated by Alice, which is in turn bounded by Alice's own allocation during this quantum in the truthful scenario. By applying this reasoning iteratively across all quanta1, we can show that the total increase in allocation across all users cannot exceed the total allocation of Alice in the truthful scenario. This already implies a \(2\times\) upper bound on the maximum increase in total allocation that Alice can achieve.

Figure 4: **The phenomenon of users (left) gaining a small factor of improvement in their allocations by specifying demands less than their real demands, by exploiting knowledge of all future demands of all users; (right) any imprecision in the knowledge of future demands of all users could result in a significant reduction in useful allocations of the lying user.** The resource pool has 8 slices, and 4 users with fair share of 2 and guaranteed share of 0 (\(\alpha\!=\!0\)).

Footnote 1: It turns out that Alice under-reporting in a given quantum cannot cause cascading increases in total allocation across users in future quanta if Alice does not under-report in future quanta. This is because Karma prioritizes allocation to users with high credits (or equivalently low total allocations).

To tighten the upper bound, we prove a second claim: if Alice receives higher total allocation in the deviating scenario compared to the truthful scenario, then there must exist some other user Bob who gained an even larger increase in total allocation than Alice. Putting together the above two claims allows us to establish the desired upper bound. Based on the first claim, the total gain in allocation across all users cannot exceed Alice's total allocation in the truthful scenario. This implies that the sum of total gains across Alice and Bob cannot exceed Alice's total allocation in the truthful scenario. Since Bob's gain is at least as large as Alice's gain (based on the second claim), this implies that Alice's gain is at most half the total allocation of Alice in the truthful scenario--a gain of at most \(1.5\times\), thus proving the first part of Lemma 2. The second part of the lemma is proven by first creating a set of demands where a user can under-report its demand during quantum \(q\) to earn increased total allocation by some quantum \(q^{\prime}>q\). Then we create a set of demands that are identical up to quantum \(q\) but vastly different from quanta \(q+1\) to \(q^{\prime}\). If the user (in the hope of facing the first set of demands) under-reports its demand in quantum \(q\) but ends up facing the second set of demands, then this results in vastly different allocations by quantum \(q^{\prime}\). By correctly picking the two sets of demands we get the desired bounds. In §A.2, we prove an even stronger result that extends Karma properties from Theorem 1, Theorem 2, Lemma 1, and Lemma 2 to the case of multiple colluding users:

**Theorem 3**.: _No group of colluding users can increase their allocation by specifying a demand higher than their real demand.
Additionally, for any group of colluding users, under-reporting demands cannot lead to more than a \(2\times\) improvement in their useful resource allocation. Finally, even if users form coalitions, Karma is Pareto efficient and online strategy-proof._ Recall that Karma focuses on long-term fairness without a priori knowledge of future user demands. To that end, the following theorem summarizes Karma's fairness guarantees: **Theorem 4**.: _For any quantum \(q\), given fixed user allocations from quantum \(0\) to quantum \(q-1\), and user demands at quantum \(q\), Karma maximizes the minimum total allocation from quantum \(0\) to quantum \(q\) across users._ The proof for the above theorem follows from the prioritized resource allocation mechanism of Karma. Intuitively, given allocations from quantum \(0\) to \(q-1\), the user with the least total allocation up to quantum \(q-1\) will have the largest number of credits. In quantum \(q\), Karma will prioritize the allocation of resources to this user (until it is no longer the one with the minimum total allocation, after which it will prioritize the next user with the minimum total allocation, and so on), thus maximizing the minimum total allocation from quantum \(0\) to \(q\) across users--this is the best one can do in quantum \(q\) given past allocations. ### Discussion Finally, we briefly discuss some additional aspects of Karma design not included in the previous subsections. **Bootstrapping Karma with initial credits.** Recall that, to bootstrap users, Karma allocates each user an initial number of credits. The precise number of initial credits has little impact on Karma's behavior; after all, credits in Karma essentially capture a relative ordering between users, rather than having any absolute meaning. The only importance of the number of credits is to ensure that no user runs out of credits at any quantum (which, in turn, could lead to violation of Karma's Pareto efficiency guarantees): even if spare resources are available, a user with high demand may not be able to borrow resources beyond the guaranteed share (line 7 of Algorithm 1) due to running out of credits. Thus, Karma sets the number of initial credits to a large numerical value to ensure that no user ever runs out of credits2, Footnote 2: For example, in a system with 100 users with fair share of 100 slices, setting initial credits to say \(10^{13}\) will ensure that even a worst-case user with highest possible demand (10000 slices) during all quanta cannot run out of credits for \(\sim\)31 years, which is good enough for all practical purposes. **User churn.** Fairness is relatively ill-defined when users can join and leave the system on a short-term basis (_e.g._, when a user runs a short query with large parallelism, and then leaves the cluster). Also, recall from our motivating scenarios, fair resource allocation in private clouds is usually performed for long-running services. However, Karma still handles user churn since, in many realistic scenarios, the set of all users of the system may not be known upfront during system initialization. For users that join and leave over longer timescales, Karma handles user churn with a simple mechanism: its credits. When a new user joins, either the resource pool size remains fixed and the fair share of all users is reduced proportionally or the resource pool size increases and the fair share of users remains the same. 
The credits of the existing \(n-1\) users do not change, and the new user is bootstrapped with initial credits equal to the current average number of credits across the existing \(n-1\) users. Intuitively, users who have donated more resources than they have borrowed will have above-average credits, and those who have borrowed more than they have donated will have below-average credits. As such, initializing the new user with the average number of credits (heuristically) puts the new user on equal footing with an existing user that has borrowed and donated equal amounts of resources over time. When a user leaves the system, the fair share of the remaining users is increased proportionally (or resource pool size reduces while maintaining the same fair share), and there is no change in their credits. **Users with different fair shares.** We have presented Karma's algorithm for the case of users having the same fair share merely for simplicity: all our results extend to the case of users having different fair shares. To generalize the algorithm to users with different fair shares, users with larger weights are charged fewer credits to borrow resources beyond their guaranteed share when compared to users with smaller weights. Intuitively, this enables users with larger weights to obtain more resources than users with smaller weights for the same number of credits. We achieve this by updating Line 20 of Algorithm 1 to decrement credits by \(\frac{1}{n\cdot w_{i}}\) instead of 1, where \(w_{i}\) is the normalized weight of the corresponding user, and \(n\) is the number of users. For users with different fair shares, this generalization leads to the same properties and guarantees as discussed in SS3.3 (the only difference, is that the upper bound factor in Lemma 2 changes from \(1.5\times\) to \(2\times\)). A full description of the weighted version of the algorithm along with proofs of guarantees can be found in SSA.6. **System parameters, and interpretation for \(\alpha\).** Karma has only one parameter: \(\alpha\); one can think of resource slice size and quantum duration as parameters, but these are irrelevant to Karma's guarantees: they hold for any slice size and quantum duration, as long as demands change at coarse timescales than the quantum duration. The \(\alpha\) parameter in Karma provides a tradeoff between instantaneous and long-term fairness. Providers can choose any \(\alpha\) depending on the desired properties. Intuitively, an \(\alpha\) smaller than 1 leads to a larger portion of shared slices, giving Karma's algorithm more flexibility in adjusting allocations to achieve better long-term fairness. ## 4 Karma Implementation Details We have implemented Karma on top of Jiffy [42], an open-sourced elastic far memory system. Jiffy has a standard distributed data store architecture (Figure 5(a)): resources are partitioned into fixed-sized slices (blocks of memory) across a number of resource servers (memory servers), identified by their unique sliceIDs (referred to as blockIDs in Jiffy). A logically centralized controller tracks the available and allocated slices across the various resource servers and stores a mapping that translates sliceIDs to the corresponding resource server. We have implemented Karma as a new resource allocation algorithm at the Jiffy controller3. Footnote 3: Karma can thus directly piggyback on Jiffy’s existing mechanisms for controller fault tolerance [42, Section 4] to persist its state across failures. 
Users interact with the system through a client library that provides APIs for requesting resource allocation and accessing allocated resource slices. Users express their demands to the controller through resource requests which specify the number of slices required. The controller periodically performs resource allocation using the Karma algorithm and provides users with the sliceIDs of the resource slices that are allocated to them. Users can then directly access these slices from the resource servers through read or write API calls without requiring controller interposition. In the rest of this section, we discuss the key data structures and mechanisms required to integrate Karma with Jiffy. Karma employs three key data structures to efficiently implement the policies and mechanisms outlined in SS3: karmaPool, a credit map, and a rate map. **karmaPool.** Recall from SS3.2 that the karmaPool tracks the pool of donated slices and shared slices, and needs to be updated when resource allocations change. Also, the resource allocation algorithm should be able to efficiently select donated slices from a particular user while satisfying borrower demands (SS3.2.2). To this end, the karmaPool is implemented as a hash map, mapping userIDs to the list of sliceIDs corresponding to slices donated by them. The list of sliceIDs corresponding to shared slices is stored in a separate entry of the same hash map. When resource allocations change, the corresponding sliceIDs are added to or removed from the corresponding lists. As such, karmaPool supports all updates in \(O(1)\) time. **Credit Tracking.** Karma employs two data structures for tracking and allocating credits across various users: a rate map and a credit map. The rate map maps each user to the _rate_ at which it earns or spends its credits every quantum, that is, the difference between the user's guaranteed share and the number of its allocated slices in that quantum. The rate is positive when the user is earning (that is, has donated slices) and negative when it is spending credits (that is, has borrowed slices), respectively. The credit map, on the other hand, maps each user to a counter corresponding to its current credits. Separating the rate map and credit map facilitates efficient credit tracking at each quantum: Karma simply iterates through the rate map entries, and updates the credit counters in the credit map based on the corresponding user credit rates. Since the rate map only contains entries for users with non-zero rates, Karma can efficiently update credits for only the relevant users. At the same time, Employing a hash-map for each of them permits \(O(1)\) updates to the user credit rate or number of credits while performing resource allocation. **Borrowing and donating slices.** Karma realizes its credit-based prioritized allocation algorithm (SS3.2) using two modules at the controller. First is a _slice allocator_ that maintains the karmaPool to track and update slice allocations across users, and, second a _credit tracker_ that maintains the current number of credits for any user (via Credit Map) and how it should be updated (via Rate Map). Figure 5(b) shows these modules along with the data structures they manage. The slice allocator intercepts resource requests from users, periodically executes the Karma resource allocation algorithm (Algorithm 1) to compute allocations based on the user demands, and updates slices in the karmaPool accordingly. It interacts with the credit tracker to query and update user credits. 
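A minimal sketch of these three structures and their per-quantum maintenance is shown below; the class and field names are ours (illustrative, not Jiffy's actual API), and only the bookkeeping described above is modeled.

```python
# Sketch of Karma's controller-side bookkeeping: karmaPool (userID -> donated
# sliceIDs, plus a shared-slice list), a rate map, and a credit map. The updates
# shown are O(1) dictionary/list operations, and the per-quantum credit tick only
# touches users with a non-zero rate.

class KarmaState:
    SHARED = "__shared__"        # key for slices not guaranteed to any user

    def __init__(self, users, shared_slices, initial_credits):
        self.karma_pool = {u: [] for u in users}
        self.karma_pool[self.SHARED] = list(shared_slices)
        self.credit_map = {u: initial_credits for u in users}
        self.rate_map = {}       # userID -> credits earned/spent per quantum

    def donate(self, user, slice_id):
        self.karma_pool[user].append(slice_id)       # O(1) append

    def set_rate(self, user, guaranteed_share, allocated):
        rate = guaranteed_share - allocated          # > 0 donor, < 0 borrower
        if rate:
            self.rate_map[user] = rate
        else:
            self.rate_map.pop(user, None)

    def end_of_quantum_tick(self):
        for user, rate in self.rate_map.items():     # only non-zero rates
            self.credit_map[user] += rate

# Example: two users sharing 4 shared slices.
state = KarmaState(["A", "B"], shared_slices=[0, 1, 2, 3], initial_credits=6)
state.donate("A", 42)
state.set_rate("A", guaranteed_share=1, allocated=0)   # A donates 1 slice
state.set_rate("B", guaranteed_share=1, allocated=3)   # B borrows 2 slices
state.end_of_quantum_tick()
print(state.credit_map)   # -> {'A': 7, 'B': 4}
```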
A naive implementation of Algorithm 1 runs in \(O(n\cdot f\cdot\log n)\) time, where \(n\) is the number of users, and \(f\) is the fair share4. Instead of computing allocations one slice at a time, we use an optimized implementation that carefully computes them in a batched fashion. This enables the slice allocator to support resource allocation at fine-grained timescales.

Footnote 4: The loop in Line 10 of Algorithm 1 takes \(O(n\cdot f)\) iterations and each iteration would take \(O(\log n)\) time to find the donor/borrower with the minimum/maximum credits (if we were to maintain min/max heaps for the donor and borrower sets).

**Consistent hand-off of resources.** Since users are allowed to directly access slices from resource servers, we need to ensure consistent hand-off of slices from one user to another when slices are reallocated. For example, say user \(U_{1}\) has a slice during a given quantum, and in the next quantum, this slice is allocated to user \(U_{2}\). We need to ensure that (1) \(U_{1}\)'s data is flushed to persistent storage before \(U_{2}\) overwrites it, and (2) \(U_{1}\) should not be able to read/write to the slice after \(U_{2}\) has accessed it (for example, there could be in-flight read/write requests to the slice which were initiated before \(U_{1}\) learns that its allocation has changed). Karma ensures the above by maintaining a monotonically increasing sequence number and current userID for each slice, at both the controller (within the karmaPool) and the resource servers (as slice metadata). On slice allocation, its userID is updated and its sequence number is incremented at the controller, and the sequence number is returned to the user. Subsequent user reads and writes to the slice specify this userID and sequence number. A slice read succeeds only if the accompanying sequence number is the same as the current slice sequence number, while a slice write succeeds only if the accompanying sequence number is the same or greater than the current sequence number. If a write necessitates an overwrite of the current slice content and metadata, the old slice content is transparently flushed to persistent storage (_e.g._, S3) before the overwrite. In our example above, \(U_{2}\)'s first access to the slice after re-allocation will trigger a flush of \(U_{1}\)'s data to S3 and update the slice sequence number. Following this, \(U_{1}\)'s accesses to this slice will fail since the current sequence number of the slice is higher. \(U_{1}\) can then read/write this data from persistent storage. Implementing consistent resource hand-off in Jiffy required minor changes to the controller (to track sequence numbers per slice), memory servers (to perform sequence number checking), and the client library (to tag requests with sequence numbers).

## 5 Evaluation

We have already established Karma properties theoretically in §3. In this section, we evaluate how Karma's properties translate to application-layer benefits over an Amazon EC2 testbed with real-world workloads. Our evaluation demonstrates that:

* Karma reduces the performance disparity between different users by \(\sim\) 2.4\(\times\) relative to classic max-min fairness, without compromising on system-wide utilization or average performance (§5.1);
* Karma incentivizes users to share resources, quantifying Karma's online strategy-proofness property (§5.2).

We primarily focus on the shared cache use case from §2 for the following reason.
While datasets for the shared data analytics clusters use case are publicly available (_e.g._, Google and Snowflake datasets), they do not provide user queries, which may impact our final conclusions. For the shared cache use case, we do have all the information we need: these datasets provide information on the working set size of each user over time, which can be fed into an end-to-end multi-tenant in-memory cache system running on Amazon EC2. We, thus, focus on this use case.

**Experimental setup.** Our experimental setup consists of a distributed elastic in-memory cache shared across multiple users backed by a remote persistent storage system. For the cache, we use Jiffy [42], augmented with our implementation of Karma (§4) and other evaluated schemes. If the evaluated scheme does not allocate sufficient slices to a user on Jiffy to fit its entire working set, the remaining data is accessed from remote persistent storage. When slices are reallocated between users across quanta, the corresponding data is moved between Jiffy and persistent storage through the consistent hand-off mechanism described in §4. We deployed our setup on Amazon EC2 using c5n.9xlarge instances (36 vCPUs, 96GB DRAM, 50Gbps network bandwidth). We host the Jiffy controller and resource servers across 7 instances and use 25 instances for the users/clients that issue queries to Jiffy. We use Amazon S3 as the persistent storage system.

Figure 5: **Karma Design. See §4 for details.**

**Workload.** We use the publicly available Snowflake dataset [72] that provides dynamic user demands in terms of memory usage for each customer from Snowflake's production cluster. We use these demands as the dynamic working set size for individual users. For each user, we issue data access queries using the standard YCSB-A workload [20] (50% read, 50% write) with uniform random access distribution, with queries during each quantum being sampled (according to the YCSB parameters) within the instantaneous working set size of that user. If a query references data that is currently cached in Jiffy, then it is serviced directly from the corresponding resource server; otherwise, it is serviced from the persistent storage.

**Default parameters.** Unless specified otherwise, we randomly choose 100 users (out of \(\sim\) 2000 users) over a randomly-chosen 15 minute time window (out of a 14-day period) in the Snowflake workload. To test for extreme scenarios, we set the length of each quantum to be one second (that is, a total of 900 quanta). The fair share of each user is 10 slices, and the total memory capacity of the system is set to the number of users times the fair share (1000 slices). Each slice is 128MB in size, while each query corresponds to a read or write to a 1KB chunk of data (the default size in the YCSB workload).

**Compared schemes.** We compare Karma to strict partitioning and max-min fairness, since they correspond to the two most popular fair allocation schemes, and represent extremes in resource allocation and performance. When evaluating Karma, we set the number of initial credits to a large value5. The fraction of fair share that is guaranteed (\(\alpha\)) is 0.5 by default.

Footnote 5: As discussed in §3.4, the precise value is unimportant. Here, we set it to 900,000, so that even if a user was allocated the full system capacity for the entire duration (1000\(\times\)900) it would not run out of credits.
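For reference, the max-min fairness baseline can be sketched as standard progressive filling over a single quantum's demands; this is our illustrative rendering of the baseline, not the paper's implementation, and the example demands are made up.

```python
# Progressive-filling sketch of the classic max-min fairness baseline used for
# comparison: repeatedly split the remaining capacity equally among users whose
# demand is not yet satisfied. Illustrative only.

def max_min_allocation(demands, capacity):
    alloc = [0.0] * len(demands)
    active = [i for i, d in enumerate(demands) if d > 0]
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
        active = [i for i in active if alloc[i] < demands[i]]
    return alloc

# 6 slices, per-quantum demands of 4, 1, and 1: the two small demands are capped,
# and the remaining capacity goes to the first user.
print(max_min_allocation([4, 1, 1], capacity=6))   # -> [4.0, 1.0, 1.0]
```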
**Metrics.** We evaluate system-wide resource utilization, along with both per-user and system-wide performance--key metrics for any resource allocation mechanism. For performance, we measure both throughput and latency (average and 99.9th percentile tail). We define performance _disparity_ for an allocation scheme as the ratio of median to minimum performance (that is, throughput or latency) observed across various users. For any given user, we define _welfare_ over time \(t\) as \(\frac{\sum_{t}\text{allocations}}{\sum_{t}\text{demands}}\), that is, the fraction of its total demands satisfied by the allocation scheme. We define _fairness_ as \(\frac{\min_{\text{user}}\text{welfare}}{\max_{\text{user}}\text{welfare}}\) (higher is better, 1 is optimal), as a measure of welfare disparity between users.

### Understanding Karma Benefits

We now evaluate Karma's benefits in terms of reducing disparity across users' application-level performance as well as resource allocation.

Figure 6: **Understanding Karma benefits. (a) Karma enables a much tighter throughput distribution across users (colored arrows show the absolute gap between median and minimum throughput across users). (b, c) It also enables a tighter distribution of average and tail latencies across users (again, colored arrows show the absolute gap between median and maximum latency across users). (d) Karma achieves much lower throughput disparity—ratio of median to minimum values of throughput across users—than classic max-min fairness. (e) It also significantly reduces the gap between the users with minimum and maximum overall allocations, (f) while achieving similar system-wide performance as max-min fairness.**

**Karma reduces performance disparity between users.** Figure 6(a) shows the throughput distribution across users for our compared schemes; the y-axis is presented in log-scale to focus on the users at the tail of the distribution, which observe the most performance disparity. Since Karma strives to balance fairness over time, it significantly narrows the throughput distribution across users compared to the two baselines: the ratio between the maximum and minimum throughput across all users is 7.8\(\times\) with strict partitioning and 4.3\(\times\) with max-min fairness, but only 1.8\(\times\) for Karma. As Figure 6(d) shows, Karma lowers the throughput disparity across users by 2.4\(\times\) compared to max-min fairness. Karma also reduces average latency disparity (Figure 6(b)) by 2.4\(\times\) and 99.9th percentile latency disparity (Figure 6(c)) by 1.2\(\times\) compared to max-min fairness by enabling a tighter distribution for both latencies. Equitability in performance across users for a scheme is closely tied to how fairly resources are allocated across users. Specifically, because of the large gap between elastic memory (Jiffy) and S3 latencies (50-100\(\times\)), accesses to slices in S3 result in significantly lower throughput than accesses to slices in elastic memory. As a result, users' average throughput ends up being roughly proportional to their total allocation of slices in elastic memory over time. Similarly, since a larger total allocation results in a smaller fraction of requests going to S3, average and tail latencies also reduce.

**Karma reduces disparity in allocations.** We now quantify disparities in overall allocations obtained by users across our compared schemes via our fairness metric in Figure 6(e).
Due to dynamic demands, strict partitioning exhibits very poor fairness, since users with very bursty demands end up getting much lower total allocations than users who have steady demands6. While max-min fairness observes better fairness compared to strict partitioning, the best-off user still receives 4\(\times\) higher allocation than the worst-off user, resulting in poor absolute fairness. Karma achieves significantly better fairness, with the best-off user receiving only 1.5\(\times\) higher allocation than the worst-off user. It is able to achieve this by prioritizing the allocation of resources beyond the fair share to users with more credits (§3.2.2).

Footnote 6: Note that only _useful_ allocations are considered—strict partitioning guarantees a fixed allocation at all times, but resources may remain unused when demand is low.

**Karma achieves Pareto efficiency and high system-wide performance.** Karma achieves the same overall resource utilization as max-min fairness (\(\sim\) 95%). This is because Karma is Pareto efficient (§3.3) similar to max-min fairness and thus achieves near-optimal utilization. We find that the optimal utilization is \(<\) 100% since some quanta observe total user demands less than system capacity. Max-min fairness observes 1.4\(\times\) higher system-wide throughput (that is, throughput aggregated across all users) than strict partitioning (Figure 6(f)) since it permits allocations beyond the fair share, allowing more requests to be served on faster elastic memory. Karma observes system-wide performance similar to max-min fairness for similar reasons; the slight variations are attributed to variance in S3 latencies.

### Karma Incentives

We now empirically demonstrate that Karma incentivizes users to donate resources instead of hoarding them, to improve their own as well as overall system welfare. To this end, we vary the fraction of users using Karma that are _conformant_ or _non-conformant_. A conformant user is truthful about its demands and donates its resources when its demand is less than its fair share. A non-conformant user, on the other hand, always asks for the maximum of its demand or its fair share (that is, it over-reports its demand during some quanta).

Figure 7: **Karma incentivizes resource sharing. All metrics are computed as averages (with error bars) for three random selections of users being non-conformant. See §5.2 for details.**

**Resource utilization and system-wide performance improve with more conformant users.** Figure 7(a) and Figure 7(b) show that Karma's system-wide utilization and performance improve as the fraction of conformant users increases. This is because as more users donate resources when they do not need them, other users can use these resources, improving overall utilization and performance. When none of the users are conformant, since no one ever donates any resources, Karma essentially reduces to strict partitioning, hence achieving low overall utilization and performance. When all users are conformant, Karma achieves optimal utilization and performance, similar to classic max-min fairness.

**Becoming conformant improves user welfare.** Figure 7(c) shows the average welfare gain non-conformant users would achieve if they were to become conformant. When non-conformant users become conformant, it leads to significant (1.17-1.6\(\times\)) welfare gains for them, empirically validating Karma's property that users have nothing to gain by over-reporting their demand (§3.3).
Note that the gain varies with the number of conformant users in the system--the gains from non-conformant users becoming conformant are higher when the percentage of conformant users is low. As expected, the gains show diminishing returns as more users in the system become conformant as overall utilization is already high. ### Karma Sensitivity Analysis We now show sensitivity analysis with the only parameter in the Karma algorithm-the instantaneous guarantee (\(\alpha\)). Figure 8 shows the resource utilization, system-wide performance, and fairness with \(\alpha\) varying between 0 and 1. Karma continues to match the resource utilization and system-wide performance of max-min fairness independent of \(\alpha\) (Figure 8(a) and Figure 8(b)). Varying \(\alpha\) has an impact on the long-term fairness achieved by Karma (Figure 8(c)), with smaller values of \(\alpha\) resulting in improved fairness, thus validating our discussion in SS3.4. Even for \(\alpha=1\), Karma is able to achieve significantly better fairness compared to max-min fairness. This is because, while it allocates resources up to the fair share identically to max-min fairness, it prioritizes allocation beyond the fair share based on credits. ## 6 Related Work There is a large and active body of work on resource allocation and scheduling, exploring various models and settings; it would be a futile attempt to compare Karma with each individual work. We do not know of any other resource allocation mechanism that guarantees Pareto efficiency, strategy-proofness, and fairness similar to Karma for the case of dynamic user demands; nevertheless, we discuss below the most closely related works. **Max-min fairness variants in cloud resource allocation and cluster scheduling.** Many works study variants of max-min fairness for cloud resource allocation and cluster scheduling [8, 10, 17, 31, 32, 33, 34, 45, 47, 58, 59, 60, 65, 77, 77], including recent work on ML job scheduling [15, 35, 48, 51, 56]. We make three important notes here. First, while dominant resource fairness (DRF) [31] has generalized max-min fairness to multiple resources, it makes the same assumptions as max-min fairness: user demands being static over time; our goals are different: we have identified and resolved the problems with max-min fairness for the case of a single resource but over dynamic user demands. It is an interesting open problem to generalize Karma for the case of multiple resources. Second, cluster scheduling has been studied under several metrics beyond fair resource allocation (_e.g._, job completion time, data locality, priorities, etc.). Themis [48] considers long-term fairness but defines a new ML workload-specific notion of fairness, and is therefore not directly comparable to Karma. Our goals are most aligned with those works that study fair allocation under strategic users while guaranteeing Pareto efficiency. To that end, the closest to Karma is Carbyne[33]. However, Carbyne not only assumes non-strategic users but also, for the single-resource case (the focus of this paper), Carbyne converges to max-min fairness. As discussed earlier, generalizing Karma to multiple resources remains an open problem; a solution for that problem must be compared against Carbyne. Finally, fairness in application-perceived performance is only indirectly related to fairness in resource allocation: other factors like software systems (_e.g._, hypervisors and storage systems) and resource preemption granularity can impact performance. 
Similar to other mechanisms [10, 16, 31, 32, 33, 34, 40, 46, 67, 68, 72, 77, 80], Karma's properties are independent of these system-level factors; while our evaluation shows that Karma properties translate to application-level benefits, absolute numbers depend on the underlying system implementation.

Figure 8: **Sensitivity analysis with varying instantaneous guarantee (\(\alpha\)) (a, b) Karma matches the resource utilization and system-wide performance of max-min fairness independent of \(\alpha\) (c) Smaller values of \(\alpha\) result in improved long-term fairness.**

**Allocation of time-shared resources.** Generalized Processor Sharing (GPS) [55] is an idealized algorithm for sharing a network link which assumes that traffic is infinitesimally divisible (fluid model). For equal-sized packets and equal flow weights, GPS reduces to Uniform Processor Sharing [55, Section 2], which is equivalent to max-min fairness. GPS guarantees fairness over arbitrary time intervals only under the assumption that flows are _continuously backlogged_ [55, Section 2]. This assumption implies that flows always have demand greater than their fair share, making it trivial to guarantee a max-min fair share of the network bandwidth over arbitrary time intervals. Classical fair-queueing algorithms [11, 24, 49, 66, 83] in computer networks approximate GPS with the constraint of packet-by-packet scheduling. Under this constraint, varying-sized packets and different flow weights make it hard to realize fairness efficiently; thus, the technical question that these algorithms solve is to achieve fairness approximately equal to GPS with minimal complexity. Karma focuses on a different problem--we show that GPS guarantees (equivalent to max-min fairness) are not sufficient when demands are dynamic and present new mechanisms to achieve fairness while maintaining other properties for such dynamic demands. Stride [74] scheduling essentially approximates GPS in the context of CPU scheduling [74, Section 7], and thus the above discussion applies to it as well. DRF-Q [30] generalizes DRF to support both space and time-shared resources, but is explicitly designed to be memoryless similar to max-min fairness, and therefore suffers from similar issues for long-term fairness. Least Attained Service (LAS) [13, 44, 54] is a classical job scheduling algorithm that has been applied to packet scheduling [13], GPU cluster scheduling [35], and memory controller scheduling [43]. For \(\alpha=0\), Karma behaves similarly to LAS, and for \(\alpha>0\), Karma generalizes LAS with instantaneous guarantees. Moreover, our results from §3.3 establish strategy-proofness properties of LAS for dynamic user demands, which may be of independent interest.

**Theory works.** Several recent papers in the theory community study the problem of resource allocation for dynamic user demands. Freeman et al. [27] and Hossain et al. [38] consider dynamic demands under a different setting, where users can benefit when they are allocated resources above their demand; under this setting, they focus on instantaneous fairness (which is non-trivial since users can be allocated resources beyond their demand). Karma instead focuses on long-term fairness under the traditional model, where users do not benefit from resources beyond their demands. Sadok et al. [63] present minor improvements over max-min fairness for dynamic demands.
Their mechanism allocates resources in a strategy-proof manner according to max-min fairness while marginally penalizing users with larger past allocations using a parameter \(\delta\in[0,1)\). For both \(\delta=0\) and \(\delta\to 1\), the penalty goes to \(0\) for every past allocation, and the mechanism becomes identical to max-min fairness; for other values of \(\delta\), the penalty is at most a \(\delta(1-\delta)\leq 1/4\) fraction of past allocation surplus, and it reduces exponentially with time (users who were allocated large amounts of resources further in the past receive an even smaller penalty). Thus, for all values of \(\delta\), and in particular, for \(\delta=0\) and \(\delta\to 1\), their mechanism suffers from the same problems as max-min fairness. Aleksandrov et al. [7] and Zeng et al. [82] consider dynamic demands, but in a significantly different setting than ours where resources arrive over time. **Pricing- and credit-based resource allocation.** Another stream of work related to Karma is pricing-based and bidding-based mechanisms for resource allocation, _e.g._, spot instance marketplace and virtual machine auctions [84, 85, 1, 6, 28, 1, 6]. While interesting, this line of work does not focus on fair resource allocation and is not applicable to use cases that Karma targets. XChange [75] proposes a market-based approach to fair resource allocation in multi-core architectures but focuses on instantaneous fairness rather than long-term fairness, unlike Karma. It assigns a "budget" of virtual currency to each user which can be used to bid for resources. This budget is however reset during every time quantum, and therefore information about past allocations is not carried over. Credits are used in many other game theoretic contexts [62, 25, 25], _e.g._, in peer-to-peer and cooperative caching settings to incentivize good behavior among participants with static demands [78, 21, 57]. However, we are not aware of any credit-based mechanisms that deal with resource allocation in the context of dynamic user demands. ## 7 Conclusion This paper builds upon the observation that the classical max-min fairness algorithm for resource allocation loses one or more of its desirable properties--Pareto efficiency, strategy-proofness, and/or fairness--for the realistic case of dynamic user demands. We present Karma, a new resource allocation mechanism for dynamic user demands, and theoretically establish Karma guarantees related to Pareto efficiency, strategy-proofness, and fairness for dynamic user demands. Experimental evaluation of a realization of Karma in a multi-tenant elastic memory system demonstrates that Karma's theoretical properties translate well into practice: it reduces application-level performance disparity by as much as \(2.4\times\) when compared to max-min fairness while maintaining high resource utilization and system-wide performance. Karma opens several exciting avenues for future research. These include (but are not limited to) extending Karma theoretical analysis for \(\alpha>0\), generalizing Karma to allocate multiple resource types (similar to DRF), extending Karma to handle all-or-nothing or gang-scheduling constraints which are prevalent in the context of GPU resource allocation [15, 48], and applying Karma to other use cases such as inter-datacenter network bandwidth allocation and resource allocation for burstable VMs in the cloud. ## Acknowledgements We thank our shepherd, Sebastian Angel, and the OSDI reviewers for their insightful feedback. 
This research was supported in part by NSF CNS-1704742, CNS-2047220, CNS-2047283, CNS-2104292, CNS-2143868, AFOSR grants FA9550-19-1-0183, FA9550-23-1-0068, a NetApp Faculty Fellowship, an NDSEG fellowship, a Sloan fellowship, and gifts from Samsung, VMware, and Enfabrica.
2305.18111
The Minimax Risk in Testing Uniformity of Poisson Data under Missing Ball Alternatives within a Hypercube
We study the problem of testing the goodness of fit of occurrences of items from many categories to an identical Poisson distribution over the categories. As a class of alternative hypotheses, we consider the removal of an $\ell_p$ ball, $p \leq 2$, of radius $\epsilon$ from a hypercube around the sequence of uniform Poisson rates. When the expected number of samples $n$ and number of categories $N$ go to infinity while $\epsilon$ is small, the minimax risk asymptotes to $2\Phi(-n N^{2-2/p} \epsilon^2/\sqrt{8N})$; $\Phi(x)$ is the normal CDF. This result allows the comparison of the many estimators previously proposed for this problem at the constant level, rather than at the rate of convergence of the risk or the scaling order of the sample complexity. The minimax test relies exclusively on collisions in the small sample limit but behaves like the chisquared test otherwise. Empirical studies over a range of problem parameters show that the asymptotic risk estimate is accurate in finite samples and that the minimax test is significantly better than the chisquared test or a test that only uses collisions. Our analysis combines standard ideas from non-parametric hypothesis testing with new results in the low count limit of multiple Poisson distributions, including the convexity of certain kernels and a central limit theorem of linear test statistics.
Alon Kipnis
2023-05-29T14:26:42Z
http://arxiv.org/abs/2305.18111v7
The Minimax Risk in Testing the Histogram of Discrete Distributions for Uniformity under Missing Ball Alternatives ###### Abstract We consider the problem of testing the fit of a discrete sample of items from many categories to the uniform distribution over the categories. As a class of alternative hypotheses, we consider the removal of an \(\ell_{p}\) ball of radius \(\epsilon\) around the uniform rate sequence for \(p\leq 2\). We deliver a sharp characterization of the asymptotic minimax risk when \(\epsilon\to 0\) as the number of samples and number of dimensions go to infinity, for testing based on the occurrences' histogram (number of absent categories, singletons, collisions,...). For example, with \(p=1\) and in the limit of a small expected number of samples \(n\) compared to the number of categories \(N\) (aka "sub-linear" regime), the minimax risk \(R_{\epsilon}^{*}\) asymptotes to \(2\bar{\Phi}\left(n\epsilon^{2}/\sqrt{8N}\right)\), with \(\bar{\Phi}(x)\) the normal survival function. Empirical studies over a range of problem parameters show that this estimate is accurate in finite samples and that our test is significantly better than the chisquared test or a test that only uses collisions. Our analysis relies on the asymptotic normality of histogram ordinates, the equivalence between the minimax setting and a Bayesian setting, and the characterization of the least favorable prior by reducing a multi-dimensional optimization problem to a one-dimensional problem. ## I Introduction ### _Background_ We have data recording the occurrences of items from a large number of \(N\) categories. We are interested to test if the occurrences are uniform in the sense that they all obey the same Poisson law independently across categories, or perhaps they obey Poisson laws of inhomogeneous rate sequence taken from an alternative class obtained by the removal of an \(\ell_{p}\) ball of radius \(\epsilon\) around the uniform frequency distribution \((1/N,\ldots,1/N)\in\mathbb{R}_{+}^{N}\). In particular, we seek to characterize the minimax risk \(R_{\epsilon}^{\star}\) against this alternative class as a function of \(\epsilon\), \(N\), and the average number of items in the sample \(n\). The minimax analysis is commonly understood as a two-person game of the statistician versus Nature: Nature plays a choice of a frequency distribution \(Q\in\mathbb{R}_{+}^{N}\) which may be in the null or the alternative. Such a choice gives rise to a distribution of the data. The statistician plays an estimator \(\psi\) to decide whether \(Q\) is in the null or the alternative. The problem of testing for the uniformity of a sample over \(N\) categories arises frequently in science and engineering. As recent examples, the uniformity of sequences of bitstrings appears to play significant roles in the validity of certain claims in quantum theory and computing [1, 2, 3]. This problem is closely related to non-parametric hypothesis testing on densities [4, 5, 6] and to testing for the uniformity of a multinomial sample [7, 8, 9, 10]. In these contexts, [4] characterized the minimax risk when each sequence in the alternative is a binned version of a smooth density function and showed that the minimax test is based on the natural chisquared statistic. In recent years, works originating from the field of property testing in computer science [11] focused on testing uniformity against discrete distributions that do not necessarily arise as binned versions of smooth densities [8, 9, 12, 13, 14]. 
Instead, they may be unrestricted or obey other properties [15, 16, 17]. Furthermore, the focus is usually on the case of \(n\) much smaller than \(N\), denote as the "sub-linear" regime. These works implicitly use a type of minimax risk analysis by considering the sample complexity, which in most cases amounts to the scaling rule of the number of samples guaranteeing vanishing minimax risk in the other problem parameters. Nevertheless, these previous works neither provide the minimax risk nor the minimax test which remained open problems. The goal of the present work is to fill these gaps for tests based on the sample histogram. ### _Contributions_ In this work, we make no assumptions about the smoothness of sequences in the alternative. We deliver expressions for the asymptotic minimax risk and the minimax test when testing based on the counts' histogram (number of missing categories, singletons, collisions,...) in the limit of small \(\epsilon\) and large \(n\) and \(N\). We assume that \(p\in(0,2]\) and that \(n/N\) is uniformly bounded from above, a situation that covers the so-called "sub-linear" regime \(n\ll N\). Under this situation, the minimax risk \(R_{\epsilon}^{\star}\) asymptotes to \[2\bar{\Phi}\left(\eta/2\right),\quad\bar{\Phi}(x):=\frac{1}{\sqrt{2\pi}}\int_{ x}^{\infty}e^{-x^{2}/2}dx.\] Here \(\eta\) is a function of \(n\), \(\epsilon\), \(N\), and \(p\), for which we provide an explicit expression in Corollary 2.2. As a numerical example, suppose that we have \(N=10^{5}\) categories and are interested in testing uniformity against any alternative separated by at least \(\epsilon=0.2\) in the \(\ell_{1}\) norm. Our analysis implies that we must draw about \(n\approx 4\cdot 10^{4}\) samples to guarantee that the sum of Type I and Type II error probabilities does not exceed \(R_{\epsilon}^{\star}=0.25\). For comparison, with \(n=2\cdot 10^{4}\) the estimated risk is \(R^{\star}\approx 0.34\) which is much larger than what we specified; numerical evaluations show that these asymptotic estimates are fairly accurate. We emphasize that analysis involving only the scaling order of the sample complexity guaranteeing a vanishing minimax risk (e.g., as in [18]) cannot deliver sample size estimates as above, cannot compare among several potential testing procedures in the minimax sense, and does not yield the minimax test [19]. In addition to the theoretical analysis, we use numerical simulations to demonstrate the dominance of the minimax test over collision-based and chisquared tests under the least favorable member of the alternative set. Our methodology is based on the reduction of the minimax problem to a Bayesian problem [4, 20, 21, 22, 6, 23, 24] and on the transition from minimization over sequence priors to a series of one-dimensional optimization problems by adapting methods from [25, 22]. It seems possible to extend our analysis to test non-uniform distributions and the removal of convex bodies other than the \(\ell_{p}\) balls by following similar arguments as in [4] and [20]; we leave these extensions as future work. For example, we anticipate that an additional \(\ell_{q}\) restriction for \(q\to 0\) on the class of alternatives would lead to the least favorable prior in the sparse settings in a form considered in [26][27]. ### _Problem Formulation_ Let \(O_{1},\ldots,O_{N}\) record the occurrences of items from \(N\) categories in the data. We assume that the occurrences are independent and that \(O_{i}\) is sampled from a Poisson distribution of rate \(nQ_{i}\). 
We are interested to test whether \(Q_{i}=1/N\) for all \(i=1,\ldots,N\), or not. We reduce attention to the counts' histogram ordinates (aka the data's _pattern_[28] or _fingerprint_[29]) \[X_{m}=\sum_{i=1}^{N}\mathbf{1}\{O_{i}=m\},\qquad m=0,1,2,\ldots.\] For example, \(X_{0}\) is the number of categories not represented in the sample, \(X_{1}\) is the number of singletons, and \(X_{2}\) is the number of collisions or coincidences. As a set of alternative rate sequences \(V_{\epsilon}\), we consider \[V_{\epsilon}:=B_{\xi/N}^{\infty}(U)\setminus B_{\epsilon}^{p}(U),\] where \[B_{\xi/N}^{\infty}(U) :=\{Q\,:\,\|U-Q\|_{\infty}\leq\xi/N\}\,,\quad\xi>0,\] \[B_{\epsilon}^{p}(P) :=\{Q\,:\,\|U-Q\|_{p}\leq\epsilon\}\,,\;\;\epsilon>0,\;\;\;p\in(0,2].\] In words, the set of alternatives \(V_{\epsilon}\) is an \(\ell_{\infty}\) ball of radius \(\xi/N\) around \(U\) with an \(\ell_{p}\) ball removed. The maximal deviation of \(\xi/N\) says that the per-coordinate departure in the alternative sequence is at most proportional to \(\|U\|_{\infty}=1/N\), a requirement that seems reasonable when focusing on small departures. As it turns out, \(\xi\) has no effect on our analysis as long as \(\xi\to 0\) as \(n\) goes to infinity slowly enough such that \(V_{\epsilon}\) is non-empty (the situation appears to be different in the case \(p>2\) that is not considered here). Furthermore, our analysis reveals that the \(\ell_{\infty}\) constraint is benign in the sense that the least favorable sequence in the alternatives class is in the interior of \(B_{\xi/N}^{\infty}(U)\). We note that \(V_{\epsilon}\) is empty unless \[\epsilon\leq\xi N^{-1+1/p}, \tag{1}\] an assumption we use throughout to obtain a meaningful setting. In particular, \(\epsilon\to 0\) because \(\xi\to 0\), although this convergence may be arbitrarily slow when \(p=1\). To summarize, the choice of \(Q\) give rise to a distribution over \(X_{0},X_{1},\ldots\); we test \[H_{0}\,:\,Q=U\;\;\text{versus}\;\;H_{1}\,:\,Q\in V_{\epsilon}. \tag{2}\] Given a test \(\psi:\mathbb{N}^{\infty}\to\{0,1\}\) and a specific sequence of frequencies \(Q\), the risk of \(\psi=\psi(X_{0},X_{1},\ldots)\) is \[R(\psi,Q):=\Pr\left[1-\psi|H_{1}\right]+\Pr\left[\psi|H_{0}\right].\] The goal of our study is the minimax risk defined as \[R_{\epsilon}^{\star}:=\inf_{\psi}\sup_{Q\in V_{\epsilon}}R(\psi,Q). \tag{3}\] ### _Contributions_ We derive an expression for \(R_{\epsilon}^{*}\) and the minimax test based on \(X_{0},X_{1},\ldots\) in the asymptotic setting of \(\epsilon\to 0\) as \(n\to\infty\) while \(n/N\) is bounded from above (Corollary 2.2). In particular, in the small sample limit \(n/N\to 0\) with \(p=1\), we have \[R_{\epsilon}^{*}=2\bar{\Phi}\left(\frac{\epsilon^{2}n}{\sqrt{8N}}\right)+o(1), \tag{4}\] where \(o(1)\) denotes a sequence tending to zero uniformly in \(N\) and \(\epsilon\). We derive these results by studying (3) via a Bayesian setup in which \(Q\) is randomly sampled from a prior distribution over positive sequences such that the mean sequence belongs to \(V_{\epsilon}\)[4, 20, 21, 22, 6, 23, 24]. We first derive an asymptotically optimal test against such prior using standard linear discriminant analysis (Theorem 1). The set of alternative priors is associated with a convex set of kernels, hence the least favorable prior arises as the solution to a tractable optimization problem involving only two parameters (Theorem 2). The least favorable prior implies the minimax test and the minimax risk. 
Finally, we conduct numerical simulations to verify the asymptotic analysis in finite sampling conditions and to demonstrate the dominance of the minimax test over the chisquared test and a test that is based on collisions. ### _Paper Outline_ The analysis and results are described in Section II. In Section III, we consider the small sample limit \(n/N\to 0\). In Section IV, we report on numerical simulations. Additional discussion and remarks are provided in Section V. All the proofs are provided in the Appendix. ## II Analysis and Main Results ### _Bayesian Setup_ Assume that the sequence \(Q\) is sampled from some prior \(\pi^{N}\) over \(\mathbb{R}_{+}^{N}\), where \(\mathbb{R}_{+}:=[0,\infty)\). The Bayes risk of a test \(\psi\) is defined as \[\rho(\pi^{N};\psi)=\mathbb{E}_{Q\sim\pi^{N}}\left[R(\psi,Q)\right].\] We consider a set of priors in which the \(i\)-th coordinate of each member \(\pi^{N}\) is a probability measure over the real line such that each \(Q\sim\pi^{N}\) belongs to \(V_{\epsilon}\) "on average": \[\Pi_{\epsilon}:=\left\{\pi^{N}\,:\,\mathbb{E}_{Q\sim\pi^{N}} \left[\|Q-U\|_{p}^{p}\right]\geq\epsilon^{p},\right. \tag{5}\] \[\left.\mathbb{E}_{Q\sim\pi^{N}}\left[\|Q-U\|_{\infty}\right]\leq \xi/N\right\},\] where we used the notation \[\mathbb{E}_{Q\sim\pi^{N}}\left[F(Q)\right] =\sum_{i=1}^{N}\mathbb{E}_{Q_{i}\sim\pi_{i}}\left[F(Q_{i})\right]\] \[=\sum_{i=1}^{N}\int_{\mathbb{R}}F(q)\pi_{i}(dq),\] for a function \(F:\mathbb{R}\to\mathbb{R}\), assuming all per-coordinate expectations exist. The minimax Bayes risk over \(\Pi_{\epsilon}\) is defined as \[\rho^{*}(\Pi_{\epsilon}):=\inf_{\psi}\sup_{\pi^{N}\in\Pi_{\epsilon}}\rho(\pi^{ N},\psi),\] and by the minimax theorem \[\rho^{*}(\Pi_{\epsilon})=\sup_{\pi^{N}\in\Pi_{\epsilon}}\inf_{\psi}\rho(\pi^{ N},\psi). \tag{6}\] A prior \(\pi^{*N}\in\Pi_{\epsilon}\) attaining the supremum above, if exists, is called _least favorable_. Arguing as in [20], we have \[R_{\epsilon}^{*}=\rho^{*}(\Pi_{\epsilon})+o(1), \tag{7}\] where \(o(1)\to 0\) as \(\epsilon\to 0\). As an example of an interesting set of priors, consider the set \(\Pi_{\epsilon}^{3}\) consisting of the product of three-point (or two points if \(\eta=1\)) priors that are symmetric around \(U_{i}=1/N\): \[\pi_{i}(\eta,\mu)=(1-\eta)\delta_{U_{i}}+\frac{\eta}{2}\delta_{U_{i}+\mu}+ \frac{\eta}{2}\delta_{U_{i}-\mu}, \tag{8}\] for \(i=1,\ldots,N\). Here \((\eta,\mu)\in\mathbb{R}_{+}^{2}\) satisfy the constraints \[\mathbb{E}_{Q\sim\pi^{N}}\left[\|Q-U\|_{p}^{p}\right]=\eta\mu^{p}\geq \epsilon^{p}/N\text{ and }|\mu|\leq\xi/N.\] One of our key results (Theorem 2) says that \(\Pi_{\epsilon}^{3}\) contains the (unique) least favorable prior \(\pi^{*N}\) within \(\Pi_{\epsilon}\), hence \[\rho^{*}(\Pi_{\epsilon}^{3})=\rho^{*}(\Pi_{\epsilon})=R^{*}(P,\epsilon)+o(1).\] In the next section, we describe a general testing procedure for the problem (2) and characterize its asymptotic risk under a prior \(\pi^{N}\in\Pi_{\epsilon}\). ### _Asymptotic Bayes Risk_ Let \(\mathrm{P}_{\lambda}(m)\) be the Poisson probability mass function (pmf): \[\mathrm{P}_{\lambda}(m):=\Pr\left[\mathrm{Pois}(\lambda)=m\right]=e^{-\lambda }\frac{\lambda^{m}}{m!}.\] Set \(\lambda:=\lambda_{n,N}:=nU_{i}=n/N\). 
For \(m=0,1,\ldots\), \(X_{m}\) is a binomial random variable with mean \[\mu_{m}^{0}:=\mathbb{E}\left[X_{m}\right]=\sum_{i=1}^{N}\mathrm{P}_{nU_{i}}(m )=N\cdot\mathrm{P}_{\lambda}(m).\] The covariance of \(X_{m}\) and \(X_{k}\) is given by \[\mathrm{Cov}[X_{m},X_{k}]=N\begin{cases}\mathrm{P}_{\lambda}(m)(1-\mathrm{P}_{ \lambda}(m))&m=k\\ -\mathrm{P}_{\lambda}(m)\mathrm{P}_{\lambda}(k)&m\neq k.\end{cases}\] Henceforth, we write the covariance kernel using the infinite matrix \(\Sigma\) such that \(\Sigma_{m+1,k+1}=\mathrm{Cov}[X_{m},X_{k}]\). Note that \(\Sigma=\mathrm{diag}(\mu^{0})-\mu^{0}\mu^{0^{\top}}/N\) is singular because \(\Sigma\mathbf{1}=0\), where \(\mathbf{1}=(1,1,\ldots)\). We consider test statistics of the form \[T(w)=\langle w,X\rangle=\sum_{m=0}^{\infty}X_{m}w_{m} \tag{9}\] for some weights vector (kernel) \(w\). Note that the expression in (9) is well-defined because with probability one there exists \(m_{0}\) such that \(X_{m}=0\) for all \(m\geq m_{0}\). Consequently, without loss of generality, we assume that \(w\) has finite support. We also exclude the case \(w=0\). Note that \[T(w)=\sum_{i=1}^{N}A_{i},\qquad A_{i}:=w_{O_{i}},\] and that \(\{w_{O_{i}}\}_{i=1}^{N}\) are iid with finite mean and variance under the null. Consequently, under the null \(T\) is asymptotically normal with mean and variance \[\mathbb{E}\left[T(w)\right] =\sum_{m=0}^{\infty}w_{m}\mu_{m}^{0}=\langle w,\mu^{0}\rangle\] \[\mathrm{Var}\left[T(w)\right] =\langle w,\Sigma w\rangle.\] It follows that a test asymptotically of size \(\alpha\) against \(H_{0}\) can be obtained by rejecting when \[\frac{T-\langle w,\mu^{0}\rangle}{\sqrt{\langle w,\Sigma w\rangle}}>z^{1- \alpha},\qquad\bar{\Phi}(z^{1-\alpha})=\alpha. \tag{10}\] Next, we characterize the kernel \(w\) that maximizes the power of this test under an alternative \(\pi^{N}\in\Pi_{\epsilon}\). ### _Bayes Optimal Test_ Under a mean shift alternative \(\mu^{1}\) for \(X\), the vector \(w\) that maximizes the power of (10) is provided via standard linear discriminant analysis [33] by \[w^{*}=\Sigma^{\dagger}(\mu^{1}-\mu^{0}), \tag{11}\] where \[\Sigma^{\dagger}:=\frac{1}{N}(\mathrm{diag}(\mu^{0}/N)^{-1}-\mathbf{1}\mathbf{ 1}^{\top})\] is the Moore-Penrose pseudoinverse of \(\Sigma\). In what follows, we characterize the mean shift in \(X\) and \(T(w)\) due to a prior \(\pi^{N}\in\Pi_{\epsilon}\) on \(Q\) and argue that the variance remains asymptotically unchanged. For \(x\in\mathbb{R}\), \(\lambda>0\), and \(m=0,1,\ldots\), define \[h_{m,\lambda}(x):=e^{-x}(1+\frac{x}{\lambda})^{m}-1,\] and \[\Delta_{m}(\pi^{N}):= \mathrm{P}_{\lambda}(m)\sum_{i=1}^{N}\int_{\mathbb{R}}h_{m, \lambda}(n(t-U_{i}))\pi_{i}(dt).\] For a random \(Q\sim\pi^{N}\), the mean of \(X_{m}\) is given by \[\mu_{m}^{1} :=\mathbb{E}_{Q\sim\pi^{N}}\left[\sum_{i=1}^{N}\mathrm{P}_{nQ_{i} }(m)\right]=\sum_{i=1}^{N}\int_{\mathbb{R}}\mathrm{P}_{nt}(m)\pi_{i}(dt)\] \[=\mu_{m}^{0}+\mathrm{P}_{\lambda}(m)\sum_{i=1}^{N}\int_{\mathbb{ R}}h_{m,\lambda}(n(t-U_{i}))\pi_{i}(dt)\] \[=\mu_{m}^{0}+\Delta_{m}(\pi^{N}) \tag{12}\] Therefore, we may think about \(h_{m,\lambda}(nt)\) as the relative contribution of a perturbation of \(U_{i}\) by \(t\) to the expected difference in \(X_{m}\), and on \(\Delta_{m}(\pi^{N})\) as the overall expected difference in \(X_{m}\) as a result of the perturbations of the sequence \(U\) describe by \(\pi^{N}\). 
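To fix ideas, the following minimal sketch (ours, not from the paper; the variable names and example values are our own) simulates the occurrences, computes the histogram ordinates \(X_{m}\), the null means \(\mu_{m}^{0}=N\cdot\mathrm{P}_{\lambda}(m)\), and the shift terms \(\Delta_{m}(\pi^{N})\) for a symmetric two-point prior that perturbs each coordinate of \(U\) by \(\pm\mu\):

```python
import numpy as np
from scipy.stats import poisson

def fingerprint(occurrences, m_max):
    """Histogram ordinates X_m = #{i : O_i = m}, m = 0, ..., m_max."""
    return np.array([(occurrences == m).sum() for m in range(m_max + 1)])

def h(m, lam, x):
    """h_{m,lambda}(x) = exp(-x) * (1 + x/lambda)^m - 1."""
    return np.exp(-x) * (1.0 + x / lam) ** m - 1.0

def delta_two_point(n, N, mu, m_max):
    """Delta_m(pi) for the symmetric two-point prior pi_i = (delta_{1/N+mu} + delta_{1/N-mu}) / 2."""
    lam = n / N
    m = np.arange(m_max + 1)
    g = 0.5 * (h(m, lam, n * mu) + h(m, lam, -n * mu))   # symmetrized h, i.e. g_{m,lambda}
    return N * poisson.pmf(m, lam) * g

# Illustrative parameters (ours): n expected samples over N categories, perturbation mu = eps/N (p = 1).
n, N, eps, m_max = 20_000, 100_000, 0.2, 8
rng = np.random.default_rng(0)
O = rng.poisson(n * np.full(N, 1.0 / N))                   # occurrences O_1, ..., O_N under the null
X = fingerprint(O, m_max)                                  # observed histogram ordinates
mu0 = N * poisson.pmf(np.arange(m_max + 1), n / N)         # null means of X_m
print(X[:4], np.round(mu0[:4], 1), np.round(delta_two_point(n, N, eps / N, m_max)[:4], 2))
```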
Denote by \(w(\pi^{N})\) the vector \[w(\pi^{N}):=\Sigma^{\dagger}\Delta(\pi^{N}) \tag{13}\] The following theorem characterizes the power and the Bayes risk of the optimal test for testing \(H_{0}\) versus a simple alternative defined by a prior \(\pi^{N}\) on \(Q\) that is close enough to the null. **Theorem 1**.: _Consider an asymptotic setting in which \(\xi\to 0\) as \(n\to\infty\) while \(n/N=O(1)\). Let \(\pi^{N}=\prod_{i=1}^{N}\pi_{i}\) be any product measure on \(\mathbb{R}^{N}\) that satisfy_ \[\int_{\mathbb{R}}h_{m,\lambda}(nt)\pi_{i}(dt)=O(\xi),\quad\text{as}\quad\xi \to 0, \tag{14}\] _for all \(m=0,1,\ldots\), \(\lambda>0\), and \(i=1,\ldots,N\). Define_ \[\|\pi^{N}\|^{2}:=\langle w(\pi^{N}),\Sigma w(\pi^{N})\rangle=\langle\Delta(\pi^ {N}),\Sigma^{\dagger}\Delta(\pi^{N})\rangle.\] * _The test_ \(\psi(\alpha,\pi^{N})\) _that rejects_ \(H_{0}\) _when_ \[\frac{\langle w(\pi^{N}),X-\mu^{0}\rangle}{\|\pi^{N}\|}\geq z^{1-\alpha}\] _is asymptotically of size_ \(\alpha\) _and power_ \(1-\beta(\pi^{N})\)_, for_ \[\beta(\pi^{N})=\bar{\Phi}\left(\|\pi^{N}\|-z^{1-\alpha}\right).\] _Consequently,_ \[\rho(\pi^{N},\psi(\alpha,\pi^{N}))=\alpha+\beta(\pi^{N})+o(1).\] 2. _Let_ \(\psi^{*}(\pi^{N})\) _be the test that rejects_ \(H_{0}\) _when_ \[\frac{\langle w(\pi^{N}),X-\mu^{0}\rangle}{\|\pi^{N}\|}\geq\frac{\|\pi^{N}\|}{2}.\] (15) _Then,_ \[\rho(\pi,\psi^{*}(\pi^{N}))=2\bar{\Phi}(\|\pi^{N}\|/2)+o(1).\] An immediate corollary of Theorem 1 is that over all tests based on statistics of the form (9), the test of minimal asymptotic risk is provided by (15) and the least favorable prior has minimal \(\|\pi^{N}\|\). ### _The Minimax Test_ Since priors in \(\Pi_{\epsilon}\) satisfy assumption (14) (see the proof of Theorem 2 below), the least favorable prior in \(\Pi_{\epsilon}\) corresponds to the solution of the optimization problem \[\begin{split}\text{minimize:}&\|\pi^{N}\|\\ \text{subject to:}&\pi\in\Pi_{\epsilon},\end{split} \tag{16}\] which is analogous to the optimization problems studied in [20, Part II] and [22]. A solution is provided in the following Theorem. The proof is in Section B. **Theorem 2**.: _Consider an asymptotic setting in which \(n\to\infty\), \(\lambda=n/N\leq M\), \(\epsilon<\xi N^{-1+1/p}\), and \(\xi\to 0\). Let \(\pi^{*N}\) be the product prior of_ \[\pi_{i}^{*}=\frac{1}{2}\delta_{(1+\epsilon N^{1-1/p})/N}+\frac{1}{2}\delta_{( 1-\epsilon N^{1-1/p})/N},\quad i=1,\ldots,N. \tag{17}\] _Then_ \[\rho(\pi^{*N},\psi^{*})=\rho^{*}(\Pi_{\epsilon}),\qquad\psi^{*}:=\psi^{*}(\pi ^{*N}).\] Theorem 2 says that the least favorable prior in \(\Pi_{\epsilon}\) is unique and of the form (8) with \(\eta=1\) and \(\mu=\epsilon N^{-1/p}\). Consequently, the minimax test is obtained from (15) by using the kernel \(w(\pi^{*N})=\Sigma^{\dagger}\Delta(\pi^{*N})\), where the \(m\)-th coordinate of \(\Delta(\pi^{*N})\) given by \[\Delta_{m}(\pi^{*N})=N\mathrm{P}_{\lambda}(m)g_{m,\lambda}(n\epsilon N^{-1/p}),\] where \[g_{m,\lambda}(x):=\frac{h_{m,\lambda}(x)+h_{m,\lambda}(-x)}{2}.\] Notice that \(\sum_{m=0}^{\infty}\Delta_{m}(\pi^{*N})=0\) Namely, the mean of \(X_{m}\) under the least favorable prior equals the mean of \(X_{m}\) under the null. We conclude: **Corollary 2.1**.: _Let \(\pi^{*N}\) be the least favorable prior of (17). 
Then_ \[w(\pi^{*N})=(\mathrm{diag}(\mu^{0}))^{-1}\Delta(\pi^{*N}) \tag{18}\] _and_ \[\|\pi^{*N}\|^{2} =\Delta^{\top}(\pi^{*N})\frac{1}{N}\left(\mathrm{diag}(N/\mu^{0} )\right)\Delta(\pi^{*N})\] \[=N\sum_{m=0}^{\infty}\mathrm{P}_{\lambda}(m)g_{m,\lambda}^{2}(n \epsilon N^{-1/p}). \tag{19}\] Combining (7) with Theorems 1 and 2, we obtain an exact characterization of the asymptotic minimax risk. **Corollary 2.2**.: _Under the asymptotic setting of Theorem 2,_ \[\frac{R_{\epsilon}^{*}}{2\bar{\Phi}(\|\pi^{*N}\|/2)}\to 1, \tag{20}\] _where \(\|\pi^{*N}\|^{2}\) is given in (19)._ In the next section, we study the properties of this risk as \(\lambda\to 0\). ## III The minimax risk in the small sample limit We now focus on the small sample limit \(\lambda=n/N\to 0\). We use \[g_{m,\lambda}(\lambda\epsilon N^{1-1/p}) =\frac{\epsilon^{2}N^{2-2/p}}{2}\left[m(m-1)-2m\lambda\right. \tag{21}\] \[\left.+\lambda^{2}+o(\lambda^{2})\right]+o(\epsilon^{2}),\] and \[\mathrm{P}_{\lambda}(m)=o(\lambda^{2})+\begin{cases}1-\lambda+\lambda^{2}/2&m= 0,\\ \lambda-\lambda^{2}&m=1,\\ \lambda^{2}/2&m=2,\\ 0&m\geq 3.\end{cases} \tag{22}\] to conclude that \[\|\pi^{*N}\|^{2}=\frac{N^{1+4(1-1/p)}\epsilon^{4}\lambda^{2}}{2}+o(\lambda^{2 }). \tag{23}\] We obtain: **Corollary 2.3**.: _Under the asymptotic setting of Theorem 2 and \(n/N\to 0\),_ \[R_{\epsilon}^{*}=2\bar{\Phi}\left(\frac{\epsilon^{2}n}{\sqrt{8N}}N^{2-2/p} \right)+o(1).\] In particular, vanishing minimax risk requires \[\frac{n\epsilon^{2}}{\sqrt{N}}N^{2-2/p}\to\infty,\quad\text{as}\quad\epsilon \to 0,\quad n,N\to\infty. \tag{24}\] For \(p=1\), this condition coincides with the condition for vanishing minimax risk in a related multinomial sampling model derived in [9]. The test used in [9] only considers collisions, i.e. the statistics \(X_{2}\). As it turns out, the collision-based test attains the minimax risk only in the limit \(\lambda\to 0\). Figure 1 demonstrates this point by illustrating the weights of the minimax test and their expected contribution to the minimax test statistic for several values of \(\lambda<1\). ## IV Empirical Comparisons In Figure 2, we consider the case \(p=1\) and compare the asymptotic minimax risk \(R_{\epsilon}^{*}\) obtained from (20) to a Monte-Carlo simulated empirical risk under the prior \(\pi^{*N}\) of (17) and under the null. In each configuration, we evaluate the empirical risk by averaging the Type I error rate over \(10,000\) independent trials under the null and the Type II error rate over \(10,000\) independent trials under the alternative. We also consider two tests that are sub-optimal in the minimax sense: (1) a test that uses collisions via the statistic \[T_{\text{col}}:=\sum_{m=2}^{\infty}X_{m}, \tag{25}\] and a test that is based on the chisquared statistic \[T_{\chi^{2}}:=\sum_{i=1}^{N}\frac{(O_{i}-nU_{i})^{2}}{nU_{i}}=\sum_{m=0}^{ \infty}\frac{(m-\lambda)^{2}}{\lambda}X_{m}. \tag{26}\] For both of these tests, we choose the threshold above which we reject in a way that minimizes the _empirical risk_ over every pair of trials. Therefore, the reported risk of these tests is overly optimistic compared to the practical situation in which the threshold is selected independently of the data (the reported risk for these tests converges to their expected optimized risk from below as we increase the number of trials in each configuration). 
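For completeness, here is a short sketch (ours; the parameter values are illustrative and do not reproduce the exact simulation grid or trial counts of Figure 2) of how the asymptotic risk estimate \(2\bar{\Phi}(\|\pi^{*N}\|/2)\) from (19)-(20) and the two baseline statistics (25)-(26) can be evaluated:

```python
import numpy as np
from scipy.stats import norm, poisson

def minimax_risk(n, N, eps, p=1.0, m_max=50):
    """Asymptotic minimax risk 2 * Phi_bar(||pi*||/2), with ||pi*||^2 as in (19)."""
    lam = n / N
    m = np.arange(m_max + 1)
    x = n * eps * N ** (-1.0 / p)                       # perturbation scale n * eps * N^(-1/p)
    h = lambda s: np.exp(-s) * (1.0 + s / lam) ** m - 1.0
    g = 0.5 * (h(x) + h(-x))                            # g_{m,lambda}
    norm2 = N * np.sum(poisson.pmf(m, lam) * g ** 2)
    return 2.0 * norm.sf(np.sqrt(norm2) / 2.0)

def collision_stat(X):
    """T_col = sum_{m >= 2} X_m, as in (25)."""
    return X[2:].sum()

def chisq_stat(X, lam):
    """T_chi2 = sum_m ((m - lam)^2 / lam) * X_m, as in (26)."""
    m = np.arange(len(X))
    return np.sum((m - lam) ** 2 / lam * X)

# Illustrative call (values are ours).
print(minimax_risk(n=25_000, N=100_000, eps=0.2, p=1.0))
```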
## V Additional Discussion ### _Relation to Multinomial Sampling_ Consider testing the fit of the data to \(\bar{n}\) draws from a multinomial distribution with equal class frequencies, where \[\bar{n}:=\sum_{i=1}^{N}O_{i}=\sum_{m=0}^{\infty}m\cdot X_{m}.\] This problem was studied in [34, 9], and arises from our setting by conditioning on the value of \(\bar{n}\) and considering the set of alternatives obtained by the intersection of \(V_{\epsilon}\) with the unit sphere in \(\mathbb{R}^{N}\). This last restriction on the set of alternatives does not affect the asymptotic minimax risk because the support of the least favorable prior concentrates on this sphere due to the law of large numbers. On the other hand, when \(n\ll N\), the removal of one degree of freedom when going from the multinomial case to the Poisson case may be significant, and so is the gain of a test that can adjust to it [35]. ### _Extensions to Non-uniform Sequences and the Removal of Non-Ball Shapes_ The similarity between our analysis and those of [4, 22] suggests a generalization of our setting to the removal of other geometric shapes like ellipsoids and Besov bodies, and to the consideration of possibly inhomogeneous multivariate product Poisson distributions. Such extensions necessarily lead to multi-dimensional versions of the optimization problem studied in the proof of Theorem (2), in which the parameter is a pair of positive sequences \((\eta)\) and \((\mu)\) rather than a pair of positive numbers. Fig. 1: Weights of the minimax kernel and their expected contribution to the minimax test statistic. Left: normalized weights \(\bar{w}^{*}:=w(\pi^{*N})/\|w(\pi^{*N})\|\); \(\bar{w}_{0}^{*}\) is the weight for missing categories, \(\bar{w}_{1}^{*}\) for singletons, \(\bar{w}_{2}^{*}\) for collisions or coincidences, etc. Right: normalized expected value \(w_{m}(\pi^{*N})\Delta_{m}(\pi^{*N})\), \(m=0,\ldots,7\). Different colors correspond to different values of \(\lambda=n/N\). Here \(n=10,000\) and \(\epsilon=0.1\) are fixed. For \(\lambda\to 0\), the minimax test relies only on the collision statistic \(X_{2}\). Another important extension is attained by replacing the \(\ell_{\infty}\) constraint on \(V_{\epsilon}\) with an \(\ell_{q}\) constraint for any \(q>0\). For example, the case \(q\ll 1\) is related to sparse alternatives as considered in [27] and [26]. ### _Information-Theoretic Optimality_ Our analysis leaves unresolved the characterization of the minimax risk in the general setting that does not necessarily rely on the counts' histogram. We conjecture that this risk coincides with \(R_{\epsilon}^{*}\) under the assumptions \(n\to\infty\), \(\epsilon\to 0\), \(\lambda=O(1)\) we considered. This conjecture is based on the intuition that under these assumptions the departures from the null in individual categories are on the small deviation scale, hence there seems to be no loss of signal in considering \(X_{0},X_{1},\ldots\) compared to the counts \(O_{1},\ldots,O_{N}\). This situation is in contrast to the larger but very rare departures considered in [26, 36, 37].

2302.05681
An EPTAS for Budgeted Matching and Budgeted Matroid Intersection
We study the budgeted versions of the well known matching and matroid intersection problems. While both problems admit a polynomial-time approximation scheme (PTAS) [Berger et al. (Math. Programming, 2011), Chekuri, Vondrak and Zenklusen (SODA 2011)], it has been an intriguing open question whether these problems admit a fully PTAS (FPTAS), or even an efficient PTAS (EPTAS). In this paper we answer the second part of this question affirmatively, by presenting an EPTAS for budgeted matching and budgeted matroid intersection. A main component of our scheme is a novel construction of representative sets for desired solutions, whose cardinality depends only on $\varepsilon$, the accuracy parameter. Thus, enumerating over solutions within a representative set leads to an EPTAS. This crucially distinguishes our algorithms from previous approaches, which rely on exhaustive enumeration over the solution set. Our ideas for constructing representative sets may find use in tackling other budgeted optimization problems, and are thus of independent interest.
Ilan Doron-Arad, Ariel Kulik, Hadas Shachnai
2023-02-11T12:28:57Z
http://arxiv.org/abs/2302.05681v1
# An EPTAS for Budgeted Matching and Budgeted Matroid Intersection via Representative Sets ###### Abstract We study the budgeted versions of the well known matching and matroid intersection problems. While both problems admit a _polynomial-time approximation scheme (PTAS)_[Berger et al. (Math. Programming, 2011), Chekuri, Vondrak and Zenklusen (SODA 2011)], it has been an intriguing open question whether these problems admit a _fully_ PTAS (FPTAS), or even an _efficient_ PTAS (EPTAS). In this paper we answer the second part of this question affirmatively, by presenting an EPTAS for budgeted matching and budgeted matroid intersection. A main component of our scheme is a novel construction of _representative sets_ for desired solutions, whose cardinality depends only on \(\varepsilon\), the accuracy parameter. Thus, enumerating over solutions within a representative set leads to an EPTAS. This crucially distinguishes our algorithms from previous approaches, which rely on _exhaustive_ enumeration over the solution set. Our ideas for constructing representative sets may find use in tackling other budgeted optimization problems, and are thus of independent interest. budgeted matching, budgeted matroid intersection, efficient polynomial-time approximation scheme. 10.4230/LIPIcs...23 1 ## 1 Introduction A wide range of NP-hard combinatorial optimization problems can be formulated as follows. We are given a ground set \(E\) and a family \(\mathcal{M}\) of subsets of \(E\) called the _feasible sets_. The elements in the ground set are associated with a cost function \(c:E\to\mathbb{R}_{\geq 0}\) and a profit function \(p:E\to\mathbb{R}\), and we are also given a budget \(\beta\in\mathbb{R}_{\geq 0}\). A _solution_ is a feasible set \(S\in\mathcal{M}\) of bounded cost \(c(S)\leq\beta\).1 Generally, the goal is to find a solution \(S\) of maximum profit, that is: Footnote 1: For a function \(f:A\to\mathbb{R}\) and a subset of elements \(C\subseteq A\), we define \(f(C)=\sum_{e\in C}f(e)\). \[\max p(S)\text{ s.t. }S\in\mathcal{M},c(S)\leq\beta. \tag{1}\] Notable examples include shortest weight-constrained path [6], constrained minimum spanning trees [15], and knapsack with a conflict graph [14]. In this work, we focus on two prominent problems which can be formulated as (1). In the _budgeted matching (BM)_ problem we are given an undirected graph \(G=(V,E)\), profit and cost functions on the edges \(p,c:E\to\mathbb{R}_{\geq 0}\), and a budget \(\beta\in\mathbb{R}_{\geq 0}\). A _solution_ is a _matching_\(S\subseteq E\) in \(G\) such that \(c(S)\leq\beta\). The goal is to find a solution \(S\) such that the total profit \(p(S)\) is maximized. Observe that BM can be formulated using (1), by letting \(\mathcal{M}\) be the set of matchings in \(G\). In the _budgeted matroid intersection (BI)_ problem we are given two matroids \((E,\mathcal{I}_{1})\) and \((E,\mathcal{I}_{2})\) over a ground set \(E\), profit and cost functions on the elements \(p,c:E\to\mathbb{R}_{\geq 0}\), and a budget \(\beta\in\mathbb{R}_{\geq 0}\). Each matroid is given by a membership oracle. A _solution_ is a _common independent set_\(S\in\mathcal{I}_{1}\cap\mathcal{I}_{2}\) such that \(c(S)\leq\beta\); the goal is to find a solution \(S\) of maximum total profit \(p(S)\). The formulation of BI as (1) follows by defining the feasible sets as all common independent sets \(\mathcal{M}=\mathcal{I}_{1}\cap\mathcal{I}_{2}\). 
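To make formulation (1) concrete for BM, the following brute-force sketch (ours; it takes exponential time and is intended only to pin down the feasibility notion, not as an algorithm) enumerates the matchings within budget on a toy instance and returns the most profitable one:

```python
from itertools import combinations

def is_matching(edges):
    """A set of edges is a matching if no vertex appears in two of its edges."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def brute_force_bm(E, profit, cost, beta):
    """max p(S) s.t. S is a matching and c(S) <= beta -- formulation (1) for BM."""
    best, best_profit = (), 0.0
    for k in range(len(E) + 1):
        for S in combinations(E, k):
            if is_matching(S) and sum(cost[e] for e in S) <= beta:
                p = sum(profit[e] for e in S)
                if p > best_profit:
                    best, best_profit = S, p
    return best, best_profit

# Toy instance (values are ours): a 4-cycle.
E = [(1, 2), (2, 3), (3, 4), (1, 4)]
profit = {(1, 2): 5, (2, 3): 4, (3, 4): 5, (1, 4): 1}
cost = {(1, 2): 3, (2, 3): 1, (3, 4): 3, (1, 4): 1}
print(brute_force_bm(E, profit, cost, beta=4))   # best profit is 5 under budget 4 here
```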
Let \(\mathrm{OPT}(I)\) be the value of an optimal solution for an instance \(I\) of a maximization problem \(\Pi\). For \(\alpha\in(0,1]\), we say that \(\mathcal{A}\) is an \(\alpha\)-approximation algorithm for \(\Pi\) if, for any instance \(I\) of \(\Pi\), \(\mathcal{A}\) outputs a solution of value at least \(\alpha\cdot\mathrm{OPT}(I)\). A _polynomial-time approximation scheme_ (PTAS) for \(\Pi\) is a family of algorithms \((A_{\varepsilon})_{\varepsilon>0}\) such that, for any \(\varepsilon>0\), \(A_{\varepsilon}\) is a polynomial-time \((1-\varepsilon)\)-approximation algorithm for \(\Pi\). As \(\varepsilon\) gets smaller, a running time of the form \(n^{\Theta\left(\frac{1}{\varepsilon}\right)}\) for a PTAS may become prohibitively large and thus impractical; therefore, it is natural to seek approximation schemes with better running times. Two families of such schemes have been extensively studied: an _efficient PTAS_ (EPTAS) is a PTAS \((A_{\varepsilon})_{\varepsilon>0}\) whose running time is of the form \(f\left(\frac{1}{\varepsilon}\right)\cdot n^{O(1)}\), where \(f\) is an arbitrary computable function, and \(n\) is the bit-length encoding size of the input instance. In a _fully PTAS_ (FPTAS) the running time of \(A_{\varepsilon}\) is of the form \(\left(\frac{n}{\varepsilon}\right)^{O(1)}\). For comprehensive surveys on approximation schemes see, e.g., [17, 7]. The state of the art for BM and BI is a PTAS developed by Berger et al. [1]. Similar results for both problems follow from a later work of Chekuri et al. [3] for the multi-budgeted variants of BM and BI. The running times of the above schemes are dominated by exhaustive enumeration which finds a set of \(\Theta\left(\frac{1}{\varepsilon}\right)\) elements of highest profits in the solution. In this paper we optimize the enumeration procedure using a novel approach, which enables to substantially reduce the size of the domain over which we seek an efficient solution. Our main results are the following. There is an _EPTAS_ for the budgeted matching problem. There is an _EPTAS_ for the budgeted matroid intersection problem. ### Related Work BM and BI are immediate generalizations of the classic \(0/1\)-knapsack problem. While the knapsack problem is known to be NP-hard, it admits an FPTAS. This raises a natural question whether BM and BI admit an FPTAS as well. The papers [1, 3] along with our results can be viewed as first steps towards answering this question. Berger et al. [1] developed the first PTAS for BM and BI. Their approach includes an elegant combinatorial algorithm for _patching_ two solutions for the _Lagrangian relaxation_ of the underlying problem (i.e., BM or BI); one solution is feasible but has small profit, while the other solution has high profit but is infeasible. The scheme of [1] enumerates over solutions containing only high profit elements and uses the combinatorial algorithm to add low profit elements. This process may result in losing (twice) the profit of a low profit element, leading to a PTAS. Chekuri et al. [3] developed a PTAS for multi-budgeted matching and a randomized PTAS for multi-budgeted matroid intersection; these are variants of BM and BI, respectively, in which the costs are \(d\)-dimensional, for some constant \(d\geq 2\). They incorporate a non-trivial martingale based analysis to derive the results, along with enumeration to facilitate the selection of profitable elements for the solution. 
The paper [3] generalizes a previous result of Grandoni and Zenklusen [8], who obtained a PTAS for multi-budgeted matching and multi-budgeted matroid intersection in _representable matroids_.2 For \(d\geq 2\), the multi-budgeted variants of BM and BI generalize the two-dimensional knapsack problem, and thus do not admit an EPTAS unless W[1] = FPT [10]. Footnote 2: Representable matroids are also known as _linear matroids_. An evidence for the difficulty of attaining an FPTAS for BM comes from the _exact_ variant of the problem. In this setting, we are given a graph \(G=(V,E)\), a cost function \(c:E\to\mathbb{R}_{\geq 0}\), and a _target_\(B\in\mathbb{R}_{\geq 0}\); the goal is to find a perfect matching \(S\subseteq E\) with exact specified cost \(c(S)=B\). There is a randomized pseudo-polynomial time algorithm for exact matching [12]. On the other hand, it is a long standing open question whether exact matching admits a deterministic pseudo-polynomial time algorithm [13]. Interestingly, as noted by Berger et al. [1], an FPTAS for BM would give an affirmative answer also for the latter question. An FPTAS for BI would have similar implications for the _exact_ matroid intersection problem, which admits a randomized (but not a deterministic) pseudo-polynomial time algorithm [2]. While the above does not rule out the existence of an FPTAS for BM or BI, it indicates that improving our results from EPTAS to FPTAS might be a difficult task. For the budgeted matroid independent set (i.e., the special case of BI of two identical matroids), Doron-Arad et al. [5] developed an EPTAS using _representative sets_ to enhance enumeration over elements of high profits.3 Their scheme exploits integrality properties of matroid polytopes under budget constraints (introduced in [8]) to efficiently combine elements of low profit into the solution. Footnote 3: We elaborate below on the framework of [5] vs. our notion of representative sets. ### Contribution and Techniques Given an instance \(I\) of BM or BI, we say that an element \(e\) is _profitable_ if \(p(e)>\varepsilon\cdot\operatorname{OPT}(I)\); otherwise, \(e\) is _non-profitable_. The scheme for BM and BI of Berger et al. [1] distinguishes between profitable and non-profitable elements. In the main loop, the algorithm enumerates over all potential solutions containing only profitable elements.4 Each solution is extended to include non-profitable elements using a combinatorial algorithm. The algorithm outputs a solution of highest profit. Overall, this process may lose at most twice the profit of a non-profitable element in comparison to the optimum, which effectively preserves the approximation guarantee; however, an exhaustive enumeration over the profitable elements renders the running time \(n^{\Omega\left(\frac{1}{2}\right)}\). In stark contrast, in this paper we introduce a new approach which enhances the enumeration over profitable elements, leading to an EPTAS. Footnote 4: A similar technique is used also by Chekuri et al. [3]. We restrict the enumeration to only a small subset of elements called _representative set_; that is, a subset of elements \(R\subseteq E\) satisfying the following property: there is a solution \(S\) such that the profitable elements in \(S\) are a subset of \(R\), and the profit of \(S\) is at least \((1-O(\varepsilon))\cdot\operatorname{OPT}(I)\). 
If one finds efficiently a representative set \(R\) of cardinality \(|R|\leq f\left(\frac{1}{\varepsilon}\right)\) for some computable function \(f\), obtaining an EPTAS is straightforward based on the approach of [1]. Our scheme generalizes the _representative set_ framework of Doron-Arad et al. [5], developed originally for budgeted matroid independent set. They construct a representative set as a basis of minimum cost of a matroid, which can be implemented using a greedy algorithm. Alas, a greedy analogue for the setting of matching and matroid intersection fails; we give an example in Figure 1.5 Hence, we take a different approach. Our main technical contribution is in the novel construction of representative sets for each of our problems. For BM we design a surprisingly simple algorithm which finds a representative set using a union of multiple matchings. To this end, we partition the edges in \(G\) into _profit classes_ such that each profit class contains edges of _similar_ profits. We then use the greedy approach to repeatedly find in each profit class a union of disjoint matchings, where each matching has a bounded cardinality and is greedily selected to minimize cost. Intuitively, to show that the above yields a representative set, consider a profitable edge \(e\) in some optimal solution. Suppose that \(e\) is not chosen to our union of matchings, then we consider two cases. If each matching selected in the profit class of \(e\) contains an edge that is adjacent to (i.e., shares a vertex with) \(e\), we show that at least one of these edges can be exchanged with \(e\); otherwise, there exists a matching with no edge adjacent to \(e\). In this case, we show that our greedy selection guarantees the existence of an edge in this matching which can be exchanged with \(e\), implying the above is a representative set (see the details in Section 4). For BI, we design a recursive algorithm that relies on an _asymmetric interpretation_ of the two given matroids. In each recursive call, we are given an independent set \(S\in\mathcal{I}_{1}\). The algorithm adds to the constructed representative set a minimum cost basis \(B_{S}\) of the second matroid \((E,\mathcal{I}_{2})\), with the crucial restriction that any element \(e\in B_{S}\) must satisfy \(S\cup\{e\}\in\mathcal{I}_{1}\). Succeeding recursive calls will then use the set \(S\cup\{e\}\), for every \(e\in B_{S}\). Thus, we limit the search space to \(\mathcal{I}_{1}\), while bases are constructed w.r.t. \(\mathcal{I}_{2}\). To show that the algorithm yields a representative set, consider a profitable element \(f\) in an optimal solution. We construct a sequence of elements which are independent w.r.t. \(\mathcal{I}_{1}\) and can be exchanged with \(f\) w.r.t. \(\mathcal{I}_{2}\). Using matroid properties we show that one of these elements can be exchanged with \(f\) w.r.t. both matroids (see the details in Section 5). Interestingly, our framework for solving BM and BI (presented in Section 3) can be extended to solve other problems formulated as (1) which possess similar _exchange properties_. We elaborate on that in Section 6. Moreover, our algorithms for constructing representative sets for BM and BI may find use in tackling other budgeted optimization problems (see Section 6), and are thus of independent interest. **Organization of the paper:** In Section 2 we give some definitions and notation. Section 3 presents our framework that yields an EPTAS for each of the problems. 
In Sections 4 and 5 we describe the algorithms for constructing representative sets for BM and BI, respectively. We conclude in Section 6 with a summary and some directions for future work. Due to space constraints, some of the proofs are given in the Appendix. ## 2 Preliminaries For simplicity of the notation, for any set \(A\) and an element \(e\), we use \(A+e\) and \(A-e\) to denote \(A\cup\{e\}\) and \(A\setminus\{e\}\), respectively. Also, for any \(k\in\mathbb{R}\) let \([k]=\{1,2,\ldots,\lfloor k\rfloor\}\). For a function \(f:A\rightarrow\mathbb{R}_{\geq 0}\) and a subset of elements \(C\subseteq A\), let \(f|_{C}:C\rightarrow\mathbb{R}_{\geq 0}\) be the _restriction_ of \(f\) to \(C\), such that \(\forall e\in C:f|_{C}(e)=f(e)\). ### Matching and Matroids Given an undirected graph \(G=(V,E)\), a _matching_ of \(G\) is a subset of edges \(M\subseteq E\) such that each vertex appears as an endpoint in at most one edge in \(M\), i.e., for all \(v\in V\) it holds that \(|\{\{u,v\}\in M\ |\ u\in V\}|\leq 1\). We denote by \(V(M)=\{v\in V\ |\ \exists u\in V\ \text{s.t.}\ \{u,v\}\in M\}\) the set of endpoints of a matching \(M\) of \(G\). Let \(E\) be a finite ground set and \(\mathcal{I}\subseteq 2^{E}\) a non-empty set containing subsets of \(E\) called the _independent sets_ of \(E\). Then \(\mathcal{M}=(E,\mathcal{I})\) is a _matroid_ if the following hold. 1. (Hereditary Property) For all \(A\in\mathcal{I}\) and \(B\subseteq A\), it holds that \(B\in\mathcal{I}\). 2. (Exchange Property) For any \(A,B\in\mathcal{I}\) where \(|A|>|B|\), there is \(e\in A\setminus B\) such that \(B+e\in\mathcal{I}\). A _basis_ of a matroid \(\mathcal{G}=(E,\mathcal{I})\) is an independent set \(B\in\mathcal{I}\) such that for all \(e\in E\setminus B\) it holds that \(B+e\notin\mathcal{I}\). Given a cost function \(c:E\to\mathbb{R}_{\geq 0}\), we say that a basis \(B\) of \(\mathcal{G}\) is a _minimum_ basis of \(\mathcal{G}\) w.r.t. \(c\) if, for any basis \(A\) of \(\mathcal{G}\) it holds that \(c(B)\leq c(A)\). A minimum basis of \(\mathcal{G}\) w.r.t. \(c\) can be easily constructed in polynomial-time using a greedy approach (see, e.g., [4]). In the following we define several matroid operations. Note that the structures resulting from the operations outlined in Definition 3 are matroids. (see, e.g., [16]). **Definition 3**: _Let \(\mathcal{G}=(E,\mathcal{I})\) be a matroid._ 1. _(restriction) For any_ \(F\subseteq E\) _define_ \(\mathcal{I}_{\cap F}=\{A\in\mathcal{I}\ |\ A\subseteq F\}\) _and_ \(\mathcal{G}\cap F=(F,\mathcal{I}_{\cap F})\)_._ 2. _(thinning) For any_ \(F\in\mathcal{I}\) _define_ \(\mathcal{I}/F=\{A\subseteq E\setminus F\ |\ A\cup F\in\mathcal{I}\}\) _and_ \(\mathcal{G}/F=(E\setminus F,\mathcal{I}/F)\)_._6__ Footnote 6: Thinning is generally known as contraction; we use the term thinning to avoid confusion with edge contraction in graphs. 3. _(truncation) For any_ \(q\in\mathbb{N}\) _define_ \(\mathcal{I}_{\leq q}=\{A\in\mathcal{I}\ |\ |A|\leq q\}\) _and_ \([\mathcal{G}]_{\leq q}=(E,\mathcal{I}_{\leq q})\)_._ ### Instance Definition We give a unified definition for instances of budgeted matching and budgeted matroid intersection. Given a ground set \(E\) of elements, we say that \(\mathcal{C}\) is a _constraint_ of \(E\) if one of the following holds. * \(\mathcal{C}=(V,E)\) is a _matching constraint_, where \(\mathcal{C}\) is an undirected graph. Let \(\mathcal{M}(\mathcal{C})=\{M\subseteq E\ |\ M\) is a matching in \(\mathcal{C}\}\) be the _feasible sets_ of \(\mathcal{C}\). 
Given a subset of edges \(F\subseteq E\), let \(E/F=\{\{u,v\}\in E\ |\ u,v\notin V(F)\}\) be the _thinning_ of \(F\) on \(E\), and let \(\mathcal{C}/F=(V,E/F)\) be the _thinning_ of \(F\) on \(\mathcal{C}\). * \(\mathcal{C}=(\mathcal{I}_{1},\mathcal{I}_{2})\) is a _matroid intersection constraint_, where \((E,\mathcal{I}_{1})\) and \((E,\mathcal{I}_{2})\) are matroids. Throughout this paper, we assume that each of the matroids is given by an independence oracle. That is, determining whether some \(F\subseteq E\) belongs to \(\mathcal{I}_{1}\) or to \(\mathcal{I}_{2}\) requires a single call to the corresponding oracle of \(\mathcal{I}_{1}\) or \(\mathcal{I}_{2}\), respectively. Let \(\mathcal{M}(\mathcal{C})=\mathcal{I}_{1}\cap\mathcal{I}_{2}\) be the collection of _feasible sets_ of \(\mathcal{C}\). In addition, given some \(F\subseteq E\), let \(\mathcal{C}/F=(\mathcal{I}_{1}/F,\mathcal{I}_{2}/F)\) be the _thinning_ of \(F\) on \(\mathcal{C}\). We say that \(\mathcal{C}\) is a _single matroid constraint_ if \(\mathcal{I}_{1}=\mathcal{I}_{2}\) When understood from the context, we simply use \(\mathcal{M}=\mathcal{M}(\mathcal{C})\). Define an instance of the _budgeted constrained (BC)_ problem as a tuple \(I=(E,\mathcal{C},c,p,\beta)\), where \(E\) is a ground set of elements, \(\mathcal{C}\) is a constraint of \(E\), \(c:E\to\mathbb{R}_{\geq 0}\) is a cost function, \(p:E\to\mathbb{R}_{\geq 0}\) is a profit function, and \(\beta\in\mathbb{R}_{\geq 0}\) is a budget. If \(\mathcal{C}\) is a matching constraint then \(I\) is a BM instance; otherwise, \(I\) is a BI instance. A _solution_ of \(I\) is a feasible set \(S\in\mathcal{M}(\mathcal{C})\) such that \(c(S)\leq\beta\). The objective is to find a solution \(S\) of \(I\) such that \(p(S)\) is maximized. Let \(|I|\) denote the encoding size of a BC instance \(I\), and \(\operatorname{poly}(|I|)\) be a polynomial size in \(|I|\). ## 3 The Algorithm In this section we present an EPTAS for the BC problem. Our first step is to determine the set of _profitable_ elements in the constructed solution.7 To this end, we generalize the _representative set_ notion of [5] to the setting of BC.8 Our scheme relies on initially finding a set of profitable elements of small cardinality, from which the most profitable elements are selected for the solution using enumeration. Then, _non-profitable_ elements are added to the solution using a techniques of [1]. For the remainder of this section, fix a BC instance \(I=(E,\mathcal{C},c,p,\beta)\) and an error parameter \(0<\varepsilon<\frac{1}{2}\). Let \(H(I,\varepsilon)=\{e\in E\mid p(e)>\varepsilon\cdot\mathrm{OPT}(I)\}\) be the set of _profitable_ elements in \(I\), and \(E\setminus H(I,\varepsilon)\) the set of _non-profitable_ elements; when understood from the context, we use \(H=H(I,\varepsilon)\). Now, a representative set is a subset of elements which contains the profitable elements of an _almost_ optimal solution. Formally, Let \(I=(E,\mathcal{C},c,p,\beta)\) be a \(\mathrm{BC}\) instance, \(0<\varepsilon<\frac{1}{2}\) and \(R\subseteq E\). We say that \(R\) is a _representative set_ of \(I\) and \(\varepsilon\) if there is a solution \(S\) of \(I\) such that the following holds. 1. \(S\cap H\subseteq R\). 2. \(p\left(S\right)\geq(1-4\varepsilon)\cdot\mathrm{OPT}(I)\). The work of [5] laid the foundations for the following notions of _replacements_ and _strict representative sets (SRS)_, for the special case of BC where \(\mathcal{C}\) is a single matroid constraint. 
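Before generalizing these notions, it may help to fix a concrete picture of the constraint operations from Section 2. The following minimal sketch (ours; an independence oracle is modeled simply as a Python callable) implements restriction, thinning, and truncation from Definition 3, together with the thinning \(E/F\) of a matching constraint:

```python
def restrict(indep, F):
    """G ∩ F: independent sets of G that are contained in F."""
    F = set(F)
    return lambda A: set(A) <= F and indep(A)

def thin(indep, F):
    """G / F: sets A disjoint from F such that A ∪ F is independent in G."""
    F = set(F)
    return lambda A: not (set(A) & F) and indep(set(A) | F)

def truncate(indep, q):
    """[G]_{<=q}: independent sets of G with at most q elements."""
    return lambda A: len(A) <= q and indep(A)

def thin_matching(E, F):
    """E / F for a matching constraint: keep only edges avoiding all endpoints of F."""
    touched = {v for e in F for v in e}
    return [e for e in E if not (set(e) & touched)]

# Example (ours): a uniform matroid of rank 2 over {a, b, c, d}, restricted and truncated.
uniform2 = lambda A: len(A) <= 2
oracle = truncate(restrict(uniform2, {"a", "b", "c"}), 1)
print(oracle({"a"}), oracle({"a", "b"}), oracle({"d"}))              # True False False
print(thin_matching([("a", "b"), ("b", "c"), ("c", "d")], [("a", "b")]))  # [('c', 'd')]
```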
Below we generalize the definitions of replacements and SRS. Intuitively, a replacement of a solution \(S\) for \(I\) of bounded cardinality is another solution for \(I\) which preserves the attributes of the profitable elements in \(S\) (i.e., \(S\cap H\)). In particular, the profit of the replacement is close to \(p(S\cap H)\), whereas the cost and the number of profitable elements can only be smaller. An SRS is a subset of elements containing a replacement for any solution for \(I\) of bounded cardinality. The formal definitions of replacement and SRS for general BC instances are given in Definitions 5 and 6, respectively. Let \(q(\varepsilon)=\left\lceil\varepsilon^{-\varepsilon^{-1}}\right\rceil\), and \(\mathcal{M}_{\leq q(\varepsilon)}=\{A\in\mathcal{M}\mid|A|\leq q(\varepsilon)\}\) be all _bounded feasible sets_ of \(\mathcal{C}\) and \(\varepsilon\). Recall that we use \(\mathcal{M}=\mathcal{M}(\mathcal{C})\) for the feasible sets of \(\mathcal{C}\); similar simplification in notation is used also for bounded feasible sets. Given a \(\mathrm{BC}\) instance \(I=(E,\mathcal{C},c,p,\beta),0<\varepsilon<\frac{1}{2}\), \(S\in\mathcal{M}_{\leq q(\varepsilon)}\), and \(Z_{S}\subseteq E\), we say that \(Z_{S}\) is a _replacement_ of \(S\) for \(I\) and \(\varepsilon\) if the following holds: 1. \((S\setminus H)\cup Z_{S}\in\mathcal{M}_{\leq q(\varepsilon)}\). 2. \(c(Z_{S})\leq c(S\cap H)\). 3. \(p\left((S\setminus H)\cup Z_{S}\right)\geq(1-\varepsilon)\cdot p(S)\). 4. \(|Z_{S}|\leq|S\cap H|\). Given a \(\mathrm{BC}\) instance \(I=(E,\mathcal{C},c,p,\beta),0<\varepsilon<\frac{1}{2}\), and \(R\subseteq E\), we say that \(R\) is a _strict representative set (SRS)_ of \(I\) and \(\varepsilon\) if, for any \(S\in\mathcal{M}_{\leq q(\varepsilon)}\), there is a replacement \(Z_{S}\subseteq R\) of \(S\) for \(I\) and \(\varepsilon\). Observe that given any solution \(S\) of \(I\) such that \(|S|\leq q(\varepsilon)\), it holds that \(S\cap H\) is a replacement of \(S\); also, \(E\) is an SRS. In the next result, we demonstrate the power of SRS in solving BC. Specifically, we show that any SRS \(R\subseteq E\) is also a representative set. Hence, using enumeration on subsets of \(R\) we can find a subset of elements that can be extended by only non-profitable elements to an _almost_ optimal solution (see Algorithm 2). Let \(I=(E,\mathcal{C},c,p,\beta)\) be a \(\mathrm{BC}\) instance, let \(0<\varepsilon<\frac{1}{2}\), and let \(R\) be an SRS of \(I\) and \(\varepsilon\). Then \(R\) is a representative set of \(I\) and \(\varepsilon\). The proof of Lemma 3 is given in Appendix 0.A. We proceed to construct an SRS whose cardinality depends only on \(\varepsilon\). First, we partition the profitable elements (and possibly some more elements) into a small number of _profit classes_, where elements from the same profit class have _similar_ profits. The profit classes are derived from a 2-approximation \(\alpha\) for \(\mathrm{OPT}(I)\) which can be easily computed in polynomial time. Specifically, for all \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\) define the \(r\)-_profit class_ as \[\mathcal{K}_{r}(\alpha)=\left\{e\in E\ \middle|\ \frac{p(e)}{2\cdot\alpha}\in \left((1-\varepsilon)^{r},(1-\varepsilon)^{r-1}\right]\right\}. \tag{2}\] In the following, we give a definition of an _exchange set_ for each profit class. This facilitates the construction of an SRS. 
In words, a subset of elements \(X\) is an exchange set for some profit class \(\mathcal{K}_{r}(\alpha)\) if any feasible set \(\Delta\) and element \(a\in(\Delta\cap\mathcal{K}_{r}(\alpha))\setminus X\) can be replaced (while maintaining feasibility) by some element \(b\in(X\cap\mathcal{K}_{r}(\alpha))\setminus\Delta\) such that the cost of \(b\) is no larger than the cost of \(a\). Formally, Let \(I=(E,\mathcal{C},c,p,\beta)\) be a \(\mathrm{BC}\) instance, \(0<\varepsilon<\frac{1}{2}\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\), and \(X\subseteq\mathcal{K}_{r}(\alpha)\). We say that \(X\) is an _exchange set_ for \(I,\varepsilon,\alpha,\) and \(r\) if: * For all \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in(\Delta\cap\mathcal{K}_{r}(\alpha))\setminus X\) there is \(b\in(\mathcal{K}_{r}(\alpha)\cap X)\setminus\Delta\) satisfying * \(c(b)\leq c(a)\). * \(\Delta-a+b\in\mathcal{M}_{\leq q(\varepsilon)}\). The similarity between SRS and exchange sets is not coincidental. We show that if a set \(R\subseteq E\) satisfies that \(R\cap\mathcal{K}_{r}(\alpha)\) is an exchange set for any \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\), then \(R\) is an SRS, and thus also a representative set by Lemma 3. This allows us to construct an SRS using a union of disjoint exchange sets, one for each profit class. Let \(I=(E,\mathcal{C},c,p,\beta)\) be a \(\mathrm{BC}\) instance, \(0<\varepsilon<\frac{1}{2}\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\) and \(R\subseteq E\). If for all \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\) it holds that \(R\cap\mathcal{K}_{r}(\alpha)\) is an exchange set for \(I,\varepsilon,\alpha,\) and \(r\), then \(R\) is a representative set of \(I\) and \(\varepsilon\). We give the formal proof in Appendix A. We now present a unified algorithm for finding a representative set for both types of constraints, namely, matching or matroid intersection constraints. This is achieved by taking the union of exchange sets of all profit classes. Nevertheless, for the construction of exchange sets we distinguish between the two types of constraints. This results also in different sizes for the obtained representative sets. Our algorithms for finding the exchange sets are the core technical contribution of this paper. For matching constraints, we design an algorithm which constructs an exchange set for any profit class by finding multiple matchings of \(\mathcal{C}\) from the given profit class. Each matching has a bounded cardinality, and the edges are chosen using a greedy approach to minimize the cost. We give the full details and a formal proof of Lemma 3 in Section 4. There is an algorithm \(\mathsf{ExSet}\mbox{-}\mathsf{Matching}\) that given a \(\mathrm{BM}\) instance \(I\), \(0<\varepsilon<\frac{1}{2}\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), and \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\), returns in time \(q(\varepsilon)\cdot\mathrm{poly}(|I|)\) an exchange set \(X\) for \(I,\varepsilon,\alpha,\) and \(r\), such that \(|X|\leq 18\cdot q(\varepsilon)^{2}\). Our algorithm for matroid intersection constraints is more involved and generates an exchange set by an _asymmetric interpretation_ of the two given matroids. We give the full details and a formal proof of Lemma 3 in Section 5.
There is an algorithm \(\mathsf{ExSet}\)-\(\mathsf{Matroid}\mathsf{intersection}\) that given a \(\mathrm{BI}\) instance \(I\), \(0<\varepsilon<\frac{1}{2}\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), and \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\), returns in time \(q(\varepsilon)^{O(q(\varepsilon))}\cdot\mathrm{poly}(|I|)\) an exchange set \(X\) for \(I,\varepsilon,\alpha,\) and \(r\), such that \(|X|\leq q(\varepsilon)^{O(q(\varepsilon))}\). Using the above, we design an algorithm that returns a representative set for both types of constraints. This is done by computing the 2-approximation \(\alpha\) of \(\mathrm{OPT}(I)\), and then finding exchange sets for all profit classes, for the corresponding type of constraint. Finally, we return the union of the above exchange sets. The pseudocode of our algorithm, \(\mathsf{RepSet}\), is given in Algorithm 1. ``` input :A BC instance \(I\) and error parameter \(0<\varepsilon<\frac{1}{2}\). output :A representative set \(R\) of \(I\) and \(\varepsilon\). 1 Compute a 2-approximation \(S^{*}\) for \(I\) using a PTAS for BC with parameter \(\varepsilon^{\prime}=\frac{1}{2}\). 2 Set \(\alpha\gets p(S^{*})\). 3 Initialize \(R\leftarrow\emptyset\). 4for\(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\)do 5if\(I\) is a BM instance then 6\(R\gets R\cup\mathsf{ExSet}\mbox{-}\mathsf{Matching}(I,\varepsilon, \alpha,r)\). 7else 8\(R\gets R\cup\mathsf{ExSet}\mbox{-}\mathsf{MatroidIntersection}(I, \varepsilon,\alpha,r)\). 9 Return \(R\). ``` **Algorithm 1**\(\mathsf{RepSet}(I=(E,\mathcal{C},c,p,\beta),\varepsilon)\) Given a BC instance \(I=(E,\mathcal{C},c,p,\beta)\) and \(0<\varepsilon<\frac{1}{2}\), Algorithm 1 returns a representative set \(R\) of \(I\) and \(\varepsilon\), such that one of the following holds. * If \(\mathcal{C}\) is a matching constraint the running time is \(q(\varepsilon)^{2}\cdot\mathrm{poly}(|I|)\), and \(|R|\leq 54\cdot q(\varepsilon)^{3}\). * If \(\mathcal{C}\) is a matroid intersection constraint the running time is \(q(\varepsilon)^{O(q(\varepsilon))}\cdot\mathrm{poly}(|I|)\), and \(|R|\leq q(\varepsilon)^{O(q(\varepsilon))}\). The proof of the lemma is given in Appendix A. Next, we use a result of [1] for adding elements of smaller profits to the solution. The techniques of [1] are based on a non-trivial patching of two solutions of the Lagrangian relaxation of BC (both for matching and matroid intersection constraints). This approach yields a feasible set with almost optimal profit, where in the worst case the difference from the optimum is twice the maximal profit of an element in the instance. Since we use the latter approach only for non-profitable elements, this effectively does not harm our approximation guarantee. The following is a compact statement of the above result of [1]. There is a polynomial-time algorithm \(\mathsf{NonProfitableSolver}\) that given a BC instance \(I=(E,\mathcal{C},c,p,\beta)\) computes a solution \(S\) for \(I\) of profit \(p(S)\geq\mathrm{OPT}(I)-2\cdot\max_{e\in E}p(e)\). Using the algorithm above and our algorithm for computing a representative set, we obtain an EPTAS for BC. Let \(R\) be the representative set returned by \(\mathsf{RepSet}(I,\varepsilon)\). Our scheme enumerates over subsets of \(R\) to select profitable elements for the solution. Using algorithm \(\mathsf{NonProfitableSolver}\) of [1], the solution is extended to include also non-profitable elements. 
Specifically, let \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\) be a 2-approximation for the optimal profit for \(I\). In addition, let \(E(\alpha)=\{e\in E\ |\ p(e)\leq 2\varepsilon\cdot\alpha\}\) be the set including the non-profitable elements, and possibly also profitable elements \(e\in E\) such that \(p(e)\leq 2\varepsilon\cdot\mathrm{OPT}(I)\). Given a feasible set \(F\in\mathcal{M}\), we define a residual BC instance containing elements which can _extend_ \(F\) by adding elements from \(E(\alpha)\). More formally, Given a BC instance \(I=(E,\mathcal{C},c,p,\beta)\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), and \(F\in\mathcal{M}(\mathcal{C})\), the _residual instance of \(F\)_ and \(\alpha\) for \(I\) is the BC instance \(I_{F}(\alpha)=(E_{F},\mathcal{C}_{F},c_{F},p_{F},\beta_{F})\) defined as follows. * \(E_{F}=E(\alpha)\setminus F\).
* \(\mathcal{C}_{F}=\mathcal{C}/F\)_._ * \(p_{F}=p|_{F}\) _(i.e., the restriction of_ \(p\) _to_ \(F\)_)._ * \(c_{F}=c|_{F}\)_._ * \(\beta_{F}=\beta-c(F)\)_._ Let \(I=(E,\mathcal{C},c,p,\beta)\) be a BC instance, \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), \(F\in\mathcal{M}(\mathcal{C})\), and let \(T\) be a solution for \(I_{F}(\alpha)\). Then, \(T\cup F\) is a solution for \(I\). For all solutions \(F\subseteq R\) for \(I\) with \(|F|\leq\varepsilon^{-1}\), we find a solution \(T_{F}\) for the residual instance \(I_{F}(\alpha)\) using Algorithm NonProfitableSolver and define \(K_{F}=T_{F}\cup F\) as the _extended solution_ of \(F\). Our scheme iterates over the extended solutions \(K_{F}\) for all such solutions \(F\) and chooses an extended solution \(K_{F^{*}}\) of maximal total profit. The pseudocode of the scheme is given in Algorithm 2. ``` 1:A BC instance \(I\) and an error parameter \(0<\varepsilon<\frac{1}{2}\). 2:A solution for \(I\). 3:Construct the representative set \(R\leftarrow\mathsf{RepSet}(I,\varepsilon)\). 4:Compute a 2-approximation \(S^{*}\) for \(I\) using a PTAS for BC with parameter \(\varepsilon^{\prime}=\frac{1}{2}\). 5:Set \(\alpha\gets p(S^{*})\). 6:Initialize an empty solution \(A\leftarrow\emptyset\). 7:for\(F\subseteq R\) s.t. \(|F|\leq\varepsilon^{-1}\) and \(F\) is a solution of \(I\)do 8: Find a solution for \(I_{F}(\alpha)\) by \(T_{F}\leftarrow\mathsf{NonProfitableSolver}(I_{F}(\alpha))\). 9: Let \(K_{F}\gets T_{F}\cup F\). 10:if\(p\left(K_{F}\right)>p(A)\)then 11: Update \(A\gets K_{F}\) 12:Return \(A\). ``` **Algorithm 2**\(\mathsf{EPTAS}(I=(E,\mathcal{C},c,p,\beta),\varepsilon)\) The running time of Algorithm 2 crucially depends on the cardinality of the representative set. Roughly speaking, the running time is the number of subsets of the representative set containing at most \(\varepsilon^{-1}\) elements, multiplied by a computation time that is polynomial in the encoding size of the instance. Moreover, since \(R=\mathsf{RepSet}(I,\varepsilon)\) is a representative set (by Lemma 3.2), there is an almost optimal solution \(S\) of \(I\) such that the profitable elements in \(S\) are a subset of \(R\). Thus, there is an iteration of the **for** loop in Algorithm 2 such that \(F=S\cap H\). In the proof of Lemma 3.2 we focus on this iteration and show that it yields a solution \(K_{F}\) of \(I\) with an almost optimal profit. Given a BC instance \(I=(E,\mathcal{C},c,p,\beta)\) and \(0<\varepsilon<\frac{1}{2}\), Algorithm 2 returns a solution for \(I\) of profit at least \((1-8\varepsilon)\cdot\mathrm{OPT}(I)\) such that one of the following holds. * If \(I\) is a BM instance the running time is \(2^{O\left(\varepsilon^{-2}\log\frac{1}{\varepsilon}\right)}\cdot\mathrm{poly}(| I|)\). * If \(I\) is a BI instance the running time is \(q(\varepsilon)^{O\left(\varepsilon^{-1}\cdot q(\varepsilon)\right)}\cdot \mathrm{poly}(|I|)\), where \(q(\varepsilon)=\left\lceil\varepsilon^{-\varepsilon^{-1}}\right\rceil\). The proof of Lemma 3.2 is given in Appendix A. We can now prove our main results. **Proofs of Theorem 1 and Theorem 2:** Given a BC instance \(I\) and \(0<\varepsilon<\frac{1}{2}\), using Algorithm 2 for \(I\) with parameter \(\frac{\varepsilon}{8}\) we have by Lemma 3.2 the desired approximation guarantee. 
Furthermore, the running time is \(2^{O\left(\varepsilon^{-2}\log\frac{1}{\varepsilon}\right)}\cdot\mathrm{poly}(| I|)\) or \(q(\varepsilon)^{O\left(\varepsilon^{-1}\cdot q(\varepsilon)\right)}\cdot\mathrm{poly} (|I|)\), depending on whether \(I\) is a BM instance or a BI instance, respectively. ### 4 Exchange Set for Matching Constraints In this section we design an algorithm for finding an exchange set for a BM instance and a profit class, leading to the proof of Lemma 10. For the remainder of this section, fix a BM instance \(I=(E,\mathcal{C},c,p,\beta)\), an error parameter \(0<\varepsilon<\frac{1}{2}\), a \(2\)-approximation for \(\mathrm{OPT}(I)\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), and an index \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\) of the profit class \(\mathcal{K}_{r}(\alpha)\). We note that for a single matroid constraint an exchange set can be constructed by finding a minimum cost basis in the matroid [5]. More specifically, given a matroid \(\mathcal{G}=(E,\mathcal{I})\), it is shown in [5] that a minimum cost basis in the matroid \([\mathcal{G}\cap\mathcal{K}_{r}(\alpha)]_{\leq q(\varepsilon)}\) is an exchange set for \(\mathcal{K}_{r}(\alpha)\). Such exchange set can be easily computed using a greedy approach. An analogue for the setting of matching constraints is to find a matching of cardinality \(\Omega(q(\varepsilon))\) and minimum total cost in \(\mathcal{K}_{r}(\alpha)\). However, as shown in Figure 1, this idea fails. Thus, we turn to use a completely different approach. A key observation is that even if a greedy matching algorithm may not suffice for the construction of an exchange set, applying such an algorithm multiple times can be the solution. Thus, as a subroutine our algorithm finds a matching using a greedy approach. The algorithm iteratively selects an edge of minimal cost while ensuring that the selected set of edges is a matching. This is done until the algorithm reaches a given cardinality bound, or no more edges can be added. The pseudocode of GreedyMatching is given in Algorithm 3.9 Footnote 9: Given a graph \(G=(V,E)\) and a matching \(M\) of \(G\), the definition of thinning \(E/M\) is given in Section 2. ``` input :A graph \(G\), an integer \(N\in\mathbb{N}\setminus\{0\}\), and a cost function \(c:E\rightarrow\mathbb{R}_{\geq 0}\). output :A matching \(M\) of \(G\). Initialize \(M\leftarrow\emptyset\). while\(|M|<N\) and \(E/M\neq\emptyset\)do Find \(e\in E/M\) of minimal cost w.r.t. \(c\). Update \(M\gets M+e\). Return \(M\). ``` **Algorithm 3**GreedyMatching(\(G=(V,E),N,c\)) Given a graph \(G=(V,E)\) and two edges \(a,b\in E\), we say that \(a,b\) are _adjacent_ if there are \(x,y,z\in V\) such that \(a=\{x,y\}\) and \(b=\{y,z\}\); for all \(e\in E\), let \(\mathsf{Adj}_{G}(e)\) be the set of Figure 1: An example showing that bipartite matching may not yield an exchange set. Consider the two matchings \(\Delta_{1}=\{a,c\},\Delta_{2}=\{b,d\}\) marked in red and blue, and suppose that \(\mathcal{K}_{r}(\alpha)=\{a,b\}\) is a profit class. The only exchange set for \(\mathcal{K}_{r}(\alpha)\) is \(\{a,b\}\), which is not a matching. Note that a bipartite matching can be cast as matroid intersection. 
For a bipartite graph \(G=(L\cup R,E)\), define the matroids \(\mathcal{M}_{1}=(E,\mathcal{I}_{1})\) and \(\mathcal{M}_{2}=(E,\mathcal{I}_{2})\), where \(\mathcal{I}_{1}=\{F\subseteq E|\ \forall v\in L:|F\cap N(v)|\leq 1\}\), and \(\mathcal{I}_{2}=\{F\subseteq E|\ \forall v\in R:|F\cap N(v)|\leq 1\}\), where \(N(v)\) is the set of neighbors of \(v\). Thus, bipartite matching is a special case of both matching and matroid intersection. edges adjacent to \(e\) in \(G\). In the next result we show that if an edge \(a\) is not selected for the solution by \(\mathsf{GreedyMatching}\), then either the algorithm selects an adjacent edge of cost at most \(c(a)\), or all of the selected edges have costs at most \(c(a)\). Given a graph \(G=(V,E)\), \(N\in\mathbb{N}\setminus\{0\}\), and \(c:E\to\mathbb{R}_{\geq 0}\), Algorithm 3 returns in polynomial time a matching \(M\) of \(G\) such that for all \(a\in E\setminus M\) one of the following holds. 1. \(|M|\leq N\) and there is \(b\in\mathsf{Adj}_{G}(a)\cap M\) such that \(c(b)\leq c(a)\). 2. \(|M|=N\), for all \(b\in M\) it holds that \(c(b)\leq c(a)\), and \(M+a\) is a matching of \(G\). Proof.: Clearly, Algorithm 3 returns in polynomial time a matching \(M\) of \(G\). Observe that \(|M|\leq N\) by Step 2. To prove that either 1. or 2. hold, we distinguish between two cases. * \(a\notin E/M\). Then \(\mathsf{Adj}_{G}(a)\cap M\neq\emptyset\). Let \(e\) be the first edge in \(\mathsf{Adj}_{G}(a)\cap M\) that is added to \(M\) in Step 4; also, let \(L\) be the set of edges added to \(M\) before \(e\). Then \(a\in E/L\), since \(L\) does not contain edges adjacent to \(a\). By Step 3, it holds that \(c(e)=\min_{e^{\prime}\in E/L}c(e^{\prime})\leq c(a)\). * \(a\in E/M\). Thus, \(|M|=N\); otherwise, \(a\) would be added to \(M\). Also, \(M+a\) is a matching of \(G\). Now, let \(b\in M\), and let \(K\) be the set of edges added to \(M\) before \(b\). Since \(M+a\) is a matching of \(G\), by the hereditary property of \((E,\mathcal{M}(G))\) it holds that \(K+a\) is a matching of \(G\); thus, \(a\in E/K\) and by Step 3 it follows that \(c(b)=\min_{e^{\prime}\in E/K}c(e^{\prime})\leq c(a)\). By Lemma 3.1, we argue that an exchange set can be found by using Algorithm \(\mathsf{GreedyMatching}\) iteratively. Specifically, let \(k(\varepsilon)=6\cdot q(\varepsilon)\) and \(N(\varepsilon)=3\cdot q(\varepsilon)\). We run Algorithm \(\mathsf{GreedyMatching}\) for \(k(\varepsilon)\) iterations, each iteration with a bound \(N(\varepsilon)\) on the cardinality of the matching. In iteration \(i\), we choose a matching \(M_{i}\) from the edges of the profit class \(\mathcal{K}_{r}(\alpha)\) and remove the chosen edges from the graph. Therefore, in the following iterations, edges adjacent to previously chosen edges can be chosen as well. A small-scale illustration of the algorithm is presented in Figure 2. The pseudocode of Algorithm \(\mathsf{ExSet}\)-\(\mathsf{Matching}\), which computes an exchange set for the given profit class, is presented in Algorithm 4. ``` input : a matching-BC instance \(I\), \(0<\varepsilon<\frac{1}{2}\), \(\frac{\mathrm{OPT}(I)}{2}\leq\alpha\leq\mathrm{OPT}(I)\), \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\). output : An exchange set for \(I,\varepsilon,\alpha\), and \(r\). 1 Initialize \(X\leftarrow\emptyset\) and \(\mathcal{E}_{0}\leftarrow\mathcal{K}_{r}(\alpha)\). for\(i\in\{1,\ldots,k(\varepsilon)\}\)do Define \(G_{i}=(V,\mathcal{E}_{i-1})\) where \(V\) is the vertex set of \(\mathcal{C}\). 
Compute \(M_{i}\leftarrow\mathsf{GreedyMatching}\left(G_{i},N(\varepsilon),c|_{\mathcal{E}_{i-1}}\right)\). Update \(X\gets X\cup M_{i}\) and define \(\mathcal{E}_{i}\leftarrow\mathcal{E}_{i-1}\setminus M_{i}\). Return \(X\). ``` Algorithm \(\mathsf{ExSet}\)-\(\mathsf{Matching}\) outputs a union \(X\) of disjoint matchings \(M_{1},\ldots,M_{k(\varepsilon)}\) taken from the edges of the profit class \(\mathcal{K}_{r}(\alpha)\). For some \(\Delta\in\mathcal{M}(\mathcal{C})\) and \(a\in(\Delta\cap\mathcal{K}_{r}(\alpha))\setminus X\), by Lemma 17 there are two options, which summarize the main idea in the proof of Lemma 10. * All matchings \(M_{i}\) contain some \(b_{i}\) adjacent to \(a\) such that \(c(b_{i})\leq c(a)\). Then, as \(k(\varepsilon)\) is sufficiently large, one such \(b_{i}\) is not adjacent to any edge in \(\Delta-a\). Hence, \(\Delta-a+b_{i}\) is a matching. * One such \(M_{i}\) contains only edges of costs at most \(c(a)\); as \(N(\varepsilon)\) is sufficiently large, there is \(b\in M_{i}\) such that \(\Delta-a+b\) is a matching. **Proof of Lemma 10:** For all \(i\in\{1,\ldots,k(\varepsilon)\}\), let \(G_{i}\) and \(M_{i}\) be the outputs of Steps 3 and 4 in iteration \(i\) of the **for** loop in \(\mathsf{ExSet}\)-\(\mathsf{Matching}(I,\varepsilon,\alpha,r)\), respectively. Also, let \(X\) be the output of the algorithm; observe that \(X=\bigcup_{i\in[k(\varepsilon)]}M_{i}\). We show that \(X\) is an exchange set for \(I,\varepsilon,\alpha\) and \(r\) (see Definition 8). Let \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in(\Delta\cap\mathcal{K}_{r}(\alpha))\setminus X\). We use the next inequality in the claim below. \[\frac{k(\varepsilon)}{2}=N(\varepsilon)=3\cdot q(\varepsilon)>2\cdot|\Delta|=|V(\Delta)|. \tag{3}\] The inequality holds since \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\). The last equality holds since each vertex appears as an endpoint in a matching at most once. \(\rhd\) Claim 18. There is \(b\in(X\cap\mathcal{K}_{r}(\alpha))\setminus\Delta\) such that \(\Delta-a+b\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(c(b)\leq c(a)\). Let \(a=\{x,y\}\), \(I=(E,\mathcal{C},c,p,\beta)\), and \(\mathcal{C}=(V,E)\). Since \(a\notin X\), for all \(i\in\{1,\ldots,k(\varepsilon)\}\) it holds that \(a\notin M_{i}\); thus, \(a\in\mathcal{E}_{i}=\mathcal{E}_{i-1}\setminus M_{i}\). Hence, by Lemma 17, one of the following holds. For all \(i\in[k(\varepsilon)]\) there is \(b_{i}\in\mathsf{Adj}_{G_{i}}(a)\cap M_{i}\) such that \(c(b_{i})\leq c(a)\). For \(z\in\{x,y\}\) let \[J_{z}=\{i\in[k(\varepsilon)]\ |\ \exists u\in V:\ b_{i}=\{z,u\}\}\] be the set of indices of edges \(b_{i}\) neighboring \(z\). Since \(b_{i}\in\mathsf{Adj}_{G_{i}}(a)\) it holds that \(J_{x}\cup J_{y}=[k(\varepsilon)]\). Thus, there is \(z\in\{x,y\}\) such that \(|J_{z}|\geq\frac{k(\varepsilon)}{2}>|V(\Delta)|\), where the last inequality follows from (3). For any \(i\in J_{z}\) let \(v_{i}\in V\) be the vertex connected to \(z\) in \(b_{i}\), that is, \(b_{i}=\{z,v_{i}\}\). Since the matchings \(M_{1},\ldots,M_{k(\varepsilon)}\) are disjoint and \(b_{i}\in M_{i}\), it follows that the vertices \(v_{i}\) for \(i\in J_{z}\) are all distinct. As \(|J_{z}|>|V(\Delta)|\) there is \(i^{*}\in J_{z}\) such that \(v_{i^{*}}\notin V(\Delta)\). Therefore, \(\Delta-a+b_{i^{*}}\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(c(b_{i^{*}})\leq c(a)\).
There is \(i\in\{1,\ldots,k(\varepsilon)\}\) such that \(|M_{i}|=N(\varepsilon)\), for all \(b\in M_{i}\) it holds that \(c(b)\leq c(a)\), and \(M_{i}+a\) is a matching of \(G_{i}\). Then, \[|M_{i}|=N(\varepsilon)>|V(\Delta)|. \tag{4}\] The equality follows by the definition of \(M_{i}\) in Case 2. The inequality follows from (3). Since each vertex appears as an endpoint in a matching at most once, by (4) there is \(b\in M_{i}\) such that both endpoints of \(b\) are not in \(V(\Delta)\). Thus, \(\Delta+b\in\mathcal{M}\); by the hereditary property and since \(a\in\Delta\), it holds that \(\Delta-a+b\in\mathcal{M}_{\leq q(\varepsilon)}\). Figure 2: An illustration of Algorithm \(\mathsf{ExSet}\)-\(\mathsf{Matching}\) with the (illegally small) parameters \(N(\varepsilon)=k(\varepsilon)=3\). The parameters by the edges are the costs. The edges chosen in iterations \(i=1,2,3\) are marked in blue, red, and green, respectively. By Claim 18 and Definition 8, we have that \(X\) is an exchange set for \(I,\varepsilon,\alpha\), and \(r\) as required. To complete the proof of the lemma we show (in Appendix B) the following. \(\rhd\) Claim 19. \(|X|\leq 18\cdot q(\varepsilon)^{2}\), and the running time of Algorithm 4 is \(q(\varepsilon)^{O(q(\varepsilon))}\cdot\operatorname{poly}(|I|)\). ## 5 Exchange Set for Matroid Intersection Constraints In this section, we design an algorithm for finding an exchange set for a profit class in a BI instance, leading to the proof of Lemma 11. For the remainder of this section, fix a BI instance \(I=(E,\mathcal{C},c,p,\beta)\), an error parameter \(0<\varepsilon<\frac{1}{2}\), a \(2\)-approximation for \(\operatorname{OPT}(I)\), \(\frac{\operatorname{OPT}(I)}{2}\leq\alpha\leq\operatorname{OPT}(I)\), and an index \(r\in[\log_{1-\varepsilon}\left(\frac{\varepsilon}{2}\right)+1]\) of the profit class \(\mathcal{K}_{r}(\alpha)\). Also, let \(\mathcal{C}=(\mathcal{I}_{1},\mathcal{I}_{2})\) be the matroid intersection constraint \(\mathcal{C}\) of \(I\). For simplicity, when understood from the context, some of the lemmas in this section consider the given parameters (e.g., \(I\)) without an explicit declaration. Due to space constraints, the proofs of the lemmas in this section are given in Appendix C. As shown in Figure 1, a simple greedy approach which finds a feasible set of minimum cost (within \(\mathcal{K}_{r}(\alpha)\)) in the intersection of the matroids may not output an exchange set for \(\mathcal{K}_{r}(\alpha)\). Instead, our approach builds on some interesting properties of matroid intersection. The next definition presents a _shifting property_ for a feasible set \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and an element \(a\in\Delta\cap\mathcal{K}_{r}(\alpha)\) w.r.t. the two matroids. We use this property to show that our algorithm constructs an exchange set. Let \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\), \(a\in\Delta\cap\mathcal{K}_{r}(\alpha)\) and \(b\in\mathcal{K}_{r}(\alpha)\setminus\Delta\). We say that \(b\) is a shift to \(a\) for \(\Delta\) if \(c(b)\leq c(a)\) and \(\Delta-a+b\in\mathcal{M}_{\leq q(\varepsilon)}\); moreover, \(b\) is a _semi-shift_ to \(a\) for \(\Delta\) if \(c(b)\leq c(a)\) and \(\Delta-a+b\in\mathcal{I}_{2}\) but \(\Delta-a+b\notin\mathcal{I}_{1}\). As a starting point for our exchange set algorithm, we show how to obtain small cardinality sets which contain either a shift or a semi-shift for every pair \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in\Delta\cap\mathcal{K}_{r}(\alpha)\). 
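To make the shift/semi-shift distinction above concrete, the following sketch classifies a candidate element for a pair \((\Delta,a)\), assuming independence oracles for the two matroids are available as black boxes; the function and oracle names are illustrative and are not part of the paper's algorithms.

```
def classify_candidate(delta, a, b, c, indep1, indep2):
    """Classify b w.r.t. the pair (delta, a): returns 'shift', 'semi-shift', or None.

    delta          : set of elements with a in delta (feasible, |delta| <= q(eps))
    b              : candidate element of the same profit class, b not in delta
    c              : dict mapping elements to costs
    indep1, indep2 : independence oracles of the matroids (E, I_1) and (E, I_2)
    """
    if b in delta or c[b] > c[a]:
        return None
    swapped = (set(delta) - {a}) | {b}   # the exchanged set, same cardinality as delta
    if indep2(swapped):
        # feasible in both matroids: shift; feasible only in I_2: semi-shift
        return "shift" if indep1(swapped) else "semi-shift"
    return None
```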
The proof of the following is given in Appendix C. Let \(U\subseteq\mathcal{K}_{r}(\alpha)\), \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\), and \(B\) be a minimum basis of \([(E,\mathcal{I}_{2})\cap U]_{\leq q(\varepsilon)}\) w.r.t. \(c\). Also, let \(a\in(U\cap\Delta)\setminus B\). Then, there is \(b\in B\setminus\Delta\) such that \(b\) is a semi-shift to a for \(\Delta\) or \(b\) is a shift to \(a\) for \(\Delta\). Observe that to have an exchange set, our goal is to find a subset of \(\mathcal{K}_{r}(\alpha)\) which contains a shift for every pair \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in\Delta\cap\mathcal{K}_{r}(\alpha)\). Thus, using Lemma 21 we design the following recursive algorithm \(\mathsf{ExtendChain}\), which finds a union of minimum bases of matroids w.r.t \(\mathcal{I}_{2}\), of increasingly restricted ground sets w.r.t. \(\mathcal{I}_{1}\). The pseudocode of Algorithm \(\mathsf{ExtendChain}\) is given in Algorithm 5. We can view the execution of \(\mathsf{ExtendChain}\) as a tree, where each node (called below a _branch_) corresponds to the subset \(S\subseteq\mathcal{K}_{r}(\alpha)\) in specific recursive call. We now describe the role of \(S\) in Algorithm \(\mathsf{ExtendChain}\). If \(|S|\geq q(\varepsilon)+1\), we simply return \(\emptyset\); such a branch is called a _leaf_, and does not contribute elements to the constructed exchange set. Otherwise, define the _universe_ of the branch \(S\) as \(U_{S}=\{e\in\mathcal{K}_{r}(\alpha)\setminus S\mid S+e\in\mathcal{I}_{1}\}\); that is, elements in the universe of \(S\) can be added to \(S\) to form an independent set w.r.t. \(\mathcal{I}_{1}\). Next, we construct a minimum basis \(B_{S}\) w.r.t. \(c\) of the matroid \([(E,\mathcal{I}_{2})\cap U_{S}]_{\leq q(\varepsilon)}\). Observe that \(B_{S}\) contains up to \(q(\varepsilon)\) elements, taken from the universe of \(S\) and that \(B_{S}\) is independent w.r.t. \(\mathcal{I}_{2}\). Note that the definition of the universe relates to \(\mathcal{I}_{1}\) while the construction of the bases to \(\mathcal{I}_{2}\); thus, the two matroids play completely different roles in the algorithm. For every element \(e\in B_{S}\) we apply Algorithm \(\mathsf{ExtendChain}\) recursively with \(S^{\prime}=S+e\) to find the corresponding basis \(B_{S+e}\). The algorithm returns (using recursion) the union of the constructed bases over all branches. Finally, algorithm \(\mathsf{ExSet}\)-\(\mathsf{MatroidIntersection}\) constructs an exchange set for \(I,\varepsilon,\alpha\), and \(r\) by computing Algorithm \(\mathsf{ExtendChain}\) with the initial branch (i.e., _root_) \(S=\emptyset\): \[\mathsf{ExSet}\mbox{-}\mathsf{MatroidIntersection}(I,\varepsilon,\alpha,r)= \mathsf{ExtendChain}(I,\varepsilon,\alpha,r,\emptyset). \tag{5}\] For an illustration of the algorithm, see Figure 3. In the analysis of the algorithm, we consider branches with useful attributes, called _chains_; these are essentially sequences of semi-shifts to some \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in\Delta\cap\mathcal{K}_{r}(\alpha)\). Let \(X=\mathsf{ExSet}\mbox{-}\mathsf{MatroidIntersection}(I,\varepsilon,\alpha,r)\), and let \(\mathcal{S}\) be the set of all branches \(S\subseteq\mathcal{K}_{r}(\alpha)\) such that \(\mathsf{ExtendChain}(I,\varepsilon,\alpha,r,S)\) is computed during the construction of \(X\). Let \(S\in\mathcal{S}\), \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\), and \(a\in(\mathcal{K}_{r}(\alpha)\cap\Delta)\setminus X\). 
We say that \(S\) is a chain of \(a\) and \(\Delta\) if \(a\in U_{S}\), and for all \(e\in S\) it holds that \(e\) is a semi-shift to \(a\) for \(\Delta\). Note that there must be a chain for \(a\) and \(\Delta\) since the empty set satisfies the conditions of Definition 22. Moreover, we can bound the cardinality of a chain by \(q(\varepsilon)\) using the exchange property of the matroid \((E,\mathcal{I}_{1})\). The above arguments are formalized in the next lemmas. Lemma 23. For all \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in(\mathcal{K}_{r}(\alpha)\cap\Delta)\setminus X\) there is \(S\subseteq X\) such that \(S\) is a chain of \(a\) and \(\Delta\). Lemma 24. For all \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\), \(a\in(\mathcal{K}_{r}(\alpha)\cap\Delta)\setminus X\), and a chain \(S\) of \(a\) and \(\Delta\), it holds that \(|S|\leq q(\varepsilon)\). For a chain \(S\) of \(a\) and \(\Delta\), let \(B_{S}\) be the result of the first computation of Step 4 (i.e., not in a recursive call) in \(\mathsf{ExtendChain}(I,\varepsilon,\alpha,r,S)\). Figure 3: An illustration of the branches in Algorithm 5 for \(S=\emptyset\). Note that \(B_{\emptyset}=\{a,b\}\), \(B_{\{a\}}=\{c,d\}\) and \(B_{\{b\}}=\{e,f\}\). Also, \(\{a,c\}\) and \(\{a,d\}\) are the child branches of \(\{a\}\). The key argument in the proof of Lemma 11 is that for a chain \(S^{*}\) of maximal cardinality, \(B_{S^{*}}\) contains a shift to \(a\) for \(\Delta\), using the maximality of \(S^{*}\) and Lemma 21. Lemma 25. For all \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\), \(a\in(\mathcal{K}_{r}(\alpha)\cap\Delta)\setminus X\), and a chain \(S^{*}\) of \(a\) and \(\Delta\) of maximum cardinality, there is a shift \(b^{*}\in B_{S^{*}}\) to \(a\) for \(\Delta\). In the proof of Lemma 11, for every \(\Delta\in\mathcal{M}_{\leq q(\varepsilon)}\) and \(a\in(\mathcal{K}_{r}(\alpha)\cap\Delta)\setminus X\) we take a chain \(S^{*}\) of \(a\) and \(\Delta\) of maximum cardinality (which exists by Lemma 23 and Lemma 24). Then, by Lemma 25, there is a shift \(b^{*}\) to \(a\) for \(\Delta\), and it follows that \(X\) is an exchange set for \(I,\varepsilon,\alpha,\) and \(r\). The formal proof is given in Appendix C. ## 6 Discussion In this paper we present the first EPTAS for budgeted matching and budgeted matroid intersection, thus improving upon the existing PTAS for both problems. We derive our results via a generalization of the representative set framework of Doron-Arad et al. [5]; this ameliorates the exhaustive enumeration applied in similar settings [1, 3]. We note that the framework based on representative sets may be useful for solving other problems formulated as (1). Indeed, the proofs of Lemma 7 and Lemma 9, which establish the representative set framework, are oblivious to the exact type of constraints and only require having a \(k\)-_exchange system_ for some constant \(k\).10 Footnote 10: A set system \((E,\mathcal{I})\) satisfies the \(k\)-exchange property if for all \(A\in\mathcal{I}\) and \(e\in E\) there is \(B\subseteq A,|B|\leq k\), such that \((A\setminus B)\cup\{e\}\in\mathcal{I}\). Furthermore, our exchange set algorithms can be applied with slight modifications to other variants of our problems and are thus of independent interest. In particular, we can use a generalization of Algorithm 4 to construct an exchange set for the _budgeted b-matching_ problem.
Also, we believe that Algorithm 5 can be generalized to construct exchange sets for budgeted _multi-matroid intersection_ for any constant number of matroids; this includes the _budgeted multi-dimensional matching_ problem. While this problem does not admit a PTAS unless P=NP [9], our initial study shows that by constructing a representative set we may obtain an FPT-_approximation scheme_ by parameterizing on the number of elements in the solution.11 Footnote 11: We refer the reader, e.g., to [11] for the definition of parameterized approximation algorithms running in fixed-parameter tractable (FPT)-time. Finally, to resolve the complexity status of BM and BI, the gripping question of whether the problems admit an FPTAS needs to be answered. Unfortunately, this may be a very difficult task. Even for special cases of a single matroid, such as graphic matroid, the existence of an FPTAS is still open. Moreover, a deterministic FPTAS for budgeted matching would solve deterministically the exact matching problem, which has been open for over four decades [13].
2310.05490
Dynamic wetting experiment with nitrogen in a quasi-capillary tube
This work investigates the wetting dynamics of cryogenic fluids in inertia-dominated conditions. We experimentally characterized an oscillating gas-liquid interface of liquid nitrogen in a partially filled U-shaped quartz tube. The experiments were carried out in controlled cryogenic conditions, with interface oscillations produced by releasing the liquid column from an unbalanced position and having nitrogen vapor as the only ullage gas. During the experiments, the interface shape was tracked via image processing and used to fit a model from which the contact angle could be accurately determined. The results show that the dynamic contact angle evolution in advancing conditions is linearly linked to the Capillary number, with a slope depending on whether the interface moves over a dry or a pre-wet surface. However, the contact angle remains close to the one at equilibrium in receding conditions. To analyze the relation between contact angle and interface dynamics, we define an equivalent contact angle as the one that would make a spherical interface produce the same capillary pressure drop as the actual interface shape. The evolution of this equivalent contact angle proved to be independent of the evolution of the actual one, suggesting that the interface shape is not influenced by it. Finally, a theoretical analysis of the interface motion using a simplified model shows that viscous forces dominate the damping of the interface for small tube sizes, while gravity and inertial forces dominate the oscillating dynamics of the liquid column for larger tubes.
Domenico Fiorini, Alessia Simonini, Johan Steelant, David Seveno, Miguel Alfonso Mendez
2023-10-09T07:54:15Z
http://arxiv.org/abs/2310.05490v1
# Dynamic wetting experiments with nitrogen ###### Abstract This work investigates the wetting dynamics of cryogenic fluids in inertia-dominated conditions. We experimentally characterized an oscillating gas-liquid interface of liquid nitrogen in a partially filled U-shaped quartz tube. The experiments were carried out in controlled cryogenic conditions, with interface oscillations produced by releasing the liquid column from an unbalanced position and having nitrogen vapor as the only unlage gas. During the experiments, the interface shape was tracked via image processing and used to fit a model from which the contact angle could be accurately determined. The results show that the dynamic contact angle evolution in advancing conditions is linearly linked to the Capillary number, with a slope depending on whether the interface moves over a dry or a pre-wet surface. However, the contact angle remains close to the one at equilibrium in receding conditions. To analyze the relation between contact angle and interface dynamics, we define an equivalent contact angle as the one that would make a spherical interface produce the same capillary pressure drop as the actual interface shape. The evolution of this equivalent contact angle proved to be independent of the evolution of the actual one, suggesting that the interface shape is not influenced by it. Finally, a theoretical analysis of the interface motion using a simplified model shows that viscous forces dominate the damping of the interface for small tube sizes, while gravity and inertial forces dominate the oscillating dynamics of the liquid column for larger tubes. ## I Introduction Predicting the capillary-driven motion of a liquid is essential in developing propellant management devices [1; 2; 3; 4] and heat transfer systems for space applications [5; 6; 7]. Moreover, capillary forces play a major role in the dynamics of cryogenic propellants in partially filled tanks [8; 9; 10] in microgravity. Modeling the contact line and contact angle dynamics is essential for simulating sloshing motion and the evaporation rate in propellant tanks [11; 10]. Both phenomena need to be accurately controlled to ensure that no gas is fed to the thrusters, to limit undesired loads on the tank walls and perturbations of the spacecraft stability [12]. Developing engineering models for these applications requires experimental data on the dynamic wetting of cryogenic fluids such as liquid Oxygen, Hydrogen, or Methane. These fluids have low surface tension, low viscosity, near-zero contact angle, and high volatility. Experimental data on these liquids are particularly scarce, as most of the literature has focused on fluids with the opposite properties (particularly high viscosity and surface tension [13; 14; 15; 16]). Besides challenging cryogenic temperatures, experiments on the dynamic wetting of cryogenic liquids require complex experimental setups to cope with the high volatility, promoting evaporation/condensation [11] and the possible occurrence of film boiling [17; 18]. Within the framework of microgravity experiments, Friese _et al._[19] investigated the axial sloshing of cryogenic hydrogen and the corresponding contact line in microgravity conditions and super-heated walls. The authors observed that the axial sloshing was only affected when the apparent wall contact line receded and the liquid film at the wall dried out. 
However, no visualization of the contact line was reported, and the authors suggest performing more experimental and theoretical investigations with smaller test cases to understand the contact line dynamics. The current work presents the experimental characterization of dynamic wetting of liquid nitrogen (LN2) in cryogenic conditions. We measured the evolution of the dynamic contact angle and the gas-liquid interface within a wide range of contact-line velocity and acceleration and analyzed the relative impact of inertia, capillary and viscous forces. The experiments were carried out on a U-shaped quartz tube in which liquid oscillations were produced by releasing the liquid column from an initially unbalanced configuration. The wetting dynamics in this configuration is similar to what is observed in forced liquid plug flows [20; 21; 22] and has also been investigated by Weisslogel [23], Dollet _et al._[24], and Fiorini _et al._[25]. More specifically, Weisslogel [23] considered silicone oil in microgravity conditions with different surface coatings on the two sides to produce a capillary-driven flow, while Dollet _et al._[24]'s experiments were carried out in normal gravity conditions using pure water and ethanol to analyze the impact of wetting hysteresis on the oscillating dynamics. Fiorini _et al._[25] focused on the impact of inertia in the dynamic wetting of HFE7200, a well-known cryogenic model fluid, and demineralized water. The rest of the paper is organized as follows. Section II analyzes a simple engineering model of the interface dynamics along with the relevant dimensionless numbers and scaling considerations. Section III describes the cryogenic facility where the experiments were conducted, along with the experimental procedure, interface tracking, and contact angle measurement. Section IV presents the results for various initial heights and a discussion of the relative importance of the forces governing the flow in this configuration. Conclusions and outlooks for future work are collected in section V. ## II Modeling and scaling considerations A schematic of the U-tube test case is provided in Figure 1 along with the main dimensions and relevant definitions. The U-tube is made of transparent quartz. It has a constant (internal) radius \(R=3.5\) mm and is filled with liquid nitrogen to have a liquid column of axial length \(L=99\pm 2\) mm. A meniscus is formed on each side of the tube. We denote as \(h(r,t)\) the interface height with respect to the equilibrium position (i.e. \(h\to 0\) as \(t\rightarrow\infty\)), with \(r\) the radial coordinate. We denote with \(\bar{h}(t)\) the average column height defined as \[\bar{h}\left(t\right)=\frac{1}{\pi R^{2}}\int_{0}^{R}2\pi\ h\left(r,t\right)rdr\,. \tag{1}\] For later convenience, we define as \(\xi(r,t)\) the interface position with respect to \(h(0,t)\), i.e. \(\xi(r,t)=h(r,t)-h(0,t)\) at each time step (see Figure 1). The liquid properties of interest are the liquid dynamic viscosity \(\mu\) and density \(\rho\) and the gas-liquid interface's surface tension \(\sigma\). The experiments begin by releasing the liquid column from an initial height of \(\overline{h}(0)=\pm 18.4\pm 1.5\)mm. This is achieved by initially pressurizing one side of the tube.
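As a small illustration of definition (1), the sketch below evaluates the average column height from a discretized interface profile by numerical quadrature; the sample profile is invented for the example and does not correspond to measured data.

```
import numpy as np

def average_height(r, h, R):
    """Average interface height, equation (1): (1/(pi R^2)) int_0^R 2*pi*h(r)*r dr."""
    return np.trapz(2.0 * np.pi * h * r, r) / (np.pi * R**2)

# usage with an invented, roughly parabolic profile (not measured data)
R = 3.5e-3                              # tube internal radius [m]
r = np.linspace(0.0, R, 200)            # radial positions [m]
h = 1.0e-3 + 0.2e-3 * (r / R) ** 2      # illustrative interface height [m]
print(average_height(r, h, R))
```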
We consider the modeling of the problem from two different length scales: (1) the interface scale, which controls the interface dynamics and is concerned with forces acting close to it, and (2) a tube scale, which controls the liquid column's dynamics and is concerned with forces acting all along the tube. We assume that scale (1) controls the shape of the interface, which in turn plays a role in (2) through the capillary pressure drop due to the interface curvature. The treatment of scale (2) can be made in terms of an integral balance of forces in the liquid column. The integral balance for the column gives \[\ddot{\bar{h}}(t)=\underbrace{-8C_{f}\frac{\mu}{\rho R^{2}}\dot{\bar{h}}(t)}_{\text{viscous resistance}}-\underbrace{\frac{2g}{L}\ \bar{h}(t)}_{\text{gravity}}-\underbrace{\frac{2\sigma}{\rho R^{2}L}\left(K_{A}(t)-K_{B}(t)\right)}_{\text{capillary resistance}}, \tag{2}\] where a parabolic velocity profile is assumed in the stream-wise direction when computing the wall shear and \(C_{f}\) is an empirical term to correct this assumption, to handle the loss of parabolicity due to inertia and the viscous losses due to the tube's curvature [24]. The dot denotes differentiation in time. The terms \(K_{A}\) and \(K_{B}\) are linked to the pressure drop produced at the two interfaces (distinguished with A and B) and account for the interface curvature. Under the assumption that capillary forces dominate over elongational viscosity, this term can be written as \[K(t)=\int_{0}^{R}\nabla\cdot\mathbf{n}(\xi(r,t))rdr \tag{3}\] where \(\mathbf{n}\) is the normal vector to the interface (see Figure 1), and \(\nabla\cdot()\) is the divergence operator. For capillary tubes, and particularly at the limit \(R/l_{c}\ll 1\), with \(l_{c}=\sqrt{\sigma/(\rho g)}\) the capillary length, or at the limit of Capillary number \(Ca=\mu u_{c}/\sigma\ll 1\), with \(u_{c}\) the contact-line velocity, the interface shape is a spherical cap and thus one recovers \(K=R\cos(\theta_{D})\) with \(\theta_{D}\) the (dynamic) contact angle at the wall. If one of the two sides has a flat interface (non-wetting conditions) or if one considers a straight tube plunged into a bath, equation (2) is essentially a variant of Lucas-Washburn's equation [26; 27; 28; 29], whose steady state solution is the well-known Jurin's law [30]. In quasi-capillary tubes (i.e. \(R/l_{c}>1\)), in case of \(Ca>10^{-3}\), or in the presence of large accelerations, the meniscus shape departs from that of a spherical cap. In the modeling framework considered in this work, this results from the interface scale. Figure 1: Schematic of U-tube with dimensions in millimeters. The relevant experimental variables are also shown on the right side of the tube and in the close-up view of the gas-liquid interface. ### Interface modeling The empirical model of the meniscus interface proposed by Fiorini _et al._[25] computes the meniscus profile \(\xi(r,t)\) as the solution of the following boundary value problem: \[\begin{cases}\nabla\cdot\mathbf{n}+l_{c}^{-2}\,\xi(r,t)-3\frac{Ca}{(R-r)}F(\delta)+\frac{H_{a}(r,t)}{\sigma}=0\\ \partial_{r}\xi(R,t)=\mathrm{ctg}(\theta(t))\\ \partial_{r}\xi(0,t)=0\end{cases} \tag{4}\] where \(F(\delta)\) and \(H_{a}(r,t)\) are correcting factors for the viscous and the inertial contribution. The first was proposed by Delon _et al._[31] depending on \(\delta=\mathrm{ctg}(\partial_{r}h(r,t))\), with \(\partial_{r}\) the derivative along the radial direction and \(\mathrm{ctg}()\) the cotangent.
This term is \[F(\delta)=\frac{2}{3}\frac{\tan\delta\sin^{2}(\delta)}{\delta-\cos(\delta) \sin(\delta)}\,. \tag{5}\] The second was proposed by Fiorini _et al._[25] and reads \[H_{a}(r,t)=\rho a_{i}(t)l_{h}\big{(}1-e^{-\frac{r-R}{l_{i}}}\big{)}\,, \tag{6}\] with \(a_{i}(t)\) the instantaneous interface acceleration, \(l_{h}=Rc_{t}\) a characteristic length defined by the model parameter \(c_{t}\), and \(l_{i}(t)\) a model parameter controlling how the inertial forces decay towards the walls. The first correction accounts for the viscous dissipation near the contact line [14], as the flow profile must comply with the no slip condition while still allowing for a moving contact line. The second correction accounts for the flow inertia, as the velocity profile far from the interface must adapt to the interface dynamics. The solution to (4) provides the interface shape \(\xi(r,t)\) from which the term \(K(t)\) can be computed in (3) and inserted in (2). At the limit \(l_{c}/R\sim 1\), \(Ca\to 0\) and \(a_{i}\to 0\), equation (4) reduces to a spherical cap and the simplified theory with \(K(t)=R\cos(\theta_{D}(t))\) is recovered. An alternative formulation, when the focus is placed on the modeling of the liquid column at the tube scale, is to introduce an equivalent contact angle \(\theta_{D,m}(t)\) such that \[K(t)=R\cos\theta_{D,m}(t)\,. \tag{7}\] The difference between the equivalent \(\theta_{D,m}(t)\) and the actual \(\theta_{D}(t)\) measures the discrepancy between the assumption of a spherical interface and the true interface. In this work, the actual contact angles were fitted to a modified Voinov-Tanner law [16], in which two unsteady terms are introduced to account for the history of the contact line and the contact-line acceleration \(a_{cl}(t)\). Defining as \(a_{cl}^{*}(t)=a_{cl}(t)/g\) the dimensionless acceleration, and \(\theta=\theta_{D}^{3}-\theta_{S}^{3}\), with \(\theta_{S}\) the static contact angle, the dynamic contact angle satisfies \[\theta(t)+\alpha\dot{\theta}(t)=\beta_{1}Ca(t)+\beta_{2}a_{cl}^{*}(t) \tag{8}\] where \((\alpha,\beta_{1},\beta_{2})\) are empirical coefficients to be calibrated on the experimental data and the dot notation is used for time derivatives. Equation (8) has the following analytical solution \[\theta(t)=\frac{1}{\alpha}e^{-t/\alpha}\Big{(}\!\int_{0}^{t}(\beta_{1}Ca(t^{ \prime})+\beta_{2}a_{cl}^{*}(t^{\prime}))e^{t^{\prime}/\alpha}dt^{\prime}\Big{)}\,. \tag{9}\] This approach is a simplified version of the one proposed by Bian _et al._[32] and attempts to extend the traditional relationship between the contact angle and contact line kinematics to represent the data of this experiment. A much simpler model, which proved much more successful to describe the results of our experiments, is a simple linear trend of the form \[\theta_{D}(t)=\beta_{3}Ca(t)+\theta_{S}\,, \tag{10}\] where the term \(\beta_{3}\) can be seen has the inverse of Hocking contact line mobility coefficient [33] which needs to be calibrated with the data. Similar linear relationship has been observed also by Xia and Steen [34] for the case of sessile droplets on a oscillating support. The linear model is also known as Davis-Hocking correlation [35; 36]. ### Scaling considerations The dimensionless form of equation (2) allows for analyzing the scaling laws of the different terms, and thus to position the results of this work in the literature of similar experiments. Moreover, the scaling analysis allows to evaluate the similarity with different fluids. 
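As a quick illustration of such a similarity assessment, the sketch below computes the capillary length and the dimensionless groups used in the following subsection (the Ohnesorge number based on \(l_{c}\), \(R^{*}\), and \(L^{*}\)) from a set of fluid properties; the liquid-nitrogen values are those of Table 2 and serve only as an example input.

```
import numpy as np

def similarity_groups(rho, mu, sigma, R, L, g=9.81):
    """Capillary length and the dimensionless groups of the scaling analysis:
    the capillary-length-based Ohnesorge number, R* = R/l_c, and L* = L/l_c."""
    l_c = np.sqrt(sigma / (rho * g))
    Oh_cl = mu * (g / (rho * sigma**3)) ** 0.25
    return {"l_c": l_c, "Oh_cl": Oh_cl, "R*": R / l_c, "L*": L / l_c}

# liquid nitrogen properties of Table 2, used here only as an example input
print(similarity_groups(rho=812.0, mu=0.176e-3, sigma=9.23e-3, R=3.5e-3, L=99e-3))
```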
Denoting as \(x^{*}=x/[x]\) the dimensionless scaling of a variable \(x\) with respect to a reference \([x]\) and taking \([l]=l_{c}\), \([t]=(l_{c}/g)^{1/2}\), \([u]=(l_{c}g)^{1/2}\) and \([a]=g\) the reference length, time, velocity and acceleration, equation (2) becomes \[\ddot{\bar{h}}^{*}(t^{*})=-8C_{f}\frac{Oh_{cl}}{R^{*2}}\dot{\bar{h}}^{*}(t^{*})-\frac{2}{L^{*}}\bar{h}^{*}(t^{*})-\frac{2}{R^{*2}L^{*}}\left(K_{A}^{*}(t^{*})-K_{B}^{*}(t^{*})\right)\,, \tag{11}\] where \(R^{*}=R/l_{c}\) and \(L^{*}=L/l_{c}\) and \(Oh_{cl}=\mu(g/\rho\sigma^{3})^{1/4}\) is the Ohnesorge number based on the capillary length. This number solely depends on fluid properties and controls the viscous damping of interface oscillations. Schmitt and Dreyer [10] obtained a similar dimensionless equation using the radius of a cylindrical cell as characteristic length and a radius-dependent Ohnesorge number. Table 1 collects the values of \(Oh_{cl}\), \(R^{*}\), \(L^{*}\) and the initial height \(\bar{h}^{*}(0)\) for the experiments considered in this work together with experiments from Fiorini _et al._[25] and Dollet _et al._[24] as well as possible experiments using liquid hydrogen at \(T=20\)K, liquid oxygen at \(T=90\)K and liquid methane at \(T=112\)K, using the fluid properties reported in Dreyer [37]. Interestingly, the similarity between liquid nitrogen and liquid oxygen is excellent. HFE7200 is in an acceptable similarity with nitrogen and oxygen but less with liquid hydrogen or methane. In the case of water, the experiments of Dollet _et al._[24] show comparable values of \(R^{*}\) and \(Oh_{cl}\) but not for \(L^{*}\). However, for untreated glass surfaces, stick-slip contact-line motion has been reported both by Dollet _et al._[24] and Fiorini _et al._[25], and the set of dimensionless numbers here presented might not provide a complete picture of the experiment. On the other hand, the experiment with ethanol [24] has dynamics similar to liquid nitrogen and HFE7200 [25]. However, the higher \(Oh_{cl}\) and \(R^{*}\) lead respectively to a higher damping due to viscous dissipation and a higher impact of the fluid inertia. ## III Methodology ### U-tube test case and Cryostat Facility The experiments were carried out at the cryostat facility of the von Karman Institute. A schematic of the facility and the connections between its components is shown in Figure 2. The U-tube sides are labelled A and B. The line connected to side A is controlled via valves V14 and V13, an air filter (AF), and the buffer tank (BF1), where the pressure is set using the pressure regulator PR11 and the valves V11 and V22. This is the gas feeding line, which controls the initial pressurization in the experiment, and is connected to gas bottles. Side B is connected to the gas-discharge line via valve V15 or via valve V17. The first discharges to the atmosphere while the second is connected to a vacuum pump VP. A fast-response cryogenic ball valve (Triad series 60C, V16 in Figure 2) controls the connection between the two sides, allowing for separating the ullage gas on the two sides once the curved side of the tube is filled with the test liquid. Safety valves SV11 and SV12 are placed on each line. The cryostat consists of annular volumes with the U-tube at the center in the sample space (white area in Figure 2). The reservoir of the cryostat is filled with liquid nitrogen, which flows through a serpentine heat exchanger and undergoes phase change by throttling (through valve TV).
The vaporization of liquid nitrogen cools the heat exchanger contact gas block which then cools the cryostat's sample space. The nitrogen vapor is vented to the ambient atmosphere depending on the cooling rate needed. The throttle valve TV and a nitrogen exhaust valve EV allow controlling the vapor pressure and thus the cooling power of the cryostat. The pressure in the heat exchanger is monitored with the Kulite CTL-190 pressure transducer PT-0024. The cryostat is cooled \begin{table} \begin{tabular}{c c c} \hline density (\(\rho\)) & \(812\pm 2\) kg/m\({}^{3}\) \\ dynamic viscosity (\(\mu\)) & \(0.176\pm 0.002\)mPa \(\cdot\) s \\ surface tension (\(\sigma\)) & \(9.23\pm 0.15\)mN/m \\ static contact angle (\(\theta_{S}\)) & 0deg \\ tube radius (\(R\)) & 3.5mm \\ liquid column length (\(L\)) & \(99\pm 2\)mm \\ \end{tabular} \end{table} Table 2: Fluids and U-tube physical properties. \begin{table} \begin{tabular}{c c c c c} \hline & \(Oh_{cl}\cdot 10^{3}\) & \(R^{*}\) & \(L^{*}\) & \(\hbar(0)^{*}\) \\ \hline This work (LN2 at & 1.9 & 3.3 & 94.3 & 18.9 \\ \(\approx 77K\)) & & & & \\ HFE7200 in [25] & 4.9 & 4.1 & 80 & 10.2 \\ Water in [25] & 2.3 & 1.5 & 29.4 & 3.7 \\ Water in [24] & 2.3 & 3.0 & \(39.1-53.9\) & 3.7 \\ Ethanol in [24] & 7 & 4.9 & 86.9 & 5.9 \\ Liquid Hydrogen at \(20K\) & 0.84 & 2.1 & 59.2 & 11.8 \\ Liquid Methane at \(112K\) & 1.1 & 1.9 & 54.6 & 10.9 \\ Liquid Oxygen \(90K\) & 1.5 & 3.3 & 93.5 & 18.7 \\ \end{tabular} \end{table} Table 1: Comparison of the non-dimensional terms in equation 11 across different experiments. The properties of the cryogenic fluids are assumed as in Dreyer [37] with \(R,L\), and \(\bar{h}(0)\) values respectively 3.5, 100 and 20 mm as in this work. Figure 2: Schematic of cryostat facility. Pressure sensors are indicated with PT and Pressure Indicators with PI. Temperature sensors are indicated with TT and level transducers with LT. constantly as long as the nitrogen is replenished. The steady-state is obtained by matching the cooling power from the nitrogen with the heat loss. Figure 3 shows a picture of the channel distributor that connects the U-tube to the cryostat. The distributor connects to the cryostat through Inox 316L plate, 8 Inox 316L 3.5\(\varnothing\) mm screws and spring-energized O-rings (Fluolionion 01 Virgin PTFE). Figure 2 shows both the distributor and the U-tube positions in the cryostat's sample space with the Inox 316L plate in contact with the heat exchanger. The distributor has two \(\sfrac{1}{4}\)" male Swagelok VCR fitting brazed to its side to connect with an external gas input and output line. A third \(\sfrac{1}{8}\)" VCR allows connecting with an internal channel that transfers the liquid nitrogen from the nitrogen reservoir through the distributor and inside the U-tube. We use the buffer volume BF1 outside the cryostat to achieve fine control of the pressure on the input gas-line. A second buffer volume, labeled as BF2, is positioned inside the cryostat connected to side B of the U-tube and allows for adjustment of the pressure response of the system to approximate a step response. The entire setup and all fittings were helium leak checked before testing. Before starting the filling procedure, all the volumes are flooded with helium vapor and later pumped down to a pressure of \(10^{-3}\) Pa to remove any traces of ambient air and water vapor inside the cryostat. 
The purging cycle is repeated three times, and a small amount of helium is added to act as a heat exchange gas between the sample space and the heat exchanger. The motion of the liquid column is produced by setting an initial level difference between the two sides, pressurizing line A, then opening valve V16 to produce a step pressure reduction. The liquid oscillation lasts several seconds before the equilibrium position is recovered. The initial pressurization was achieved using pure Nitrogen gas bottles filling tube A with V16 closed. The input gas line starts from the gas cylinder visible in Figure 2. The pressure is set to a few hundred Pascals in the outer buffer tank BF1, where pressure is monitored via Pressure Transducer PT-0011. The air filter AF removes impurities in the buffer, and the gas is released in the test cell via valve V13, while V14 is used to regulate flow. The input gas line cools down as it flows through the entire cryostat liquid nitrogen reservoir before reaching the sample space. We added flexibles to each side of the U-tube to increase the heat exchange of the input gas with the sample room and achieve similar conditions with the gas already present in the channel distributor. During the experiments, we monitored the gas temperature on side A, and we observed a temperature variation \(\Delta T<0.25K\) upon the pressurization. Figure 2 shows the three temperature sensors connected to the test cell. There are: (1) a Lakeshore silicon diode DT-670 (TT-0011) mounted on the copper distributor of the test cell by use of SWAGELOCK connectors, (2) a second Lakeshore DT-670 (TT-0013) in contact with the external bottom part of the tube via cryogenic glue and (3) a Lakeshore RTD Cernox CX-1050-AA-HT-1.4L (TT-0012) suspended in the helium exchange gas close to the test cell. The temperature sensors are connected to a Lakeshore Model 218 temperature controller for data logging and processing. The corresponding calibration curves for temperature sensors are built into the temperature controller. The pressure in the test cell is monitored by three pressure sensors. The pressure in the test cell is monitored using three pressure sensors, also shown in figure 2. These are two miniature ruggedized pressure sensors Kulite CTL-190 (PT-002x and PT-0012) and an AMS 5812-0150-D pressure sensor (PT-0013). The sensor PT-002x is connected to side A with the reference port connected to side B (differential configuration) while sensor PT-0012 is connected to the input gas line. Both PT-002x and PT-0012 signals are conditioned through MICRO ANALOG 2 - FEMM4 module. The sensor PT-0013 is connected to the gas venting port close to the discharge valve. This sensor was calibrated using a standard Druck pressure calibrator and used to adjust the manufacturer calibration for PT-0012 and PT-002x at the temperatures encountered in our experiment. The recalibration of these sensors was performed with the sensors mounted at the respective locations and increasing the pressure in the U-tube using the gas-feeding line A. In the case of PT-002x, the calibration requires that the test cell is partially filled with the test liquid. The calibration offset is acquired with the gas-liquid interfaces resting at the same height, while the sensor sensitivity is acquired by slowly increasing the pressure on side A of the tube with V16 closed. The output voltage is acquired when stable conditions of the interface are achieved and compared with the readings from PT-0012 and PT-0013. 
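As an aside, the recalibration described above amounts to a linear regression of the sensor output against the reference pressure; a minimal sketch is given below, with placeholder voltage/pressure pairs standing in for the recorded readings.

```
import numpy as np

# placeholder calibration points: reference pressure (Pa) from the Druck-calibrated
# sensor and output voltage (V) of the sensor being recalibrated (invented numbers)
p_ref = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
v_out = np.array([0.012, 0.071, 0.129, 0.188, 0.247, 0.305])

# linear model v = sensitivity * p + offset, fitted by least squares
sensitivity, offset = np.polyfit(p_ref, v_out, deg=1)

def voltage_to_pressure(v):
    """Convert a voltage reading back to pressure with the fitted coefficients."""
    return (v - offset) / sensitivity

print(f"sensitivity = {sensitivity:.3e} V/Pa, offset = {offset:.3e} V")
```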
Figure 3: Picture of the U-tube and the channel distributor assembly, showing the U-tube together with some of the components shown in the schematic of Figure 2. ### Experimental procedure At equilibrium starting conditions, the liquid interfaces on the two sides of the U-tube have the same height. The gas volumes of sides A and B are isolated by closing the cryogenic valve V16. The initial conditions are imposed by moving the interfaces out-of-equilibrium pressurizing side A, setting a level difference between the two interfaces, and ensuring their position is stable (zero initial interface velocity). The initial over-pressure level is controlled in the external buffer tank by the pressure regulators V14 and V13 (see Figure 2). The interfaces remain stable at the selected height for a few seconds, during which opening valve V16 triggers the experiment. If V16 remains closed, condensation of the introduced gas nitrogen begins on side A, slowly reducing the overpressure and moving the two interfaces back to equilibrium. After the pressurization phase (max 300 Pa), the temperature of the tillage gas corresponds to the value in saturated conditions at the operating pressure of the tube. In this work, we performed experiments with the gas temperature in the range \(74.2-74.8K\), where differences between experiments are due to the high sensitivity of the cryostat to the environmental conditions, the amount of liquid nitrogen remaining in its reservoir and the number of experiments performed. After opening valve V16, the liquid column oscillates freely around the equilibrium position. For each experiment, we record the motion of one of the two interfaces to maximize the interface resolution. We use a high-speed camera (model JAY SP-12000-CXP4), acquiring grey-scale images at 300 fps. The interface shape is obtained by casting an image of the shadow of the meniscus on the camera using diffused light source on the opposite side of the tube. Two optical access of \(75\varnothing\) mm give access to the sample space of the cryostat facility. The active region of the camera is restricted to the central region of size 4096x768 pixels to achieve the highest acquisition frequency allowed by the camera. The camera mounts a 105mm lens and it is positioned to acquire the motion of the interface spanning the full tube length. ### Interface tracking and contact angle measurement The interface tracking was carried out via image processing combining edge detection with correction for optical distortion and regression of the interface model in (4) on the detected interface position. The methodology is extensively described in Fiorini _et al._[25], to which the reader is referred for more details. The regression consists in identifying the model parameters \(l_{h},l_{i}\) and \(\theta\) in (4) in order to allow for robust computation of the contact angle. The regression was solved using the Nelder-Mead algorithm [38], implemented in the Python library scipy.optimize [39]. The same tool is used for the regression of the dynamic contact angle correlation (8) with the experimental dynamic contact angle data. The fitting requires the accurate determination of the liquid-wall interface location. The images should be well aligned with the tube's wall since a small misalignment can result in significant errors in the contact angle measurement [40]. 
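The regression step described above can be sketched as follows with scipy's Nelder-Mead implementation; `solve_interface_model` is a placeholder standing in for the actual solver of the boundary value problem (4), and the initial guess values are illustrative.

```
import numpy as np
from scipy.optimize import minimize

def solve_interface_model(params, r, R):
    """Stand-in for the solver of the boundary value problem (4): returns xi(r)
    for params = (l_h, l_i, theta). A real implementation would integrate (4);
    here an exponential profile matching the wall slope ctg(theta) is used,
    and l_h is not used by this placeholder."""
    l_h, l_i, theta = params
    li = max(abs(l_i), 1e-9)
    return li / np.tan(max(theta, 1e-3)) * np.exp(-(R - r) / li)

def misfit(params, r_data, xi_data, R):
    """Sum of squared distances between model and detected interface points."""
    return np.sum((solve_interface_model(params, r_data, R) - xi_data) ** 2)

def fit_interface(r_data, xi_data, R, guess=(1e-4, 5e-4, np.radians(5.0))):
    """Nelder-Mead regression of (l_h, l_i, theta); returns theta in degrees."""
    res = minimize(misfit, x0=np.array(guess), args=(r_data, xi_data, R),
                   method="Nelder-Mead")
    l_h, l_i, theta = res.x
    return l_h, l_i, np.degrees(theta)
```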
The image pixel size was measured by taking the tube's external diameter as a reference is \(15\pm 0.5\mu\)m as measured by averaging from 10 images and 5 randomly chosen locations. The image-based interface detection provides the first points at a distance of about \(15-30\mu\)m from the wall. The regression is carried out using the available points, and \(\bar{h}(t)\) is obtained using Equation 1. An important aspect, however, is that the boundary value problem in (4) requires the contact-line velocity (through the Capillary number) as an input to compute the interface profile. This computation could be carried out iteratively, starting from the mean interface velocity as a guess and then adjusting from the interface solution. However, such an approach is particularly sensitive at the extremely low contact angle considered in this work because small variations of \(\theta_{D}\) produce large variations of the contact-line position \(h(R,t)=h_{CL}(t)\). In the literature [41; 42; 43], these difficulties have been circumvented by imposing the Capillary number from the average relative velocity between interface and wall. A similar approach has been followed by Dollet _et al._[24] in the same U-tube configuration considered in this work: these authors take the average interface velocity as the contact-line velocity. In recent works on oscillating droplets [34; 44], the contact-line position was identified from image analysis, and the contact-line velocity computed via time differentiation, while authors working on static sessile droplets with HFE7100[45; 46] circumvented the problem by engraving a pinning location on the substrate. Our work combines the challenge of a time-varying contact-line velocity with near zero contact angle, and we seek to separate the oscillations of the average interface motion from the local motion of the contact line. We thus compute the Capillary number required in equation(4) from an approximation of the interface at a 'larger scale' than the interface model provides. This is obtained by linearly extrapolating the interface prediction from a distance of \(50\mu\)m from the wall. An example of interface tracking, model regression, and local extrapolation is shown in Figure 4, along with the uncertainty calculation carried out via the Monte Carlo approach presented in Fiorini _et al._[25]. The portion on the left shows a 'large scale' view of the interface regression, which closely matches the image-based detection of the interface. The portion on the right shows a zoom near the wall, along with the linear extrapolation carried out for \(r<R-50\mu\)m and the contact line position \(\tilde{h}_{CL}\) at its intersection with the wall. The interface model generally predicts a much higher interface location, i.e. \(h_{CL}>\tilde{h}_{CL}\), thus a smaller contact angle. However, the computation of the contact line velocity by time differ entiation of \(h_{CL}(t)\) is too sensitive to the small (\(\pm 0.5^{o}\)) uncertainties in the contact angle computation. Therefore, in what follows, we compute the contact-line velocity (and thus the Capillary number required in (4)) by time differentiation of \(\tilde{h}_{CL}\). This is used to solve the regression of the interface model in (4) and the resulting interface was used to compute the dynamic contact angle \(\theta_{D}\). ## IV Results and Discussions We present and discuss the contact angle and interface measurements in section IV.1. 
Section IV.2 focuses on the relation between contact angle and interface dynamics by introducing an equivalent macroscopic contact angle. Finally, section IV.3 closes with a note on the relative importance of the forces driving the investigated configuration. ### Contact Angle and Interface Dynamics We consider two types of experiments to characterize the interface dynamics on both sides of the tube using only one camera. These are denoted as 'advancing' and'receding' experiments and differ in the initial condition. In the 'advancing', the interface starts with \(\overline{h}(0)=-18.4\pm 1.5\)mm. Thus the interface evolves along a dry surface during the first rise (when it is in advancing conditions) while it evolves over a pre-wet surface afterward. On the contrary, in the'receding' experiments, the interface starts with \(\overline{h}(0)=18.4\pm 1.5\)mm. Thus the interface recedes in the first descent and advances on a pre-wet surface afterward. Both types of experiments start with a still interface and with the release of the overpressure on side A of the tube (see section III). The two experiments are dynamically identical and should provide the same column oscillation up to a sign change. The videos in each experiment are processed to provide the history of the average interface height \(\bar{h}(t)\), the evolution of the interface shape \(h(r,t)\) and the contact angle evolution \(\theta_{D}(t)\). All experiments are repeated three times to assess repeatability. We summarize the main results for both tests in Figure 5. The figures on the left column refers to the advancing experiment while the figures on the right refers to the receding experiment. Figures 5(a) and 5(b) show five snapshots of the video recording for each set of conditions, cropped near the interface. From a visual inspection and from a close zoom in the images, no film is visible in the receding phase. Yet, as we shall see, the dynamics of the contact angle is different when evolving on a dry or a pre-wet surface. Figures 5(c)-5(d) show the interface detection for the same snapshots. For plotting purposes, the curves are shifted to have zero spatial average. In all tests the interface remains symmetric during the entire experiment but is far from a spherical. Figures 5(e)-5(f) show the history of the average interface \(\bar{h}(t)\) for the three runs carried out for each kind of experiment. Only two lines are distinguishable because two of the tests lead to identical curves; these are nevertheless kept because the contact angle measurements in all runs were considered for the regression of the contact angle correlations. In these plots, the dashed line shows the exponential envelope of the oscillation maxima and minima. This turns out to be \(\propto e^{-\lambda t}\), with the decay rate \(\lambda=1.6\) s. The excellent agreement between the 'advancing' and'receding' experiments is further proof of the measurement repeatability. Moreover, the exponential envelope of the oscillation shows that the dynamics of the liquid column in the U-tube, as described by (2), is well approximated by a linear second-order system. This was also the case of the experiments presented by Fiorini _et al._[25] for HFE7200 and water. The last set of plots (g and h) shows the contact angle evolution against the Capillary number. In both plots, the dashed line shows the best prediction for model (8), with the coefficients \(\alpha,\beta_{1}\), and \(\beta_{2}\) identified via optimization on the three sets of test cases. 
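For completeness, the sketch below shows how the prediction of model (8) can be evaluated through its analytical solution (9) for given coefficients and histories of \(Ca(t)\) and \(a_{cl}^{*}(t)\); the input series and coefficients are placeholders, while in the actual procedure \((\alpha,\beta_{1},\beta_{2})\) are adjusted to minimize the mismatch with the measured contact angles.

```
import numpy as np
from scipy.integrate import cumulative_trapezoid

def theta_from_model(t, Ca, a_star, alpha, beta1, beta2, theta_s=0.0):
    """Evaluate equation (9) (theta_D^3 - theta_S^3) from the histories of the
    Capillary number Ca(t) and the dimensionless acceleration a*_cl(t), then
    return the dynamic contact angle theta_D in radians."""
    forcing = (beta1 * Ca + beta2 * a_star) * np.exp(t / alpha)
    integral = cumulative_trapezoid(forcing, t, initial=0.0)
    theta_cubed_excess = np.exp(-t / alpha) * integral / alpha
    return np.cbrt(theta_cubed_excess + theta_s**3)

# placeholder histories and coefficients (not fitted values)
t = np.linspace(0.0, 2.0, 400)
Ca = 2e-3 * np.exp(-1.6 * t) * np.cos(2.0 * np.pi * 3.0 * t)
a_star = 0.5 * np.exp(-1.6 * t) * np.sin(2.0 * np.pi * 3.0 * t)
theta_deg = np.degrees(theta_from_model(t, Ca, a_star, alpha=0.2, beta1=50.0, beta2=0.1))
```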
The last set of plots (g and h) shows the contact angle evolution against the Capillary number. In both plots, the dashed line shows the best prediction for model (8), with the coefficients \(\alpha,\beta_{1}\), and \(\beta_{2}\) identified via optimization on the three sets of test cases. Different markers are used to distinguish points with \(t<0.2\)s (evolving on a dry surface) from those with \(t>0.2\)s (evolving on a wet surface). The circle around each point provides the 95% confidence interval to account for the measurement uncertainty. This was computed by fitting a bidimensional Gaussian distribution to the velocity and contact angle measurements. The continuous lines in both plots are used for the linear model in (10). In both test cases, the modified Voinov-Tanner law in (8) fails to reproduce the measured dynamics while the linear Davis-Hocking correlation (10) succeeds, at least in advancing conditions (\(Ca>0\)). In these conditions, the 'advancing' experiments highlight the different contact angle evolution in the case of dry or pre-wet surfaces. In both cases, the linear relation holds, but the slope is different. The linear trends in the pre-wet conditions for both series of experiments are in good agreement. In receding conditions (\(Ca<0\)), regardless of the dry or pre-wet surface status, the dynamic contact angle remains much closer to the static one but is less predictable.

Figure 4: The portion on the left shows the results of the interface regression on the data. The portion on the right shows a zoom near the wall, plotting the interface regression together with the data and the linear extrapolation from which the contact-line velocity is defined. The model-based measurement yields \(\theta_{D}=4.16\pm 0.51^{\circ}\).

Figure 5: Summary of the measurements in advancing (figures on the left) versus receding (figures on the right) conditions. Figures (a) and (b) show some selected snapshots. Figures (c) and (d) show the corresponding interface detection, while figures (e) and (f) show the time history of the average interface. Figures (g)-(h) plot the dynamic contact angle versus the Capillary number.

### Equivalent Contact Angle We here analyze the correlation between the evolution of the contact angle and the evolution of the interface dynamics. To this end, we define an equivalent contact angle as the one that would result in the same pressure drop in the case of a spherical interface. That is, this angle is defined as \[\theta_{D,m}(t)=\arccos\frac{K_{exp}(t)}{R}\,, \tag{12}\] where \(K_{exp}\) is evaluated from equation 3 on the interface shape obtained via the regression of the interface model (interface examples in Figures 5c-d). This equivalent contact angle can be seen as the largest possible macroscopic contact angle that produces an equivalent impact on the interface dynamics. If the interface curvature is enslaved to the evolution of the contact angle, one would expect a correlation between \(\theta_{D}(t)\) and \(\theta_{D,m}(t)\). However, this correlation was not found in any of the experiments. Figure 6 shows the evolution of \(\theta_{D,m}(t)\) versus the measured \(\theta_{D}\) (reported in Figures 5g and 5h) for both the advancing (left) and the receding (right) experiments. The marker colors are linked to the time \(t\), which allows following the trajectories of the points in this plane; these figures should be analyzed together with the interface oscillation plots in Figures 5e and 5f. These figures show that the equivalent contact angle is much larger than the actual one, as one might expect from the short length scale of the menisci observed in Figure 5.
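Equation (12) is evaluated directly once \(K_{exp}\) is known from the regressed interface shape. The short sketch below performs only that final step; the computation of \(K_{exp}\) itself (the paper's Equation 3) is not reproduced here, so it is passed in as an input array, and the clipping and output units are choices of the example.

```python
import numpy as np

def equivalent_contact_angle(K_exp, R):
    """Equation (12): theta_{D,m}(t) = arccos(K_exp(t) / R).

    K_exp : array of the curvature-based quantity of Eq. 3, one value per time step,
            evaluated on the regressed interface shape (not computed here).
    R     : tube radius, in units consistent with K_exp.
    Returns the equivalent contact angle in degrees.
    """
    ratio = np.clip(K_exp / R, -1.0, 1.0)    # guard against round-off leaving [-1, 1]
    return np.degrees(np.arccos(ratio))
```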
In static conditions, the equivalent contact angle is \(\theta_{S,m}\approx 32^{\circ}\) while the maximum advancing and minimal receding values correspond respectively to \(\theta_{A,m}=40^{\circ}\) and \(\theta_{R,m}=29^{\circ}\). For the purposes of this work, we note that no clear relation appears between the two angles, neither in receding nor in advancing conditions: as these quantities evolve independently, we conclude that the interface shape is mostly governed by forces acting far from the wall, where the influence of the contact angle appears to be negligible. This observation is of course only valid for the specific conditions analyzed in this work, and one might argue that this is due to the large tube radius \(R^{*}=R/l_{c}\) in relation to the liquid's capillary length. We address this concern in the next section. ### A note on the force balance We are interested in how the relative contribution of the four terms in (11) changes as a function of \(R^{*}\) and \(L^{*}\). To this end, we solve this equation for a wide range of \(R^{*}\) and \(L^{*}\). First, however, the solution of this equation requires a formulation for the capillary term and for the unknown coefficient \(C_{f}\). As we are solely interested in the orders of magnitude, we simplify the treatment of the capillary term and replace \((K_{A}^{*}(t^{*})-K_{B}^{*}(t^{*}))\) with \(R^{*}(\cos(\theta_{R,m}(t^{*}))-\cos(\theta_{A,m}(t^{*})))\,\mathrm{sign}(\dot{h}^{*}(t^{*}))\), as done also in a similar work by Dollet _et al._[24]. This shifts the problem of providing an interface shape to that of providing an equivalent contact angle law. We consider the simplest possible one: a constant value for both the advancing and the receding contact angles. We take \(\theta_{A,m}=40^{\circ}\) and \(\theta_{R,m}=29^{\circ}\), that is, the largest and the smallest values observed in Figure 6. This undoubtedly overestimates the capillary forces' role in the liquid column's dynamics. Concerning the term \(C_{f}\), we fit it to the data using an optimization problem and obtained \(C_{f}=25\). This value is similar to the one obtained by Dollet _et al._[24], who considered a similar geometry. Figure 7a compares experimental data and model prediction with the abovementioned parameters. The predicted interface position is in good agreement with the experimental one, especially for \(t<1\)s. Figure 7b shows the history of the different terms of equation 11. Despite the overestimation introduced by the oversimplified step-like contact angle law (see zoom in Figure 7b), the capillary term is orders of magnitude lower than the others. The viscous damping is the main contribution to slowing down the interface oscillations. We conclude this note on the force balance by using the previous closure for an extensive range of \(R^{*}\) and \(L^{*}\) while keeping the same oversimplified law for the contact angle. To ensure consistency with the asymptotic limits \(R^{*}\rightarrow\infty\) and \(R^{*}\to 0\), we introduced a smooth step-like function, such that the equivalent contact angle equals the actual ones at \(R^{*}\ll 1\) and \(\approx 90^{\circ}\) at \(R^{*}\gg 1\). Requiring, in addition, that the resulting laws comply with the values observed in our experiments at \(R^{*}=3.5\) provides the following \[\theta_{A/R,m}(R^{*})=\frac{\pi/2}{1+e^{-\zeta_{A/R}(R^{*}-R_{A/R,0}^{*})}} \tag{13}\] where the parameters \(\zeta,R_{0}^{*}\) are provided in Table 3. These were constrained by imposing \(\theta_{A,m}(R^{*}=3.5)=40^{\circ}\), \(\theta_{A,m}(R^{*}<1)\approx 25^{\circ}\) and \(\theta_{A,m}(R^{*}>8)\approx 90^{\circ}\) to identify \(\zeta_{A},R_{A,0}^{*}\), and \(\theta_{R,m}(R^{*}=3.5)=29^{\circ}\), \(\theta_{R,m}(R^{*}<1)\approx 0^{\circ}\) and \(\theta_{R,m}(R^{*}>8)\approx 90^{\circ}\) to identify \(\zeta_{R},R_{R,0}^{*}\).
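The smooth step in Equation (13) is a logistic function of \(R^{*}\). A minimal sketch of this law follows; the numerical values of \(\zeta_{A/R}\) and \(R^{*}_{A/R,0}\) come from the paper's Table 3, which is not reproduced here, so the values in the example call are placeholders only.

```python
import numpy as np

def equivalent_angle_law(R_star, zeta, R_star_0):
    """Equation (13): theta_{A/R,m}(R*) = (pi/2) / (1 + exp(-zeta * (R* - R*_0))).

    Returns the equivalent advancing or receding contact angle in radians;
    zeta and R_star_0 must be taken from Table 3 of the paper.
    """
    return (np.pi / 2.0) / (1.0 + np.exp(-zeta * (R_star - R_star_0)))

# illustrative call with placeholder parameters (NOT the values of Table 3):
# theta_A = equivalent_angle_law(R_star=3.5, zeta=1.0, R_star_0=4.0)
```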
We thus simulate a total of 400 experiments, varying \(R^{*},L^{*}\) in the range \(R^{*}\in[0.4,3.5]\) and \(L^{*}\in[10,280]\), and considering the same dimensionless initial position \(\overline{h}^{*}(0)=20\). For each of these, we define the experiment duration as the time \(t_{x}\) at which we simultaneously have \(|\tilde{h}^{*}(t_{x})|<0.1|\tilde{h}^{*}(0)|\) and \(|\dot{\tilde{h}}^{*}(t_{x})|<0.1(l_{c}g)^{1/2}\). Concerning the role of the viscous dissipation, we keep the same value of \(C_{f}=25\) for all simulations. This might result in the incorrect prediction of the contribution of viscous forces for the smallest tubes. However, considering more complex correlations for the pressure drop in curved tubes (see for example Ghobadi and Muzychka [47]) reveals that the error can be of the order of \(\sqrt{R/R_{2}}\), where \(R_{2}\) is the radius of curvature of the bend. Since the focus here is on the relative order of magnitude of the different forces, this does not significantly change the results on the unexpectedly minor role of surface tension. Figure 8a shows the contribution of the forces over the full range of simulated lengths and radii while Figure 8b focuses on a case with \(L^{*}=94\). Both figures show that viscosity and gravity dominate the balance in the limit of small \(R^{*}\) while the contribution of inertial forces rises linearly with \(R^{*}\) and quadratically with \(L^{*}\). Figure 8b also shows the dimensionless duration of the experiment for each case. A minimum occurs at about \(R^{*}=1\). This is the critically damped condition where the interface reaches equilibrium without oscillation. Oscillations are produced at larger radii while over-damping (and nearly first-order behavior) is observed at lower radii. The capillary contribution remains negligible in the whole investigated range. In other words, as long as these experiments are carried out in normal gravity conditions, the role of surface tension in the dynamics of the U-tube experiments is negligible for any suitable combination of parameters \(R^{*}\) and \(L^{*}\).
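The sweep just described can be organised as a double loop over \(R^{*}\) and \(L^{*}\) with the duration criterion applied to each trajectory. The sketch below shows only this bookkeeping; since the full dimensionless form of Equation (11) is not reproduced in this excerpt, the integrator `solve_column_ode` is an explicitly labelled stand-in (a linear damped oscillator, consistent with the observation above that the column behaves approximately as a linear second-order system), not the paper's model.

```python
import numpy as np

def solve_column_ode(R_star, L_star, h0, Cf, t_end=50.0, n=5000):
    """Stand-in for the integration of Eq. (11): a linear damped oscillator whose
    frequency and damping scalings are illustrative only, not the paper's closure."""
    t = np.linspace(0.0, t_end, n)
    omega = 1.0 / np.sqrt(L_star)            # illustrative frequency scale
    damping = Cf / (R_star * L_star)          # illustrative damping scale
    wd = omega * np.sqrt(max(1.0 - damping**2, 1e-12))
    h = h0 * np.exp(-damping * omega * t) * np.cos(wd * t)
    return t, h, np.gradient(h, t)

def experiment_duration(t, h, h_dot, h0, v_threshold):
    """First time t_x at which |h*| < 0.1|h*(0)| and |dh*/dt| < v_threshold hold together."""
    ok = (np.abs(h) < 0.1 * abs(h0)) & (np.abs(h_dot) < v_threshold)
    return t[np.argmax(ok)] if ok.any() else np.inf

# 20 x 20 = 400 virtual experiments over the ranges quoted in the text
R_grid = np.linspace(0.4, 3.5, 20)
L_grid = np.linspace(10, 280, 20)
durations = np.zeros((R_grid.size, L_grid.size))
for i, R_star in enumerate(R_grid):
    for j, L_star in enumerate(L_grid):
        t, h, h_dot = solve_column_ode(R_star, L_star, h0=20.0, Cf=25.0)
        # v_threshold stands for 0.1*(l_c*g)**0.5 in the paper's criterion (placeholder value here)
        durations[i, j] = experiment_duration(t, h, h_dot, h0=20.0, v_threshold=0.1)
```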
## V Conclusions This work experimentally investigated the dynamics of a moving contact line for liquid nitrogen in a quasi-capillary U-tube (with \(R^{*}=R/l_{c}\approx 3.5\)) in cryogenic conditions. The experimental setup allowed for visualizing the gas-liquid interface during its motion, while image processing techniques and regression with dynamic interface models allowed for accurate dynamic contact angle detection. The contact angle evolution was compared with an unsteady generalization of Tanner-Voinov-Hoffman and the simpler Davis-Hocking linear relationship. The second proved to be valid in advancing conditions in the case of both dry and pre-wet surfaces. This result aligns with previous studies on sessile droplets on vibrating substrates. In receding conditions, the contact angle appeared less correlated with the Capillary number but close to the static contact angle. We analyzed the link between the contact angle and the interface evolution using a macroscopic equivalent contact angle, defined as the angle that would make a spherical interface have the same capillary pressure drop as the actual interface. The equivalent contact angle and the actual contact angle were shown to be uncorrelated, suggesting that the interface motion is independent of the wetting dynamics when \(R^{*}\gg 1\). Finally, we analyzed the force balance governing the motion of the liquid column over a wide range of tube diameters and lengths. The results show that capillary forces play a minor role in the U-tube experiments: inertia dominates in large tubes while viscosity and gravity dominate in the small ones. Future work will consider a modification of the experiments presented here, with the aim of increasing the sensitivity of the interface motion to the capillary pressure. This will allow a better analysis of the impact that dynamic wetting can have on the dynamics of a gas-liquid interface. To this end, experiments will be carried out in microgravity conditions.

Figure 6: Comparison of the equivalent contact angle computed with Equation 7 and the measured dynamic contact angle. On the left, Figure 6a shows the case of Figure 5g, while on the right, Figure 6b shows the case of Figure 5h.

Figure 7: On the left, Figure 7a shows the comparison of the experiment with the interface prediction obtained by solving the 1D-ODE model of Equation 11. On the right, Figure 7b shows the evolution of the dimensionless terms of Equation 11.

Figure 8: Figure 8a shows the square of the \(\ell_{2}\) norm of each force term in the model Equation 11. The plot shows that the capillary pressure drop at the interface has a minor impact also in small-size experiments because of the rising contribution of viscous forces. Figure 8b shows a section of the chart of Figure 8a for \(L^{*}=94\), together with the dimensionless duration of the virtual experiment.

###### Acknowledgements. The authors thank Mathieu Delsipee for his support and contribution in the preparation of the experimental setup. D. Fiorini is supported by Fonds Wetenschappelijk Onderzoek (FWO), Project number 1S96120N, and the work was supported by the ESA Contract No. 4000129315/19/NL/MG.
2307.10334
Mitigating Viewer Impact from Disturbing Imagery using AI Filters: A User-Study
Exposure to disturbing imagery can significantly impact individuals, especially professionals who encounter such content as part of their work. This paper presents a user study, involving 107 participants, predominantly journalists and human rights investigators, that explores the capability of Artificial Intelligence (AI)-based image filters to potentially mitigate the emotional impact of viewing such disturbing content. We tested five different filter styles, both traditional (Blurring and Partial Blurring) and AI-based (Drawing, Colored Drawing, and Painting), and measured their effectiveness in terms of conveying image information while reducing emotional distress. Our findings suggest that the AI-based Drawing style filter demonstrates the best performance, offering a promising solution for reducing negative feelings (-30.38%) while preserving the interpretability of the image (97.19%). Despite the requirement for many professionals to eventually inspect the original images, participants suggested potential strategies for integrating AI filters into their workflow, such as using AI filters as an initial, preparatory step before viewing the original image. Overall, this paper contributes to the development of a more ethically considerate and effective visual environment for professionals routinely engaging with potentially disturbing imagery.
Ioannis Sarridis, Jochen Spangenberg, Olga Papadopoulou, Symeon Papadopoulos
2023-07-19T14:17:22Z
http://arxiv.org/abs/2307.10334v1
# Mitigating Viewer Impact from Disturbing Imagery using AI Filters: A User-Study ###### Abstract Exposure to disturbing imagery can significantly impact individuals, especially professionals who encounter such content as part of their work. This paper presents a user study, involving 107 participants, predominantly journalists and human rights investigators, that explores the capability of Artificial Intelligence (AI)-based image filters to potentially mitigate the emotional impact of viewing such disturbing content. We tested five different filter styles, both traditional (Blurring and Partial Blurring) and AI-based (Drawing, Colored Drawing, and Painting), and measured their effectiveness in terms of conveying image information while reducing emotional distress. Our findings suggest that the AI-based Drawing style filter demonstrates the best performance, offering a promising solution for reducing negative feelings (-30.38%) while preserving the interpretability of the image (97.19%). Despite the requirement for many professionals to eventually inspect the original images, participants suggested potential strategies for integrating AI filters into their workflow, such as using AI filters as an initial, preparatory step before viewing the original image. Overall, this paper contributes to the development of a more ethically considerate and effective visual environment for professionals routinely engaging with potentially disturbing imagery. disturbing content image style transfer journalists human rights investigators mental health artificial intelligence gruesome imagery ## 1 Introduction In the era of digital communication there is an exponential increase in media content, with potentially disturbing and traumatizing images becoming increasingly prevalent [1, 2, 3]. This issue holds particular significance for professions such as journalism and human rights investigation, where interactions with distressing visual content are occupational inevitabilities [1]. Such graphic visuals frequently encapsulate scenes of violence, harm, and suffering, provoking emotions of worry, concern, or anxiety that can lead to secondary or vicarious trauma [4, 5, 6]. For instance, professionals may be required to inspect footage from conflict zones such as the war in Ukraine [7], scenes from natural disasters, or horrific accidents. Thus, it is crucial to develop and employ solutions that can effectively mitigate the viewer's impact from such disturbing imagery. Conventional solutions to this issue have predominantly focused on the application of traditional image filters, such as blurring [8, 9]. However, these traditional filters come with significant drawbacks. If applied too heavily, blurring can render an image virtually unrecognizable, stripping away essential details and making the content impossible to interpret [10]. If not enough distortion is applied, however, disturbing elements are not sufficiently masked, thus failing to mitigate the negative impact on the viewer. This creates a challenging trade-off between the preservation of information and the protection of the viewer. The rapid advancements of Artificial Intelligence (AI) in recent years have enabled its integration into numerous fields with a wide range of applications [11; 12; 13; 14]. Among the areas where AI systems have exhibited notable effectiveness is the neural image style transfer [15; 16; 17; 18], i.e., the process of altering digital images to adopt the appearance or visual style of another image. 
In this paper, we investigate the potential of AI style-transfer filters to mitigate the distressing impact of graphic imagery, thereby addressing the inherent limitations of conventional blurring techniques. To this end, we have adopted three distinct styles/filters, i.e., Drawing, Colored Drawing, and Painting (see Figure 1), and conducted a user study to compare their effectiveness with that of traditional filters. It is important to stress that this study focuses on images containing explicit scenes of violence, injury, etc. It does not aim to detect or address every potentially traumatizing or distressing content due to the diverse range of triggers that different individuals may have. For instance, an image featuring a sorrowful child could potentially evoke distress, yet such images cannot easily be identified as potential distress triggers. The 107 participants of this study (details about study set-up in Section 3) are individuals from professional fields that often entail regular engagement with distressing digital content (e.g., journalists, investigators, etc.). The conducted evaluation is primarily based on two key axes - the intensity of the negative emotional responses triggered while viewing the filtered images and the degree of information retained within these images. The latter is of high importance in the relevant professional contexts where detail identification is essential. The findings of this user study confirmed the potential of AI filters to protect the mental well-being of such professionals. In particular, it was observed that compared to the conventional Blurring filters, the Drawing filter was more effective in reducing the negative emotional impact of viewing distressing images, as evidenced by the lower mean ratings used to measure negative feelings (i.e., -34.14%). In addition, Drawing maintained a significant amount of image detail (97.19%) necessary for various professional purposes, which is not the case for the Blurring filter (6.54%). It is worth noting that the absence of color and the regional consistency of the Drawing filter were the two major advantages compared to the other filters. Furthermore, feedback from participants indicated a broad acknowledgment of the potential utility of AI filters in their professional contexts. They highlighted specific stages in their workflow where such filters could be beneficially incorporated, proposed additional enhancements that could facilitate this integration, and noted potential limitations. The main contributions of this paper are the following: * Exploring the application of AI style transfer filters as a valuable tool for mitigating the emotional impact caused by disturbing digital content, with a focus on professions such as journalism and human rights investigation, where exposure to distressing imagery is a routine occurrence. * A comprehensive user study comparing the effectiveness of AI-based filters against traditional blurring techniques. The results indicate the promising performance of the AI style transfer filters. * Presenting user feedback, detailing potential workflow integration points, potential improvements, and limitations. The remainder of this paper is organized as follows: Section 2 provides an overview of related work, Section 3 presents the methodology followed for the user study conducted, and the results of the performed analysis are detailed in Section 4. 
Finally, we conclude with Section 5, summarizing our findings, outlining the study's limitations, and suggesting directions for future research. ## 2 Related Works **Professions Associated with Exposure to Disturbing Digital Imagery**. The exposure to disturbing user-generated content (UGC) has been recognized as a significant issue across multiple professions, including journalism, human rights investigations, content moderation, and criminal justice, among others. Zeng et al. [2] delve into the ethical responsibilities of news organizations towards journalists processing UGC, emphasizing the risk of secondary trauma and Post-Traumatic Stress Disorder (PTSD) symptoms. Similarly, the authors of a study conducted for Eyewitness Media Hub1 highlight that journalists engaged in the verification and editing of traumatic UGC can suffer'secondary trauma' and symptoms associated with PTSD [1]. Feinsteine et al. [19] and Reid [3] also align with this viewpoint, suggesting that the frequency or duration of exposure to graphic imagery escalates the likelihood of vicarious trauma. These studies recommend protective measures such as staff rotation, peer support, and preemptive hiring warnings. Hill et al. [20] and Baker et al. [21] further emphasize the emotional impact of reporting on traumatic events and reviewing graphic war crime imagery for journalists and human rights investigators, respectively. Both studies highlight the risk of secondary trauma and the need for strategies to mitigate this risk. Pearson et al. [22] provide insight into the harms experienced by online extremism and terrorism researchers due to their exposure to distressing content. Furthermore, psychological traumas on content moderators are highlighted in several studies [23; 24]. Finally, in-depth interviews with human content moderators exposed to child sexual abuse material (CSAM), focusing on the individual and organizational coping strategies, are presented in [25]. In particular, this study highlights the importance of social support, role validation, and work-life separation, revealing a preference for mandatory, specialized therapy. **Mitigation Strategies and Approaches**. Employing image blurring to decrease the exposure of moderators to harmful data is studied in [8]. However, blurring often compromises the conveyance of crucial image information, hampering a moderator's comprehension of the depicted content. Furthermore, the potential of grayscaling and blurring filters to minimize the emotional impact on content moderation workers is explored in [10]. However, similar usability concerns, such as the obscurity of image content and eye strain are highlighted. Consequently, achieving a balance between preserving essential information and protecting viewers remains an ongoing challenge. Figure 1: Examples of AI-based and conventional filters. In addition to the professional contexts, image blurring has been employed by social media platforms such as Instagram2 to protect users from potentially disturbing content (content warning screens) [26; 27]. However, several studies underscore the limitations of this method. A comprehensive analysis [28] related to Instagram's sensitive content screens found their efficacy in deterring users from accessing negative content to be low, even among individuals presenting with mental health issues. An effort of addressing the limitations of content warning screens suggests that providing additional information along with content warnings can reduce user engagement [29]. 
The underlying idea is that being informed about the content of an image can deter users from viewing the original image. Given these insights, the aim of this paper is to contribute to this field by exploring the utilization of AI filters for mitigating the effects of viewing disturbing imagery. By comparing these advanced AI approaches with conventional methods, we aim to deepen our understanding of this field and help devise more effective solutions to protect the mental well-being of individuals professionally required to interact with disturbing digital content. Footnote 2: [https://www.instagram.com/](https://www.instagram.com/) ## 3 Methodology A detailed description of the methodology, including the technical details, study format, and study distribution, is outlined in this Section. ### Image Style Transfer Algorithm Central to our methodology is the use of the Progressive Attentional Manifold Alignment (PAMA) [18] style transfer algorithm, which operates on the premise of aligning the content manifold to the style manifold. This is a sophisticated, three-staged process that involves a channel alignment module, an attention module, and a spatial interpolation module. Each module serves a distinct yet interconnected function. The channel alignment module focuses on related content and style semantics, the attention module is responsible for establishing correspondence between features, and the spatial interpolation module then adaptively aligns the manifolds. One of the key characteristics of PAMA is its capacity to alleviate the often-encountered style degradation problem, thus generating stylization outcomes that achieve state-of-the-art quality. In particular, PAMA offers regional consistency, content preservation, and high style quality. The inputs into the algorithm include the style image and the image set to be transformed. Our study adopts three specific styles for transformation: grayscale drawing, slightly colored drawing, and painting. The grayscale drawing imparts a monochrome filter to the imagery, simplifying the visual content while preserving essential structural information. The slightly colored drawing adds a minimal amount of color, providing additional visual clues. The painting style transforms the image into a Renaissance rendition, further distancing the viewer from the graphic reality of the content. Finally, as regards the conventional filters, we employed two blurring filters. The first one applies the blur across the entirety of the image, whereas the second one selectively blurs only the portion of the image that contains the disturbing content. ### Study Design The first user-study segment was dedicated to profiling the participants. This preliminary part of the study comprised typical demographic questions and two profile-building questions. The latter aimed to indicate the participants' frequency of exposure to potentially disturbing UGC and their level of comfort or discomfort when exposed to graphic imagery. The second segment constitutes the core of the study. It was carefully structured to include two main phases, the first of which involved five transformed images--one for each filter under consideration (i.e., three AI-based and two traditional filters). In this phase, participants were asked to rate a select subset of emotions from the Positive and Negative Affect Schedule (PANAS) scale. 
These emotions were carefully chosen for their relevance to the disturbing nature of the images - Distressed, Upset, Scared, Irritable, Nervous, Jittery, and Afraid. By rating these specific feelings after viewing each transformed image, participants were able to provide an empirical measure of their affective response, thus giving us an understanding of the emotional impact each filter had. The same procedure was followed for the original images to establish the baseline emotional reactions. In addition to this emotion rating, participants were asked to engage in an interpretative exercise. They were prompted to provide a free text description of what they believed each image depicted. This exercise allowed us to determine how successfully each filter retained the necessary information. In the second phase of image filter evaluation, participants were shown four more image sets. Each set contained five variations of a single image, showcasing the effects of each filter. Instead of focusing on specific emotions as in the first phase, participants were asked to rate the overall level of disturbance triggered by each (filtered) image. This aspect of the study was aimed at understanding the overall effectiveness of each transformation in mitigating the negative impact of the original images. The final segment of the study was designed to utilize the collective expertise and insights of the participants. A general feedback question was posed to participants, inviting them to share their thoughts on how AI technology, and specifically the AI style transformation approaches, can contribute to protecting users from the negative impact of being exposed to graphic imagery. The intent was to gain insights that would aid in further refining our approach, bridging gaps, and possibly revealing new research directions. The questionnaire used in this study is available as supplementary material. Note that it contains disturbing content. ### Distribution of the Study The study was distributed to a diverse array of professionals whose roles often necessitate engagement with potentially disturbing content. To this end, personalized emails were sent to carefully chosen, targeted individuals, including researchers, journalists, investigators, fact-checkers, documentalists, editors, political scientists, and producers. This initiative led to responses from more than 42 organizations, amounting to a total of 86 participants. In addition to the focused outreach to specific professionals, the study was also made accessible through an open call for participation. This initiative garnered an additional 21 responses from various professions, including forensic analysts, operations managers, sociologists, technologists, post-production supervisors, and systems engineers. This combination of targeted and open-call distribution strategies aimed to diversify the sample population, ensuring a comprehensive evaluation of the proposed approach's efficacy across different contexts and levels of exposure to disturbing content. ## 4 Results ### Demographics and profiling questions Beginning with the demographics of the participants, there was a diverse group in terms of age distribution, as evidenced by Table 1. The largest proportion of respondents, 44.86%, fell into the 30-45 age bracket, reflecting a participant pool primarily composed of mid-career professionals. This was followed by the 45-60 age group, representing almost a third of the sample at 29.91%. 
Younger participants, aged 18-30, constituted 22.43% of the sample, while those aged over 60 were least represented at 2.80%. Looking at gender diversity, as outlined in Table 2, the distribution was predominantly binary. Male participants accounted for over half of the total at 54.21%, while females constituted 41.12%. Non-binary individuals represented a smaller proportion at 3.74%, and a minimal percentage of 0.93% opted not to disclose their gender. Regarding the frequency of exposure to potentially disturbing UGC, as reported in Table 3, it was found that the largest portion of participants, namely 34.58%, encountered such content multiple times a week. Those who reported daily exposure constituted 22.43%, closely trailed by respondents who encounter disturbing material several times a month (21.50%). A lesser proportion, 16.82%, came across such content several times a year, while 4.67% of the participants almost never encountered disturbing material online. \begin{table} \begin{tabular}{l c} \hline \hline Age group & Percentage \\ \hline 18-30 & 22.43\% \\ 30-45 & 44.86\% \\ 45-60 & 29.91\% \\ \(>\)60 & 2.80\% \\ \hline \hline \end{tabular} \end{table} Table 1: Age distribution. \begin{table} \begin{tabular}{l c} \hline \hline Gender & Percentage \\ \hline Male & 54.21\% \\ Female & 41.12\% \\ Non-binary & 3.74\% \\ Prefer not to say & 0.93\% \\ \hline \hline \end{tabular} \end{table} Table 2: Gender distribution. Table 4 presents the distribution of self-perceived reactions to exposure to graphic imagery. The 39.25% of the participants indicated they sometimes react negatively to such content, while a slightly smaller proportion, 36.45% reported rarely reacting negatively. Those who regularly had negative reactions comprised 14.95% of the sample. A small fraction of 4.67% indicated that graphic imagery does not affect them negatively. Only 1.87% of participants claimed they almost always react negatively to such imagery, with a few respondents, i.e., 2.79%, providing other responses. ### Trade-off between conveyed information and mitigation of negative feelings As regards emotion alleviation, the Painting style filter illustrated a promising performance with an average negative feeling mitigation of 38.03% as presented in Table 5. The strongest mitigation effect was observed on feelings of being upset and distressed (i.e., the feelings that demonstrated the highest values w.r.t. the original image), registering a significant decrease of 49.20% and 44.63%, respectively. The least affected was the feeling triggered less when viewing the original image (i.e., irritability), with a mitigation rate of 27.09%. Although this reduction spectrum suggests the potential of the Painting filter in diminishing the overall emotional distress incited by graphic images, it was not without its drawbacks. While 87 out of 107 participants (i.e., 81.31%) were able to describe the content of the image, the provided responses revealed that the inherent abstraction of the Painting filter occasionally added an extra layer of distress, while some participants compared it to a piece of disturbing art. For instance, one of the responses was: _'An injured person (though it is very unclear, and that's what makes it a bit disturbing)'_. This unintended consequence indicates that while the Painting style filter has a definite potential in reducing negative emotional reactions, it may unintentionally introduce certain elements of unease. 
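The mitigation percentages reported in the tables of this section (Tables 5-9) follow directly from the mean ratings of the filtered and original images, i.e. \((\text{original}-\text{filtered})/\text{original}\times 100\). The short sketch below reproduces this computation for one filter; the rating arrays in the usage line are placeholders, since the raw questionnaire responses are not included here.

```python
import numpy as np

def mitigation_table(filtered_ratings, original_ratings):
    """Per-feeling mean +/- std and mitigation = (orig - filt) / orig * 100.

    Both arguments map a feeling name to an array of 1-5 ratings, one per participant.
    """
    rows = {}
    for feeling, f in filtered_ratings.items():
        f = np.asarray(f, dtype=float)
        o = np.asarray(original_ratings[feeling], dtype=float)
        rows[feeling] = {
            "filtered": (f.mean(), f.std()),
            "original": (o.mean(), o.std()),
            "mitigation_pct": (o.mean() - f.mean()) / o.mean() * 100.0,
        }
    return rows

# usage with placeholder data for a single feeling:
# mitigation_table({"Distressed": [2, 1, 2]}, {"Distressed": [3, 4, 3]})
```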
Furthermore, Table 6 presents the results for the Colored Drawing filter, indicating an average emotional mitigation of 17.96%. The highest mitigation was observed for feelings of distress, 30.25%, while feeling of fear saw the least \begin{table} \begin{tabular}{l c} \hline Response & Percentage \\ \hline Graphic imagery does not affect me negatively & 4.67\% \\ I rarely react negatively & 36.45\% \\ I sometimes react negatively & 39.25\% \\ I often react negatively & 14.95\% \\ I almost always react negatively & 1.87\% \\ Other responses & 2.79\% \\ \hline \end{tabular} \end{table} Table 4: Self-perceived reactions to exposure of potentially graphic imagery. \begin{table} \begin{tabular}{l c c} \hline \hline Feeling & Filtered & Original & Mitigation \\ \hline Distressed & 1.729 \(\pm\) 0.907 & 3.122 \(\pm\) 1.178 & 44.63\% \\ Upset & 1.439 \(\pm\) 0.826 & 2.833 \(\pm\) 1.295 & 49.20\% \\ Scared & 1.458 \(\pm\) 0.872 & 2.061 \(\pm\) 1.299 & 29.26\% \\ Irritable & 1.421 \(\pm\) 0.847 & 1.949 \(\pm\) 1.205 & 27.09\% \\ Nervous & 1.402 \(\pm\) 0.775 & 2.122 \(\pm\) 1.310 & 33.93\% \\ Jittery & 1.262 \(\pm\) 0.649 & 2.163 \(\pm\) 1.298 & 41.65\% \\ Afraid & 1.364 \(\pm\) 0.719 & 1.990 \(\pm\) 1.343 & 31.46\% \\ \hline mean & 1.439 \(\pm\) 0.649 & 2.322 \(\pm\) 1.065 & 38.03\% \\ \hline \hline \end{tabular} \end{table} Table 5: Painting Style: Feelings while watching the image. The rating scale ranges from 1 (low) to 5 (high). \begin{table} \begin{tabular}{l c} \hline \hline Frequency & Percentage \\ \hline Almost never & 4.67\% \\ Several times a year & 16.82\% \\ Several times a month & 21.50\% \\ Several times a week & 34.58\% \\ Daily & 22.43\% \\ \hline \hline \end{tabular} \end{table} Table 3: Frequency of exposure to potentially disturbing UGC. mitigation, i.e., 10.26%. It is worth noting that the original image was less disturbing (i.e., approximately 1.6 on the 1-5 rating scale) among the images involved in this study, which justifies the relatively low emotional mitigation (i.e., 17.96%). Although a total of 88 participants (i.e., 82.24%) could comprehend the image, some participants reported difficulties in identifying specific objects or elements within the image. Table 7 shows that the Drawing style filter particularly excelled in preserving the interpretability of the image and mitigating the negative feelings. A majority of participants (i.e., 97.19% or 104 out of 107) successfully identified several details, suggesting that this style maintained a high level of clarity. For example, one of the responses was the following: '_A dead man lying on the floor in front of two other people, one in Flipflops (so no soldiers, but private people)_'. The average reduction in negative emotions was significant, averaging 30.38%. It is worth noting that the feelings most profoundly triggered by the original images, such as being upset and distressed, experienced the highest reduction, with 38.62% and 37.66%, respectively. As regards the Partially Blurring filter, a significant majority of 103 participants (i.e., 96.26%) could interpret the image but primarily relied on the unblurred regions. In addition, Table 8 reports emotional mitigation results, the Partial Blurring style filter had a mean mitigation score of 25.54%. Similarly to the previous filters, it was most effective on feelings of distress and being upset, with reductions of 31.96% and 30.78%, respectively. The least impacted emotion was fear, with a mitigation of only 16.10%. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Feeling & Filtered & Original & Mitigation \\ \hline Distressed & 1.374 \(\pm\) 0.694 & 1.970 \(\pm\) 1.096 & 30.25\% \\ Upset & 1.346 \(\pm\) 0.754 & 1.680 \(\pm\) 1.034 & 19.88\% \\ Scared & 1.308 \(\pm\) 0.679 & 1.530 \(\pm\) 0.948 & 14.51\% \\ Irritable & 1.252 \(\pm\) 0.616 & 1.500 \(\pm\) 0.948 & 16.53\% \\ Nervous & 1.327 \(\pm\) 0.684 & 1.490 \(\pm\) 0.959 & 10.93\% \\ Jittery & 1.243 \(\pm\) 0.564 & 1.520 \(\pm\) 0.948 & 18.22\% \\ Afraid & 1.355 \(\pm\) 0.743 & 1.510 \(\pm\) 0.987 & 10.26\% \\ \hline mean & 1.315 \(\pm\) 0.570 & 1.603 \(\pm\) 0.891 & 17.96\% \\ \hline \hline \end{tabular} \end{table} Table 6: Colored Drawing Style: Feelings while watching the image. The rating scale ranges from 1 (low) to 5 (high). \begin{table} \begin{tabular}{c c c c} \hline \hline Feeling & Filtered & Original & Mitigation \\ \hline Distressed & 1.748 \(\pm\) 0.912 & 2.804 \(\pm\) 1.213 & 37.66\% \\ Upset & 1.626 \(\pm\) 0.906 & 2.649 \(\pm\) 1.267 & 38.62\% \\ Scared & 1.439 \(\pm\) 0.815 & 1.897 \(\pm\) 1.262 & 24.14\% \\ Irritable & 1.449 \(\pm\) 0.849 & 1.876 \(\pm\) 1.235 & 22.76\% \\ Nervous & 1.421 \(\pm\) 0.790 & 1.990 \(\pm\) 1.311 & 28.59\% \\ Jittery & 1.430 \(\pm\) 0.766 & 2.031 \(\pm\) 1.311 & 29.59\% \\ Afraid & 1.393 \(\pm\) 0.844 & 1.844 \(\pm\) 1.292 & 24.46\% \\ \hline mean & 1.501 \(\pm\) 0.766 & 2.156 \(\pm\) 1.132 & 30.38\% \\ \hline \hline \end{tabular} \end{table} Table 7: Drawing Style: Feelings while watching the image. The rating scale ranges from 1 (low) to 5 (high). \begin{table} \begin{tabular}{c c c c} \hline \hline Feeling & Filtered & Original & Mitigation \\ \hline Distressed & 2.299 \(\pm\) 1.143 & 3.379 \(\pm\) 1.178 & 31.96\% \\ Upset & 2.215 \(\pm\) 1.182 & 3.200 \(\pm\) 1.260 & 30.78\% \\ Scared & 1.692 \(\pm\) 1.032 & 2.096 \(\pm\) 1.329 & 19.27\% \\ Irritable & 1.626 \(\pm\) 0.995 & 2.232 \(\pm\) 1.364 & 27.15\% \\ Nervous & 1.822 \(\pm\) 1.156 & 2.295 \(\pm\) 1.487 & 20.61\% \\ Jittery & 1.757 \(\pm\) 1.071 & 2.404 \(\pm\) 1.483 & 26.91\% \\ Afraid & 1.766 \(\pm\) 1.194 & 2.105 \(\pm\) 1.403 & 16.10\% \\ \hline mean & 1.883 \(\pm\) 1.000 & 2.529 \(\pm\) 1.169 & 25.54\% \\ \hline \hline \end{tabular} \end{table} Table 8: Partial Blurring Style: Feelings while watching the image. The rating scale ranges from 1 (low) to 5 (high). In contrast to the other styles, the Blurring filter drastically affected image interpretability. Only a small fraction of participants (i.e., 6.54%) could recognize the subject matter of the image, suggesting a high potential for information loss with this filter. Regardless of its impact on interpretability, the Blurring style filter achieved a mean mitigation of 26.09%. However, the significant information loss renders this filter less suitable for professionals where comprehending image details is crucial. Additionally, it is important to underscore that all discrepancies between the filtered and original images are statistically significant, affirmed by extremely small p-values (\(<1\mathrm{e}{-}10\)). For further highlighting the discrepancies between filtered and original images, Figure 2 visualizes the mean negative feelings values reported in Tables 5-9. Overall, these findings endorse the Drawing style as the most effective filter in terms of maintaining a balance between interpretability and negative feelings mitigation. 
Its exceptional performance in preserving the image content while also significantly reducing negative feelings positions it as an optimal choice for professionals needing to interpret graphic images without undue emotional distress. \begin{table} \begin{tabular}{c c c c} \hline \hline Feeling & Filtered & Original & Mitigation \\ \hline Distressed & 1.710 \(\pm\) 0.858 & 2.773 \(\pm\) 1.342 & 38.33\% \\ Upset & 1.607 \(\pm\) 0.939 & 2.814 \(\pm\) 1.294 & 42.89\% \\ Scared & 1.486 \(\pm\) 0.817 & 1.701 \(\pm\) 1.165 & 12.63\% \\ Irritable & 1.355 \(\pm\) 0.717 & 1.835 \(\pm\) 1.304 & 26.16\% \\ Nervous & 1.551 \(\pm\) 0.838 & 1.794 \(\pm\) 1.258 & 13.54\% \\ Jittery & 1.477 \(\pm\) 0.872 & 1.794 \(\pm\) 1.241 & 17.67\% \\ Afraid & 1.439 \(\pm\) 0.815 & 1.670 \(\pm\) 1.143 & 13.83\% \\ \hline mean & 1.518 \(\pm\) 0.734 & 2.054 \(\pm\) 1.111 & 26.09\% \\ \hline \hline \end{tabular} \end{table} Table 9: Blurring Style: Feelings while watching the image. The rating scale ranges from 1 (low) to 5 (high). Figure 2: Mean negative feelings for all filtered and original images. ### Direct Filters Comparison To further explore the relative efficacy of different filters in mitigating the emotional impact of disturbing images, we involved four additional images in the study. Each image was subjected to all five filtering styles, and participants were asked to rate how disturbing they found the filtered images on a scale from 1 (not disturbing) to 5 (highly disturbing). The results of this investigation provide a direct comparison between the styles and allow us to examine the potential of AI filters in comparison to conventional approaches (i.e., blurring filters). As presented in Table 10, based on mean disturbance ratings, the Drawing style filter was found to be the least disturbing with a mean score of 1.977 with a standard deviation equal to 0.733. The Colored Drawing and Painting style filters followed with scores of 2.439\(\pm\)0.799 and 2.692\(\pm\)0.804, respectively. The Partially Blurring and Blurring styles were perceived as the most disturbing, with mean scores of 3.371\(\pm\)0.926 and 3.002\(\pm\)0.873, respectively. These findings showcase high discrepancies among the filtering styles, with the AI-based Drawing style filter outperforming both the rest AI-based filters and the conventional blurring techniques. Overall, the results above underscore the advantage of AI-based filters over traditional filters. However, as mentioned in Section 4.2, they should be considered in combination with the image interpretability, where the Drawing style offered the optimal trade-off across the evaluated filters. ### Practical Use and General Feedback To assess the practical usability of each filter, we asked the participants, '_If the system you use in the scope of your work would provide the option to inspect images using this filter, to what extent would you use this option?_'. The responses, which ranged from 1 (would not use) to 5 (would use extensively), are compiled in Table 11. With an average rating of 3.486, the Partial Blurring filter exhibits the greatest adoption rate among all filters. This is primarily attributed to its ability to blur only the distressing regions of an image, preserving crucial details. This aspect appears particularly beneficial to professionals who require comprehensive analysis of images in their work. The full Blurring filter, on the other hand, garnered a lower mean score of 2.523, due to its tendency to conceal most of the image information. 
Regarding the AI-based filters, the Drawing outperformed the Colored Drawing and Painting styles with a mean rating of 3.0 compared to 2.505 and 2.542, respectively. The preference for the Drawing filter can be attributed to its capacity to preserve the visual structure of the image while simultaneously distancing the viewer from the original scene. It is also worth noting that the black-and-white nature of the Drawing filter helps to distance viewers from the reality of the content, which is not the case for the Colored Drawing filter. From a technical standpoint, a standard framework incorporating such filters would encompass two AI models. The first one would be tasked with differentiating between potentially disturbing and safe content [5], while the second model [18] would then apply the proposed filters to the content classified as potentially disturbing by the first model. \begin{table} \begin{tabular}{l c c} \hline \hline Style & Mean & Std \\ \hline Drawing Style & **1.977** & 0.733 \\ Colored Drawing Style & 2.439 & 0.799 \\ Painting Style & 2.692 & 0.804 \\ Partially blurred & 3.371 & 0.926 \\ Blurred & 3.002 & 0.873 \\ \hline \hline \end{tabular} \end{table} Table 10: Question: “How disturbing do you consider the following images?”. Mean value across the 4 images. The rating scale ranges from 1 (not disturbing) to 5 (highly disturbing). \begin{table} \begin{tabular}{c c} \hline \hline Style & Mean \(\pm\) Std \\ \hline Blurring & 2.523 \(\pm\) 1.231 \\ Partial Blurring & **3.486 \(\pm\) 1.231** \\ \hline Painting & 2.542 \(\pm\) 1.276 \\ Colored Drawing & 2.505 \(\pm\) 1.239 \\ Drawing & **3.000 \(\pm\) 1.259** \\ \hline \hline \end{tabular} \end{table} Table 11: Question: “If the system you use in the scope of your work would provide the option to inspect images using this filter, to what extent would you use this option?”. The rating scale ranges from 1 (low) to 5 (high). Many participants highlighted that significant limitations of such filters (both AI-based and conventional) exist - as it is often essential for professionals to view and investigate every minor detail in an original image, there were several suggestions on potential strategies to incorporate the proposed AI filters into their routine workflows. The idea of using filters as a preparatory step before viewing the original image was brought up by several participants. By first viewing a filtered version, the viewer can prepare themselves emotionally for the impact of the real image, thus potentially reducing distress. This approach may be particularly effective in contexts where exposure to the original image is ultimately unavoidable, such as investigative journalism or forensics. Furthermore, color was mentioned as a significant factor in perceiving images as disturbing. This aligns with previous psychological research suggesting that certain colors can evoke strong emotional responses [30]. Thus, adjusting the color palette of an image accordingly could be an effective way to reduce its emotional impact. Moreover, applying AI filters only to the regions of an image that depict disturbing content (as in the Partially Blurring filter) was another interesting suggestion. This targeted approach could maintain much of the image's original context and detail, while still protecting the viewer from the most distressing elements. In addition, the importance of variety and flexibility in filter options was emphasized by several participants. 
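The two-model framework just outlined can be summarised as a short pipeline: a detector flags potentially disturbing images and the style-transfer model filters only those. The sketch below is purely illustrative; `is_disturbing` and `apply_drawing_style` are hypothetical callables standing in for the detection model of [5] and the PAMA-based filter of [18], neither of which is reproduced here.

```python
def protect_viewer(image, is_disturbing, apply_drawing_style):
    """Apply an AI filter only to images flagged as potentially disturbing.

    is_disturbing(image) -> bool and apply_drawing_style(image) -> image are
    hypothetical stand-ins for the two models described in the text.
    """
    if is_disturbing(image):
        return apply_drawing_style(image), True   # the filtered view is shown first
    return image, False                           # safe content passes through unchanged

# usage sketch:
# shown, was_filtered = protect_viewer(img, detector_predict, pama_drawing_filter)
```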
As user responses to different filters can vary widely based on individual sensitivities, having a range of filter styles to choose from could cover all individual requirements. Some participants also highlighted that AI filters could also prove particularly useful when dealing with large volumes of images. Finally, the use of AI filters for repeated viewings of an image was noted. After the initial viewing of the original, filters can be applied in subsequent viewings to prevent the repeated experience of negative emotions. The following quotes are direct transcriptions from a subset of participants: * '_Ultimately, in order to do an investigation, I will always eventually have to look at the original. With a technology as the one proposed, you advance a step from completely blurring (or overlaying) an image to giving the user some idea of what the image (original) may depict._' * '_Those tests were really interesting and showed (to me) how much changing the color (especially the color red) makes an impact. So it would definitely help in my job (journalist/fact-checker) to have the possibility to use such filters by default. Sometimes we WILL have to look at the original picture, of course, if we need to investigate it further, but having a default filter making these less violent would be awesome. We would then only be forced to see the ones we need to investigate further._' * '_I think the most important thing in limiting distress, for me personally, is that the photo allows me to have a symbolic understanding of what is happening without providing too many distinguishing characteristics. The black-and-white line drawing method in particular seems excellent. To that extent, I would be happy to use AI filters for researching gruesome topics if they allowed me to better understand information without suffering too many negative emotional effects._' * they need to be able to see original images if necessary of course). I think the best one is the drawing option in black and white, but maybe other styles would work better for other people. I would rather suggest only masking the zones which are graphic such as blood, wounds, and signs of starvation, instead of applying a new style to the whole image because it often suppresses any reality. Some filters on the whole image cartoonize it, making it look more as a contemporaneous artwork than some masked reality._' * '_In some cases filters improve the content as they romanticize it in a special way. While in other cases they make the situation worse as they remove information and make you imagine whatever you want._' ## 5 Conclusion In this paper, we introduce a user study that investigates the potential of AI-based filters for mitigating the emotional impact caused by disturbing imagery, aiming to support professionals who regularly encounter such content in the context of their work or related activities. The comprehensive study provided valuable insights into the effectiveness of different filter styles, with the Drawing style filter emerging as a particularly effective solution that maintains image interpretability while significantly reducing negative emotions. Although limitations certainly exist, most notably the necessity for professionals to inspect every detail in the original images, the participants proposed potential strategies for integrating these AI filters into their workflows, such as utilizing AI filters as an initial, preparatory step to viewing the full image. 
Future studies can refine these filter techniques, test new ones, and experiment with the proposed integration methods to further optimize the balance between necessary exposure to critical content and the mitigation of its emotional impact. To conclude, there is a clear need for more research and activities in this domain. We hope that with our work we can contribute to reducing secondary or vicarious trauma of investigators, supporting the mental well-being of those who, because of the nature of their work and activities online, are exposed to graphic and potentially damaging imagery. ## Ethics All participants were informed why the research is being conducted, whether or not anonymity is assured, and how the data they are collecting is being stored. We confirm that all the subjects have provided appropriate informed consent via the Google Forms platform. Finally, the ethics committee of the Centre for Research and Technology Hellas has granted ethical approval for this study. ## Disclosure statement The authors report there are no competing interests to declare. ## Acknowledgment This work was supported by the EU H2020 project MediaVerse under Grant Agreement 957252.
2306.07561
Analysing the time period of Vela pulsar
In this project, we have implemented our basic understanding of Pulsar Astronomy to calculate the Time Period of Vela Pulsar. Our choice of pulsar rests on the fact that it is the brightest object in the high-energy gamma-ray sky. The simplistic data set consisting of only voltage signals makes our preliminary attempt as closely accurate as possible. The observations had been made at 326.5 MHz through a cylindrically paraboloid telescope at Ooty. A higher frequency creates a much lower delay in the arrival time of pulses and makes our calculations even more accurate. Being an already widely studied celestial body, it gives us the opportunity to compare our findings and make necessary modifications.
Shreyan Goswami, Hershini Gadaria, Sreejita Das, Midhun Goutham, Kamlesh N. Pathak
2023-06-13T06:20:48Z
http://arxiv.org/abs/2306.07561v1
# Analysing the time period of Vela pulsar ###### Abstract In this project, we have implemented our basic understanding of Pulsar Astronomy to calculate the Time Period of the Vela Pulsar. Our choice of pulsar rests on the fact that it is the brightest object in the high-energy gamma-ray sky. The simplistic data set, consisting of only voltage signals, makes our preliminary attempt as closely accurate as possible. The observations had been made at 326.5 MHz through a cylindrically paraboloid telescope at Ooty. A higher frequency creates a much lower delay in the arrival time of pulses and makes our calculations even more accurate. Being an already widely studied celestial body, it gives us the opportunity to compare our findings and make necessary modifications. ## 1 Introduction Pulsars are rapidly rotating, highly magnetised neutron stars. They emit two steady, narrow beams of electromagnetic radiation in opposite directions that sweep the sky like a lighthouse. Pulsars are of extreme importance to astronomers as they can help locate planets or other celestial bodies orbiting around them, measure the distance to galaxies, construct models of the free electron distribution, and detect gravitational waves. Calculating several parameters of known pulsars, like their Distance or Time Period, allows us to perform further complex calculations and estimations, helping us gain a much deeper understanding of the universe. We have been provided with a raw voltage signal from the observation of the Vela Pulsar (PSR B0833\(-\)45) by the two sub-apertures (north and south) of the Ooty Radio Telescope [4]. It is a cylindrical paraboloid telescope built on a north-south slope of 11.2 degrees in Ooty. The reflecting surface is 530 m long and 30 m wide and is operated at 326.5 MHz. The large reflecting surface makes the telescope highly sensitive. The observations, as recorded in the data set, have been made at 326.5 MHz with a bandwidth of 16.5 MHz. A data set with one second's worth of data was used, with each row of data being separated by 30 nanoseconds (33.3(3) MHz). The main difference in our analysis is that we have used the previously determined DM (Dispersion Measure) to make sure our results have increased accuracy. In Section 2, we explore the statistical characteristics of the Vela pulsar signal to make sure there are no discrepancies in the data. Voltage and power signals are plotted to understand how the data are distributed. Section 3 discusses properties of the signal and the dynamic spectrum; it also discusses the RFI, how one would eliminate it, and the resulting frequency delay. In Section 4, we find the distance using the correct value of the DM; the same DM is then applied to eliminate the time delays, obtain a dedispersed time series of the pulsar, and hence find a more accurate Time Period. In further sections, we find the average time period and plot the average profile of the pulsar.
The power or intensity signal is merely the square of the voltage signal. The power signals are expected to follow an exponential distribution. We used the same data samples that we used to look at voltage characteristics previously. Histograms for both the northern and southern arrays were plotted. As expected, the power signals demonstrate an exponential distribution. Figure 4: Histogram for northern array Figure 5: Histogram for southern array Figure 3: Voltage signal distribution of 100,000 randomly selected samples Figure 6: Power signal distribution of 100,000 randomly selected samples Properties of the signal in the Time-Frequency domain ### Voltage power spectrum The Power Spectrum of a signal describes the power present in the signal as a function of frequency. Any physical signal can be decomposed into a spectrum of frequencies over a range. We used a very efficient algorithm known as the Fast Fourier Transform (FFT) to plot the power distribution as a function of frequency. To compute the FFT, 256 frequency channels were used since power of 2 increases the speed of FFT (in this case, \(16^{2}\)). The average power spectrum of the voltage signal was plotted by averaging the power spectrum obtained from all 512-point FFTs. The voltage power spectrum for both the northern and southern arrays are given in figure 9. Each Power Spectrum corresponds to an interval of 512/33 microseconds. The sharp peaks in the spectra might indicate the presence of local Radio Frequency Interference (RFI). RFI is a disturbance caused by an external source like cellular networks, lightning, solar flares, etc that affects the electrical circuit used to originally measure the voltage signals from the pulsar. As observed, the DC channel power is much larger in the northern array than in the southern. The plot smoothly tapers off to 0 at both edges, indicating that the aliasing is minimal. ### Dynamic Spectrum The Dynamic Spectra is a color-coded graph that shows the relationship between Frequency (MHz) and Time (ms). It enables us to detect pulsar signal indicators. Incoherent Addition was used to combine the power from the two halves of the array to increase the Signal to Noise Ratio (SNR). Incoherent addition helped in removing Radio Frequency Interference (RFI) to a large extent. The dynamic spectrum is shown below. The x-axis represents time measured in ms and the y axis represents frequency in MHz. The colour bar on the right indicates the intensity of the power signal for a particular time and frequency data point. The diagonal and uniformly spaced features shown in the graph above leads us to conclude that the source has to be a pulsar. Upon carefully observing each pulse, the signal appears first at higher frequencies, and gradually appears later at lower frequencies. Thus, there is a frequency delay in the observed data which is a characteristic sign of a signal dispersed in the interstellar medium. Since our analysis is concerned with magnitudes, negative intensities at either ends of the colour bar do not pose any discrepancy. ### Dispersion Measure and the frequency delay The Dispersion Measure (DM) is a parameter that shows up in observations as the broadening of an otherwise sharp pulse. In statistics, it refers to how far a distribution may be stretched or squeezed. 
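The broadening becomes visible once the raw voltages are channelised into a time-frequency grid. A minimal sketch of the dynamic-spectrum construction described in Sections 3.1 and 3.2 is given below; the file names, data type, and plotting details are assumptions rather than the actual processing pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed: raw voltages of the two array halves as 1-D arrays; file names
# and dtype are placeholders for the actual ORT data format.
volt_north = np.fromfile("north_voltages.dat", dtype=np.int8).astype(float)
volt_south = np.fromfile("south_voltages.dat", dtype=np.int8).astype(float)

NFFT = 512        # 512-point FFTs, as in Section 3.1
FS_MHZ = 33.33    # sampling rate implied by the 30 ns sample spacing

def dynamic_spectrum(volts: np.ndarray) -> np.ndarray:
    """Split the time series into NFFT-sample blocks and return the power
    per frequency channel, giving a (time, frequency) array."""
    n_spec = len(volts) // NFFT
    blocks = volts[: n_spec * NFFT].reshape(n_spec, NFFT)
    return np.abs(np.fft.rfft(blocks, axis=1)) ** 2

# Incoherent addition of the two halves boosts the signal-to-noise ratio.
dyn = dynamic_spectrum(volt_north) + dynamic_spectrum(volt_south)

duration_ms = dyn.shape[0] * NFFT / (FS_MHZ * 1e3)
plt.imshow(dyn.T, aspect="auto", origin="lower",
           extent=[0, duration_ms, 0, FS_MHZ / 2])
plt.xlabel("Time (ms)")
plt.ylabel("Frequency offset (MHz)")
plt.show()
```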
The DM is measured in pc/cc and is calculated as: \[t\approx t_{\infty}+4.149\times 10^{3}\times DM\times\nu^{-2} \tag{1}\] where \(t\) is the pulse arrival time in seconds and \(t_{\infty}\) is the pulse arrival time in seconds at infinite frequency. The DM is equal to 67.62 pc/cc. [1] Figure 7: Voltage power spectrum for northern array Figure 8: Voltage power spectrum for southern array Figure 9: Average voltage power spectrum for both the arrays. The frequency axis ranges from 0 to 256 MHz The electrostatic interaction between radio waves and charged particles in the Interstellar Medium creates a delay in the propagation of light, with the delay being a function of radio frequency and the masses of the charged particles or the Dispersion Measure. Lower the frequency, greater is the delay. The delay is given by: \[\tau(s)=4.149\times 10^{3}\times DM\times(\nu_{1}^{-2}-\nu_{2}^{-2}) \tag{2}\] ## 4 Distance to the Pulsar The distance to the pulsar (S) is given by: \[S=\frac{DM}{n_{e}} \tag{3}\] where \(n_{e}\) is the mean electron density between the pulsar and earth and is equal to 0.23 per cc.[1], [3] Hence, \[S=294\,pc\] ## 5 Dedispersed Time Series We eliminated the frequency-dependent time delays using the DM. To obtain the de-dispersed signal intensity, we changed the time-domain position of all lower frequency channels to align them with the pulse arrival time at the highest channel using Equation 1, and then added the same for all the channels. The obtained dedispersed time series is given below. The peaks in the above graph are much more significant and easier to recognize among the background noise. This is due to the dedispersion procedure increasing the SNR. Figure 10: The dynamic spectrum of the signal. Frequency (in Mhz) is plotted on the y-axis, and decreases upwards. Time is plotted on the x-axis for a duration of 1000 ms. ## 6 Time Period of the Pulsar We obtained a series of significant periodic single pulses in the previous section. This enabled us to further calculate the time period of the pulsar.The arrival time of the individual pulse should fit a period-solution, and hence, we used the technique of curve fitting, to estimate the arrival times. The best fit curve is a linear curve as given in the figure below. The arrival time of each pulse has been tabulated below. Based on the arrival times as shown in the table above, the time period of the pulsar is 89.3 ms, which on comparison with ATNF Pulsar Catalogue is correct. Finally, based on the time period, we folded the entire time series with the pulsar period to obtain an average profile for the pulsar. Figure 11: The dedispersed time series. Figure 12: The dynamic spectrum after accounting for the time delay ## 7 Conclusion After implementing various techniques, we managed to calculate important parameters related to the pulsar.We also examined the statistical properties of the raw voltage signal. * We estimated a distance of 294 pc to the pulsar from a Dispersion Measure of 67.62 pc/cc. The distance calculated is in agreement with the value measured using the method of parallax, considering the error margin.[1] * The time period of the pulsar turned out to be 89.3 ms. The calculated time period is in agreement with the currently accepted value.This is more accurate as we used the exact value of Density Measure.[2] An important thing to note is that all the quantitative figures are estimates with some amount of uncertainty. 
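The headline numbers above, together with the dedispersion step of Section 5, can be checked with a short sketch such as the following; the band edges and array names are assumptions, and the arrival times used for the period fit are taken from Table 1.

```python
import numpy as np

DM = 67.62                                          # pc/cc, as quoted above
n_e = 0.23                                          # mean electron density (per cc)
f_lo, f_hi = 326.5 - 16.5 / 2, 326.5 + 16.5 / 2     # assumed band edges in MHz

# Equation (2): dispersion delay across the band; Equation (3): distance.
tau = 4.149e3 * DM * (f_lo**-2 - f_hi**-2)          # seconds
S = DM / n_e                                        # parsecs (~294 pc)

def dedisperse(dyn, freqs_mhz, dt_s, dm=DM):
    """Align every channel of a (time, freq) dynamic spectrum with the pulse
    arrival time at the highest frequency (Section 5). `dyn` and the channel
    frequencies would come from the channelisation step sketched earlier."""
    f_ref = freqs_mhz.max()
    out = np.empty_like(dyn)
    for j, f in enumerate(freqs_mhz):
        delay = 4.149e3 * dm * (f**-2 - f_ref**-2)  # Equation (2), seconds
        out[:, j] = np.roll(dyn[:, j], -int(round(delay / dt_s)))
    return out

# Period from the arrival times in Table 1: the slope of a straight line
# through (pulse number, arrival time) is the period, ~89.3 ms.
t_arr = np.array([88.96, 178.87, 266.89, 356.80, 446.71, 535.68, 624.68])
period_ms = np.polyfit(np.arange(len(t_arr)), t_arr, 1)[0]
print(f"band delay ~ {tau * 1e3:.0f} ms, distance ~ {S:.0f} pc, period ~ {period_ms:.1f} ms")
```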
These uncertainties can arise from other parameters such as the dispersion measure, the low exposure time, and the small dataset. A longer observation time would significantly reduce the uncertainties in the data. Since the source is a compact object, the visibility should remain constant as a function of time; hence, the Fourier transform of the brightness distribution should not change with a change in baseline. Over the course of the project, we gained invaluable insights into the working of pulsars and how to decipher information from raw observations. The code used for the analysis is available at: view code. \begin{table} \begin{tabular}{||c|c|c||} \hline Pulse number & Arrival time (ms) & Uncertainty (ms) \\ \hline \hline 1 & 88.96388 & 0.33612 \\ \hline 2 & 178.8742 & 0.610303 \\ \hline 3 & 266.8916 & 1.28255 \\ \hline 4 & 356.8019 & 0.610303 \\ \hline 5 & 446.7122 & 0.610303 \\ \hline 6 & 535.6761 & 0.33612 \\ \hline 7 & 624.6761 & 0.3 \\ \hline \end{tabular} \end{table} Table 1: Pulse arrival times. Figure 13: Fitting a linear curve to estimate the arrival times. _Acknowledgement_: The authors would like to express their gratitude to Dr. Avinash Deshpande, who provided the raw signal data, and Mr. Devansh Shukla, who helped over the course of the project. HG would like to thank the Department of Science and Technology of India for the INSPIRE Scholarship for Higher Education (SHE), (DST/INSPIRE/02/2019/011921).
2308.11877
Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach
The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79% to 100% for Region of Interest (ROI) without location classifications, 73.98% to 100% for ROI with location classifications, and 78.10% to 100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
Yash Patel, Tirth Shah, Mrinal Kanti Dhar, Taiyu Zhang, Jeffrey Niezgoda, Sandeep Gopalakrishnan, Zeyun Yu
2023-08-23T02:49:22Z
http://arxiv.org/abs/2308.11877v2
## Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach ###### Abstract The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79% to 100% for Region of Interest (ROI) without location classifications, 73.98% to 100% for ROI with location classifications, and 78.10% to 100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts. **Keywords:** Multi-modal Wound Image Classification, Wound location Information, Body Map, Combined Image-Location Analysis, Deep Learning, Convolutional Neural Networks, Transfer Learning. ## Introduction Wound diagnosis and treatment are a pressing issue worldwide, with a considerable population suffering from wounds. As per a 2018 retrospective analysis, the costs for wound treatment have been estimated to be between $28.1 billion to $96.8 billion [3], reflecting the tremendous financial and medical burden. The most commonly observed wounds include diabetic foot ulcer (DFU), venous leg ulcer (VLU), pressure ulcer (PU), and surgical wound (SW), each associated with a significant portion of the population [22][23][24][25]. Given these circumstances, effective wound classification is crucial for timely and adequate treatment. Until recently, wounds were predominantly classified manually by specialists, often leading to inconsistencies due to lack of specific guidelines. However, the advent of artificial intelligence (AI) has brought about significant changes in healthcare, including wound diagnosis [4]. AI, specifically deep learning (DL), has proven to be a game-changer in medical image analysis, enabling accurate, time-efficient, and cost-effective wound classifications. Data-driven techniques like DL, which require minimal human intervention, have been extensively utilized for identifying patterns and relationships in complex data [5][11]. DL encompasses several methods, like Convolutional Neural Networks (CNN), Deep Belief Networks (DBN), Deep Boltzmann Machines (DBM), Stacked Autoencoders, and many more. 
These techniques have found applications in various medical diagnostic fields, including wound image analysis [26][29]. Studies have underscored the effectiveness and efficiency of deep convolutional neural networks in wound diagnosis and analysis [14][15][16]. Notwithstanding the advancements, the accuracy of wound classification models remains constrained due to the partial information incorporated in the classifiers. The present research introduces an innovative approach to address this limitation by including wound location as a significant feature in the wound classification process. Wound location, a standard entry in electronic health record (EHR) documents, is instrumental in wound diagnosis and prognosis. A body map has been utilized to facilitate accurate and consistent wound location documentation [28], enhancing the classifier's performance by providing a more holistic set of data for classification. The classifier trained on both image and location features outperforms those reliant solely on image data. A simplified workflow of this study is shown in **Figure 1**. The developed wound classifier takes both wound image and location as inputs and outputs the corresponding wound class. Figure 1: Expected workflow of this research ## Related Works In this review, we revisit the relevant research in the field of wound image classification, segmented into categories based on the methodology of each study. ### A. Deep Learning Based Classification **A.1. Convolutional Neural Networks (CNNs) with SVM:** A method proposed by Abubakar et al. [30] distinguished between burn wounds and pressure ulcers using pre-trained deep architectures such as VGG-face, ResNet101, and ResNet152 in combination with an SVM for classification. Similarly, Goyal et al. [31] predicted the presence of infection or ischemia in Diabetic Foot Ulcers (DFUs) using Faster RCNN and InceptionResNetV2 networks, in combination with SVM. ### A.2. Advanced Deep Learning Techniques: Advanced methods involving two-tier transfer learning were utilized in studies which used architectures like MobileNet, InceptionV2, and ResNet101. Goyal et al. [32] presented DFUNet for classification of DFUs, while Nilsson et al. [33] applied a CNN-based method using VGG-19 for venous ulcer image classification. In another significant study, Alaskar et al. [38] applied deep CNNs for intestinal ulcer detection in wireless capsule endoscopy images. Using AlexNet and GoogleNet architectures, they reported a classification accuracy of 100% for both networks. Ahsan et al. [41] discusses the use of deep learning algorithms to automatically classify diabetic foot ulcers (DFU), a serious complication of diabetes that can lead to lower limb amputation if untreated. The authors examined various convolutional neural network (CNN) architectures, including AlexNet, VGG16/19, GoogleNet, ResNet50, MobileNet, SqueezeNet, and DenseNet. They used these models to categorize infection and ischemia in the DFU2020 dataset. To address the issue of limited data and to reduce computational cost, they fine-tuned the weights of the models. Additionally, affine transform techniques were employed for data augmentation. The results revealed that the ResNet50 model achieved the highest accuracy rates, reaching 99.49% for ischemia and 84.76% for infection detection. ### A.3. Multi-Class Classification Techniques: Shenoy et al. [34] proposed a method to classify wound images into multiple classes using deep CNNs. Rostami et al. 
[36] proposed an ensemble DCNN-based classifier to classify entire wound images into surgical, diabetic, and venous ulcers. Anisuzzaman et al. [28] proposed a multi-modal classifier using wound images and their corresponding locations to categorize them into multiple classes, including diabetic, pressure, surgical, and venous ulcers. This paper introduced an image and location classifier and combined it together to create a multi-modal classifier. In this study, two different datasets were used namely AZH dataset that consists of 730 wound images with four classes, Medetec dataset which consists of 358 wound images with three classes. Also, they introduced a new dataset AZHMT dataset which is a combination of AZH and Medetec dataset containing 1088 wound images. The reported maximum accuracy on mixed-class classifications varies from 82.48% to 100% in different experiments and maximum accuracy on wound-class classifications varies from 72.95% to 97.12% in various experiments. **B. Wound Image Classification Using Novel Approaches** Novel techniques have been presented to overcome challenges in wound classification. Alzubaidi et al. [35] introduced DFU_QUTNet, a deep architecture model for classification of DFUs. They reported a maximum F1-Score of 94.5% obtained from combining DFU_QUTNet and SVM. Another interesting method was presented by Sarp et al. [37] who classified chronic wounds using an explainable artificial intelligence (XIA) approach. **C. Traditional Machine Learning-Based Classification** **C.1. SVM-Based Techniques:** Traditional machine learning techniques have also found significant use in wound image classification. Yadav et al. [39] used color-based feature extraction and SVM for binary classification of burn wound images. Goyal et al. [40] used traditional machine learning and DCNN techniques for detecting and localizing DFUs, with Quadratic SVM classifiers trained on feature-rich patches extracted from the images. Through this review, it is evident that both traditional and advanced machine learning techniques have demonstrated promising results in wound image classification, providing valuable insights for future research in this area. **Materials and Methods** This study encompasses three distinct subsections, each elucidating the specific methodology employed in this study: Whole Image Classification, Region of Interest (ROI) Extracted Image Classification, and ROI with Body Map Location Image Classification. It should be noted that each of these subsections utilizes the same fundamental base classifier for the image data analysis. Datasets were anonymized, partitioned, and augmented before processing through a proposed architecture. The proposed model incorporated transfer learning, convolution blocks, axial-attention mechanisms, and Adaptive-gated MLP. Model performance was evaluated using accuracy, precision, recall, and the F1-score. **A. Dataset** **A.1 AZH Dataset:** The AZH Dataset is a collection of prefiltered 730 ROI images and 538 Whole wound images, varying in size and depicting four types of wounds: venous, diabetic, pressure, and surgical. Captured over two years at Milwaukee's AZH Wound and Vascular Center, the images were collected using an iPad Pro (software version 13.4.1) and a Canon SX 620 HS digital camera, and subsequently labeled by a wound specialist from the center. 
While most of the dataset comprises unique patient cases, some instances involve multiple images from a single patient, taken from different body sites or at varying stages of healing. These were classified as separate due to distinct wound shapes. This dataset, unfortunately, couldn't be expanded due to resource limitations. It's important to note that the data doesn't involve any human experimentation or usage of human tissue samples. Instead, it utilizes de-identified wound images, publicly available at link: (Link). Each image only includes the wound and immediate skin area, eliminating any unnecessary or personal data to protect patient identity. The University of Wisconsin-Milwaukee has vetted the dataset's use for compliance with university policy. **Figure 3** and **Figure 4** show images from whole and ROI images. **A.2 Medetec Dataset:** The Medetec wound dataset is a compendium of freely available images that encompasses an extensive range of open wounds [57]. We prefiltered 216 images from three distinct categories for this study: diabetic wounds, pressure ulcers, and venous leg ulcers. Notably, this dataset does not encompass images of surgical wounds. The images are provided in jpg format, with weights and heights fluctuating between 358 and 560 pixels, and 371 to 560 pixels, respectively. This dataset laid a solid foundation for the robustness and reliability assessments of the model we developed. **B. Body map for location** A body map serves as a simplified, symbolic, and accurately phenotypic representation of an individual's body [42]. Primarily used in the medical field, body maps are effective tools for identifying and locating physical afflictions such as bruises, wounds, or fractures. They are especially valuable in forensic science for identifying bodily changes during post-mortem examinations and in medical practice for pinpointing the location of infections [43]. By offering a detailed overview of the body, they inform practitioners about other body areas that might be affected and require attention during the healing process. Furthermore, in the realm of scientific research, body maps function as verifiable evidence, validating observable bodily changes caused by internal diseases. The design of a comprehensive body map with 484 distinct parts is credited to Anisuzzaman et al. [28]. PaintCode [44] was employed to prepare this body map, with initial references being drawn from several credible sources [45][46][47]. The fundamental framework for this design originated from the Original Anatomy Mapper [48], which directly paired each label and outline. The extreme intricacy involved in the detailed depiction of each feature on the body map led to a pre-selection of 484 features or regions. This process was overseen and approved by wound professionals at the AZH wound and vascular center, ensuring the map's medical accuracy and applicability. Major part of body map is shown in the **Figure 2**. Each number denotes a location in this case. **Table 1** shows a few examples of locations and their related numbers. ## Appendix C Dataset Processing and Augmentation ### ROI extraction: The extraction of Region of Interest (ROI) from wound images presents a robust methodology for diagnosing and tracking wound progression. As aforementioned, the ROI includes the wound itself and a portion of the surrounding healthy skin, which collectively encompasses the vital elements of the wound's condition. 
The developed wound localizer is a significant tool for this extraction process, as it is capable of automatically cropping single or multiple ROIs from each image [49]. Each ROI represents one of six categories - diabetic, venous, pressure, surgical, background, and normal skin. These categories are critical in understanding the etiology of the wound, allowing for more accurate and personalized treatment plans. However, the diversity of the wounds is also reflected in the different sizes and shapes of the extracted ROIs, each telling a unique narrative of the wound's journey. Importantly, the ROI's rectangular form and variable size allow for greater adaptability in handling various wound types and sizes. It is an efficient method to focus on the essential wound characteristics while reducing unnecessary information that could potentially introduce noise into the data. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{2}{|c|}{**Right Leg Front and Back**} & \multicolumn{2}{|c|}{**Left Leg Front and Back**} \\ \hline \multicolumn{2}{|c|}{**Location name**} & \multicolumn{1}{|c|}{**Body map**} & \multicolumn{1}{|c|}{**Location name**} & \multicolumn{1}{|c|}{**Body map**} \\ \multicolumn{2}{|c|}{} & \multicolumn{1}{|c|}{**number**} & \multicolumn{1}{|c|}{**number**} \\ \hline Right Fifth Toe Tip & 135 & Left Anterior Ankle & 180 \\ \hline Right Lateral Heel & 150 & Left Fifth Toe Tip & 202 \\ \hline Right Medial Malleolus & 158 & Left Medial Malleolus & 178 \\ \hline Right Proximal Lateral Dorsal Foot & 159 & Left Proximal Medial Plant & 215 \\ \hline \end{tabular} \end{table} Table 1: Body Map examples of Lower leg region Figure 2: Full Body View [28] **Figure 4** excellently illustrates the variation in extracted ROIs from different classes of the wound dataset. This showcases the versatility of our wound localizer, capable of handling wounds of different origins, sizes, and stages. It successfully extracts the ROI, making the most relevant information available for analysis. **C.2. Data Split:** During this study, we utilize two distinct methods to partition the dataset. The utilization of two distinct partitioning provides a richer understanding of the model's behavior, potential biases, sensitivities, and the ability to generalize. It also facilitates a more robust and comprehensive evaluation process. The first approach consisted of splitting the data into training (70%), testing (15%), and validation (15%). The second partitioning method diverged slightly from the first, allocating 60% of the data for training, 15% for validation, and increasing the testing set to comprise 25% of the total dataset. **Table 2** shows both types of dataset splits on ROI images. **C.3. Data Augmentation:** Each image in the training set was augmented using transformation methods such as resizing, rotation, flipping (both vertically and horizontally), and application of affine transforms (including scaling, rotating, and shifting). Additional alterations such as the application of Gaussian noise and coarse dropout (i.e., random rectangular region removal) were also performed. These transformations were probabilistically applied, creating a diverse set of augmented samples as shown in **Figure 5**. The transformations ensured robustness of the model against variations in the data. **Figure 3:** Sample images from the AZH Wound and Vascular Center database. The rows from top to bottom display diabetic, pressure, surgical and venous samples, respectively. Figure 4: Sample ROIs. 
The columns from left to right display background, normal skin, diabetic, pressure, surgical and venous ROIs, respectively. Figure 5: Data Augmentation with leftmost original image. The rows from top to bottom display background, normal skin, diabetic, pressure, surgical and venous ROIs, respectively. ### ROI and wound location: In the ROI dataset we have two additional classes named normal skin and background which were created manually by selecting skin region for normal skin and any additional information as background from the original whole image dataset **Error! Bookmark not defined.[28]**. Sample of these two classes are shown in **Figure 4**. All of these were verified by wound specialists. Wound location was associated with each ROI image and assigned values from the body map discussed in section B. All the six classes abbreviation is shown in **Table 2.** ## Appendix D Model Our proposed deep learning model integrates multi-level features from various pre-existing models, utilizing custom layers and attention mechanisms to improve performance. Our model design has been adopted from C-Net architecture [53][54]. Basic model outline is displayed in **Figure 6**. ### Base Models: The proposed model utilizes three pre-trained Convolutional Neural Networks (CNNs) - ResNet152, VGG16, and EfficientNet-B2. In ResNet152, modifications include not only the removal of the average pooling and fully connected layers, but also alterations to the third Figure 6: Proposed model architecture outline Figure 7: Parallel Squeeze-and-Excitation block architecture outline block of the second layer by removing its last two layers and removing the final four layers of the overall model. For VGG16, the last twelve layers are omitted, capturing more primitive patterns. The last layer of EfficientNet-B2 is removed to maintain consistency with the modifications made to the other two models. These models, applied in parallel to the input, capture different levels of features. ### Custom Layers: The custom layers comprise a Convolutional Block (ConvBlock), followed by a Parallel Squeeze-and-Excitation (P_scSE) block [51], and a dropout layer. The ConvBlock is a combination of a convolution layer and a ReLU activation function, capturing spatial information and introducing non-linearity. The P_scSE block blends Channel-wise Squeeze-and-Excitation (cSE) and Spatial Squeeze-and-Excitation (sSE) operations. The cSE focuses on channel interdependencies, providing global context, while the sSE concentrates on the spatial interdependencies of each channel, maintaining detailed spatial information. Outputs from the cSE and sSE are merged using max-out and addition operations [51] as shown in **Figure**7. ### Aggregation and Fully Connected Layers: The base models' outputs are concatenated and fed through sequences of ConvBlocks and P_scSE blocks to merge and process multi-level features. The output is then flattened and passed through a dense layer. The output is further processed through a fully connected layer block. This block includes two dense layers enriched with axial-attention mechanisms, an enhancement over traditional attention mechanisms, focusing on individual dimensions separately. Interspersed with ReLU activation functions and dropout operations, the axial-attention mechanism boosts important features, helping the model to recognize complex patterns and dependencies in the data. 
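A rough PyTorch sketch of the P_scSE block described above is given below; the reduction ratio and the exact way the max-out and addition merges are combined are assumptions, not the paper's precise configuration.

```python
import torch
import torch.nn as nn

class ParallelSCSE(nn.Module):
    """Illustrative parallel spatial + channel squeeze-and-excitation block:
    a channel-wise SE branch and a spatial SE branch act on the same feature
    map and their outputs are merged with an element-wise max and an addition."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel squeeze-and-excitation (cSE): global context per channel.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation (sSE): per-pixel gating map.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_cse = x * self.cse(x)      # channel-recalibrated features
        x_sse = x * self.sse(x)      # spatially-recalibrated features
        return torch.maximum(x_cse, x_sse) + (x_cse + x_sse)  # max-out + addition merge

x = torch.randn(2, 64, 32, 32)
print(ParallelSCSE(64)(x).shape)     # torch.Size([2, 64, 32, 32])
```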
If wound location is used along with image data, then we use Adaptive-gated MLP to analyze wound location separately before concatenating them with the fully connected layers in the above model. This module is constructed as a series of linear transformations, axial attentions, and ReLU activations, followed by a final linear transformation to map the output to the target size. The MLP is gated, meaning it can learn to selectively propagate information through the network. The orderly arrangement of data is vital for the efficient functioning of the model. Consistency in the output from the image and location data is essential, thus necessitating the synchronous feeding of properly sequenced data into the model. This alignment was maintained by associating each Region of Interest (ROI) with a unique index number and mapping the corresponding wound location to this number. Given the categorical nature of wound location data, it was represented using one-hot encoding. ### Output Layer: The final dense layer maps to the number of output classes. ### Performance Metrics In our study, we employed various evaluation metrics such as accuracy, precision, recall, and the F1-score to scrutinize the effectiveness of the classifiers. The related mathematical formulations for these assessment metrics are illustrated in Equations 1 through 4. The abbreviations TP, TN, FP, and FN in these equations stand for True Positive, True Negative, False Positive, and False Negative, respectively. For a more comprehensive understanding of these equations and associated theories, readers are referred to reference [50]. \[Accuracy=\frac{TP+TN}{TP+FP+FN+TN} \tag{1}\] \[Precision=\frac{TP}{TP+FP} \tag{2}\] \[Recall=\frac{TP}{TP+FN} \tag{3}\] \[F1-Score=2\times\frac{Recall\times Precision}{Recall+Precision} \tag{4}\] ## Results In the present investigation, we deployed the advanced computational capacities of Google Colab Pro Plus A100, fortified with 40GB of memory. This enabled a methodical analysis involving both Region of Interest (ROI) and whole image-based classifications. The experimental setup involved processing images of 256x256 pixel dimensions, batched into groups of 32, across a course of 100 epochs. Our learning parameters were finely tuned to optimize the learning process: a learning rate of 0.0001 was chosen, with a minimum rate limit of 0.00001. To enhance the efficiency of our learning process, we applied the Adam optimizer [52]. **Classification Categories:** The classifiers were extensively trained to distinguish among various classes represented in the images, specifically: Diabetic (D), Venous (V), Pressure (P), Surgical (S), Background (BG), and Normal Skin (N). Further specifications and results regarding these classes will be provided in the ensuing sections of this paper. **Loss Function:** Cross Entropy was chosen as our loss function, given the multi-class and binary nature of our image classifications. Its mathematical formulation is as follows 26[56]: For multi-class problems, the cross-entropy loss, \(L\), is: \[L=-\sum_{i=0}^{n}(y_{i}\times\log{(p_{i})}) \tag{1}\] Here \(y_{i}\) is the actual label and \((p_{i})\) is the predicted probability for each class (\(i\)). 
For binary classification problem, the binary cross entropy loss, \(L\), is computed as: \[L=-\sum_{i=0}^{n}(y_{i}\times\log(p_{i})+(1-y_{i})\times\log(1-p_{i})) \tag{2}\] The optimization process strives to minimize this loss, thereby reducing the discrepancy between our model's predictions (\(p_{i}\)) and the actual labels (\(y_{i}\)). Further sections will elucidate the efficacy of this loss function within our research context. A ROI Classification The primary phase of the ROI classification trial pertains to the classification of 6 unique types of wound patches, specifically: diabetic, venous, pressure, surgical, BG, and N. Subsequently, the 5-category classification problem comprised three types of wound labels alongside BG and N categories. When addressing the 4-category classification, the objective centered on the categorization of the wound patches into one of the four classes: BG, N, along with two different wound labels. In the context of 3-category classification, the aim was to sort the wound patches into one of the three groups: D, P, S, V. For binary classification, a range of combinations including N, D, P, S, V were utilized to categorize the wound patches into two distinct groups. The dataset was split in two different ways, one is 70-15-15 and the other is 60-15-25, to observe and compare the best results. ### ROI multiclass classification without wound location The results of the ROI classifier's performance without wound location evaluation varied across different scenarios. For the 6-class classification case (BG, N, D, P, S, V), the test accuracy was 85.41% and 80.42% for the 70%, 15%, 15% and 60%, 15%, 25% data splits respectively. The precision, recall and F1-score for this case were 85.69%, 85.41%, 85.29% and 80.26%, 80.42%, 79.52% for each data split respectively, as displayed in **Table 3**. In the 5-class classification scenario, the results varied between the class combinations. The BG, N, D, S, V combination showed superior performance with test accuracies, precisions, recalls, and F1-scores of 91.86%, 92.29%, 91.86%, 91.91% and 91.04%, 91.30%, 91.04%, 90.96% for each data split respectively. Conversely, the BG, N, D, P, S class combination registered slightly lower accuracy rates of 87.73% and 84.39%, along with precision, recall and F1-score values of 88.91%, 87.73%, 87.74% and 84.39%, 84.39%, 84.39% for each data split respectively. When the classifier was tested for 4-class classification, BG, N, D, V demonstrated high accuracy rates of 96.90% and 96.22%, with precision, recall, and F1-score of 97.04%, 96.90%, 96.90% and 96.31%, 96.22%, 96.23% for each data split respectively. However, the BG, N, P, S combination indicated a decrease in accuracy at 87.01% and 85.71%, along with precision, recall, and F1-score values of 89.16%, 87.01%, 87.30% and 85.88%, 85.71%, 85.78% for each data split respectively. The performance for 3-class and 2-class classification showed a range of accuracy scores, with the 2-class case achieving 100% accuracy for the N, D combination in both data splits, with corresponding precision, recall, and F1-score values also being 100%. All these experiments were performed with data augmentation only on train data, as it consistently led to improved results. ### ROI multi-class classification with wound location Following the inclusion of wound location data in conjunction with image data, **Table 4** displays the performance metrics from experiments using an Adaptive-gated MLP to separately analyze the wound location. 
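A simplified sketch of such a gated location branch is shown below; the layer widths, the gating form, and the feature sizes are illustrative assumptions rather than the exact Adaptive-gated MLP used in the experiments.

```python
import torch
import torch.nn as nn

class GatedLocationMLP(nn.Module):
    """Illustrative gated MLP over the one-hot body-map location (484 regions).
    The gating lets the network learn to selectively propagate location
    information before fusion with the image features."""

    def __init__(self, n_locations: int = 484, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_locations, 256), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(n_locations, 256), nn.Sigmoid())
        self.out = nn.Linear(256, out_dim)

    def forward(self, loc_onehot: torch.Tensor) -> torch.Tensor:
        h = self.features(loc_onehot) * self.gate(loc_onehot)  # gated propagation
        return self.out(h)

# One-hot location vector fused with image features before the final fully
# connected layers; `img_feats` and the feature width are placeholders.
loc_onehot = torch.zeros(1, 484)
loc_onehot[0, 150] = 1.0                 # body-map location 150 (right lateral heel, Table 1)
img_feats = torch.randn(1, 512)
fused = torch.cat([img_feats, GatedLocationMLP()(loc_onehot)], dim=1)
print(fused.shape)                       # torch.Size([1, 640])
```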
This data was subsequently concatenated with the fully connected layers of the prior model. For the 6-class classification comprising BG, N, D, P, S, and V classes, the accuracy was recorded at 87.50% and 83.82%, precision at 88.04% and 83.42%, recall at 87.50% and 83.82%, and F1-score at 87.37% and 83.53% for the data splits of 70%,15%,15% and 60%,15%,25% respectively. Moving on to the 5-class classification, the class combination BG, N, D, S, V saw strong results with accuracy levels of 91.86% and 91.54%, precision at 91.99% and 91.65%, recall at 91.86% \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **No. of** & \multirow{2}{*}{**Classes**} & \multirow{2}{*}{**A**} & \multirow{2}{*}{**P**} & \multirow{2}{*}{**R**} & \multirow{2}{*}{**F**} \\ **Classes** & & & & & \\ \hline [MISSING_PAGE_POST] \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline **No. of** & \multirow{2}{*}{**Classes**} & \multirow{2}{*}{**A**} & \multirow{2}{*}{**P**} & \multirow{2}{*}{**R**} & \multirow{2}{*}{**F**} \\ [MISSING_PAGE_POST] and 91.54%, and F1-score at 91.85% and 91.50% across the two data splits. Conversely, the BG, N, D, P, S combination demonstrated lower accuracy at 84.90% and 84.39%, precision at 85.28% and 85.56%, recall at 84.90% and 84.39%, and F1-score at 84.96% and 83.92%. In the context of the 4-class classification, the BG, N, D, V combination once again showed impressive metrics with accuracy rates of 95.87% and 96.22%, precision at 96.06% and 96.37%, recall at 95.87% and 96.22%, and F1-score at 95.83% and 96.24%. On the other hand, the BG, N, P, S combination witnessed a decrease in performance, registering accuracy levels of 90.90% and 88.88%, precision at 91.50% and 88.90%, recall at 90.90% and 88.88%, and F1-score at 91.03% and 88.72% for each respective data split. For the 3-class and 2-class classification models, a range of performance scores were observed. The 2-class case, particularly the N, D combination, achieved perfect performance with accuracy, precision, recall, and F1-score all at 100% in both data splits. The D, P class combination, however, recorded the lowest performance levels for this category with accuracy at 86.14% and 86.41%, precision at 86.14% and 86.73%, recall at 86.00% and 86.41%, and F1-score at 86.03% and 86.22%. In conclusion, the results show that the incorporation of wound location data alongside image data led to variations in accuracy, precision, recall, and F1-score based on the number and combination of classes, as well as the distribution of the data split. Furthermore, the use of an Adaptive-gated MLP for separate wound location analysis consistently resulted in promising outcomes across all experiments. \begin{tabular}{|c|c|c|c|c|c|c|} \hline **No. 
of** & & & & & & & \\ **Classes** & **Classes** & **A** & **P** & **R** & **F** \\ \hline **6 Class** & **BG, N,** & & & & & & \\ **5 Class** & **BG, N,** & **87.50** & 88.04 & 87.50 & 87.37 \\ \hline **5 Class** & **BG, N,** & 91.52 & 91.51 & 91.52 & 91.44 \\ **5 Class** & **BG, N,** & **91.86** & 91.99 & 91.86 & 91.85 \\ **5 Class** & **BG, N,** & 84.90 & 85.28 & 84.90 & 84.96 \\ **5 Class** & **BG, N,** & 86.70 & 86.99 & 86.70 & 86.34 \\ **5 Class** & **BG, N,** & 95.87 & 96.06 & 95.87 & 95.83 \\ **4 Class** & **BG, N,** & 94.38 & 94.50 & 94.38 & 94.34 \\ **4 Class** & **BG, N,** & **96.80** & 97.04 & 96.80 & 96.80 \\ **5 Class** & **BG, N,** & 88.75 & 89.05 & 88.75 & 88.78 \\ **5 Class** & **BG, N,** & 92.94 & 93.24 & 92.94 & 92.74 \\ **5 Class** & **BG, N,** & 90.90 & 91.50 & 90.90 & 91.03 \\ **5 Class** & **BG, N,** & 92.47 & 92.79 & 92.47 & 92.34 \\ **5 Class** & **PG, S, V** & 87.05 & 87.18 & 87.05 & 87.10 \\ \hline **5 Class** & **BG, N,** & 81.57 & 81.69 & 81.57 & 81.07 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **No. of** & & & & & & \\ **Classes** & **B ## Appendix B Whole Image Classification In the whole image classification, the precision, recall, and F1-score measurements show that the incorporation of these metrics, alongside accuracy, provides a more comprehensive understanding of the model's performance. **Table 5** depicts these additional measurements, and they reveal interesting patterns that match with the observed accuracy rates. For the 4-class classification comprising D, P, S, and V, precision, recall, and F1-scores were observed at 83.22%, 83.13%, and 82.26% respectively for the 70-15-15 data split. For the 60-15-25 split, these scores were slightly lower, coming in at 78.60%, 78.10%, and 76.75%, respectively. This pattern is similarly reflected in the accuracy measurements for the same class combination and data splits. In the 3-class classification, the D, S, V combination showed a high precision of 93.48%, recall of 92.64%, and F1-score of 92.54% for the 70-15-15 split. Conversely, the D, P, S combination demonstrated lower values, with a precision of 82.66%, recall of 81.35%, and F1-score of 80.72% in the same split. Focusing on the 2-class classification, all N-related combinations (N, D; N, P; N, S; N, V) achieved perfect precision, recall, and F1-score of 100% in both data splits. However, other combinations like D, P and P, S displayed lower scores. The D, P combination, for instance, recorded precision, recall, and F1-score of 89.38%, 87.17%, and 86.50% respectively for the 70-15-15 split, and 86.03%, 84.37%, and 83.61% respectively for the 60-15-25 split. In conclusion, the whole image classification performance, as depicted by precision, recall, F1-score, and accuracy, varies based on the number of classes and the specific class combinations. N-related combinations in the 2-class category consistently showed perfect precision, recall, and F1-scores, indicating optimal classification performance. These results provide significant insights and avenues for further research and optimization in whole image classification. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **No. 
of Class** & **Classes** & **A** & **P** & **R** & **F** \\ **s** & **S** & & & & & \\ \hline **4 Class** & \begin{tabular}{c} **D, P,** \\ **S, V** \\ \end{tabular} & **83.13** & 83.22 & 83.13 & 82.26 \\ \hline **3 Class** & \begin{tabular}{c} **D, S, V** \\ **P, S, V** \\ \end{tabular} & **92.64** & 93.48 & 92.64 & 92.54 \\ \hline **P, S, V** & 89.83 & 89.71 & 89.83 & 89.73 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline **No. of Class** & **Classes** & **A** & **P** & **R** & **F** \\ **s** & & & & & \\ \hline **4 Class** & \begin{tabular}{c} **D, P,** \\ **S, V** \\ \end{tabular} & **83.13** & 83.22 & 83.13 & 82.26 \\ \hline **3 Class** & \begin{tabular}{c} **D, S, V** \\ **P, S, V** \\ \end{tabular} & **92.64** & 93.48 & 92.64 & 92.54 \\ \hline **P, S, V** & 89.83 & 89.71 & 89.83 & 89.73 \\ \hline \end{tabular} \end{table} Table 4: ROI image with location-based classification with different data split (Left - 70%,15%,15%, Right - 60%,15%,25%). P = Precision, R = Recall, F = F1-score, A = Accuracy ## Appendix C Cross Validation Cross-validation is a robust methodology we employed in our experiments to validate the performance of our machine learning model. It involves splitting the data into several subsets or 'folds', in this case, five. We then train the model on all but one of these folds, testing the model's performance on the unused fold as displayed in **Table 7**. This process is repeated for each fold, giving us a better understanding of how our model might perform on unseen data. It's particularly useful in situations where our dataset is limited, as it maximizes the use of data for both training and validation purposes. Due to resource constraints, our experimental scope was confined to select procedures. As such, we were only able to conduct a limited subset of experiments, focusing on those deemed most crucial and promising. In the first scenario, we explored an approach called "ROI without location" with an 80-20 data split. Here, the average accuracy varied across different groupings of classes. The accuracy for a grouping of six classes fluctuated between 80.01% to 85.34%, giving an average of 82.58%. For five classes, it varied from 80.71% to 87.14%, with an average of 82.28%. In a group of four classes, we observed a higher average accuracy of 95.65%, while three classes gave us an average accuracy of 74.80%. The second method we looked at was "ROI with location". Here, we noticed a similar pattern to our first method. The six-class grouping showed an average accuracy of 83.77%, with individual tests ranging from 80.10% to 86.91%. The five-class grouping had an average accuracy of 81.85%, ranging between 78.57% and 84.28%. For four classes, the average accuracy was high again at 95.50%, while the three classes gave us an average of 76.60%. Finally, we examined the "whole image" method with the same 80-20 data split. A four-class grouping resulted in an average accuracy of 78.34%. One group of three classes managed a much higher average accuracy of 89.86%, while the other three-class group had an average accuracy of 78.22%. 
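For reference, the 5-fold protocol described here can be sketched as follows; the label vector and the training routine are placeholders standing in for the actual training pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def train_and_evaluate(train_idx, test_idx):
    """Placeholder for the training/evaluation loop described above."""
    return np.random.uniform(0.75, 0.90)   # dummy accuracy for illustration

# Dummy label vector standing in for the six-class ROI labels (0..5).
labels = np.random.randint(0, 6, size=730)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_acc = [train_and_evaluate(tr, te)
            for tr, te in skf.split(np.zeros((len(labels), 1)), labels)]
print(f"per-fold accuracy: {np.round(fold_acc, 3)}, mean = {np.mean(fold_acc):.3f}")
```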
\begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline & **D, P, S** & 81.35 & 82.66 & 81.35 & 80.72 \\ \cline{2-7} & **D, P,** & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} & \multirow{2}{*}{87.30} \\ & **V** & & & & \\ \hline \multirow{8}{*}{**2 Class**} & **N, D** & \multirow{2}{*}{**100.0**} & 100.0 & 100.0 & 100.0 & 100.0 \\ & & & 0 & 0 & 0 & 0 \\ \cline{2-7} & **N, P** & \multirow{2}{*}{**100.0**} & 100.0 & 100.0 & 100.0 & 100.0 \\ & & & 0 & 0 & 0 & 0 \\ \cline{2-7} & **N, S** & \multirow{2}{*}{**100.0**} & 100.0 & 100.0 & 100.0 & 100.0 \\ & & & 0 & 0 & 0 & 0 \\ \cline{2-7} & **N, V** & \multirow{2}{*}{**100.0**} & 100.0 & 100.0 & 100.0 & 100.0 \\ & & & 0 & 0 & 0 & 0 \\ \cline{2-7} & **D, P** & 87.17 & 89.38 & 87.17 & 86.50 & 86.50 \\ & **D, S** & 95.45 & 95.80 & 95.45 & 95.42 & \\ \cline{2-7} & **D, V** & \multirow{2}{*}{**100.0**} & 100.0 & 100.0 & 100.0 & 100.0 \\ & & & 0 & 0 & 0 & 0 \\ \cline{2-7} & **P, S** & 88.57 & 89.26 & 88.57 & 88.62 & \\ \cline{2-7} & **P, V** & 89.74 & 91.20 & 89.74 & 89.34 & \\ \cline{2-7} & **S, V** & 95.45 & 95.80 & 95.45 & 95.42 & \\ \hline \end{tabular} \end{table} Table 5: AZH Whole image-based classification with different data split (Left - 70%,15%,15%, Right - 60%,15%,25%). P = Precision, R = Recall, F = F1-score, A = Accuracy Overall, these results show that the different methods and the number of classes used can have varied impacts on performance. ### Robustness and Reliability To assess the robustness and reliability of our model, we performed multiple tests with varying class distributions on two distinct datasets: the newly created AZH Dataset and the Medetec Dataset. We picked the latter due to its unique data collection and distribution features. Through rigorous testing, we gauged our model's adaptability to diverse conditions. First, we examined our model on the AZH dataset with a class distribution of 60-15-25 for classes D, P, and V. The model showed notable robustness, achieving an accuracy, precision, recall, and F1-score of 82.69%, 82.52%, 82.69%, and 82.30% respectively. Next, we used the Medetec dataset for a whole image-based classification. The model continued to showcase excellent robustness despite a different data distribution, registering an accuracy, precision, recall, and F1-score of 87.50%, 87.44%, 87.50%, and 87.43% respectively. We then altered the class distribution to 70-15-15 on the AZH dataset. The model continued to perform robustly, achieving 87.30% accuracy. In a similar variation on the Medetec dataset, the model held its high performance with accuracy, precision, recall, and F1-score of 88.57%, 88.65%, 88.57%, and 88.50%. The series of tests reaffirm our model's consistency and adaptability, demonstrating its ability to perform at a high level regardless of class distribution changes or dataset characteristics. This confirms its robustness and versatility as a data analysis tool. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline **No. of** & \multirow{2}{*}{**Classes**} & \multirow{2}{*}{**A**} & \multirow{2}{*}{**P**} & \multirow{2}{*}{**R**} & \multirow{2}{*}{**F**} & **No. 
of** & \multirow{2}{*}{**Classes**} & **A** & **P** & **R** & **F** \\ **Classes** & & & & & & & **3 Class** & **D, P, V** & **87.44** & 87.50 & 87.43 \\ \hline **3 Class** & **D, P, V** & **88.57** & 88.65 & 88.57 & 88.50 & **3 Class** & **D, P, V** & **87.50** & 87.44 & 87.50 & 87.43 \\ \hline \end{tabular} \end{table} Table 6: Medetec Whole image-based classification with different data split (Left - 70%,15%,15%, Right - 60%,15%,25%). P = Precision, R = Recall, F = F1-score, A = Accuracy \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{**ROI without Location with 80-20 Data Split (\%)**} \\ \hline **No. of Classes** & **Classes** & **Fold1** & **Fold2** & **Fold3** & **Fold4** & **Fold5** & **AVG** \\ \hline **6 Class** & **BG, N, D, P, S, V** & 80.01 & 80.01 & 82.72 & **85.34** & 84.81 & 82.58 \\ \hline **5 Class** & **BG, N, D, P, S** & 81.42 & 80.71 & 80.71 & 81.42 & **87.14** & 82.28 \\ \hline **4 Class** & **BG, N, D, V** & **96.89** & 93.79 & 96.12 & 95.34 & 96.11 & 95.65 \\ \hline **3 Class** & **D, P, S** & **80.00** & 71.00 & 73.00 & 72.00 & 78.00 & 74.80 \\ \hline \multicolumn{8}{|c|}{**ROI with Location with 80-20 Data Split (\%)**} \\ \hline & **Classes** & **Fold1** & **Fold2** & **Fold3** & **Fold4** & **Fold5** & **AVG** \\ \hline **6 Class** & **BG, N, D, P, S, V** & **86.91** & 83.24 & 80.10 & 83.24 & 85.34 & 83.77 \\ \hline **5 Class** & **BG, N, D, P, S** & 80.71 & 83.57 & 78.57 & 82.14 & **84.28** & 81.85 \\ \hline **4 Class** & **BG, N, D, V** & 94.57 & 95.34 & **96.12** & 95.34 & **96.12** & 95.50 \\ \hline **3 Class** & **D, P, S** & 73.00 & 78.00 & 79.00 & 73.00 & **80.00** & 76.60 \\ \hline \multicolumn{8}{|c|}{**Whole Image with 80-20 Data Split (\%)**} \\ \hline & **Classes** & **Fold1** & **Fold2** & **Fold3** & **Fold4** & **Fold5** & **AVG** \\ \hline **4 Class** & **D, P, S, V** & 77.71 & 78.37 & 79.27 & 77.47 & **78.87** & 78.34 \\ \hline **3 Class** & **D, S, V** & 90.10 & 86.81 & **94.50** & 91.20 & 86.68 & 89.86 \\ \hline **3 Class** & **D, P, S** & 79.74 & 75.94 & **81.01** & 79.74 & 74.68 & 78.22 \\ \hline \end{tabular} \end{table} Table 5: Model-based classification with different data split (Left - 70%,15%,15%,15%,15%,15%,15%,15%,15%,25%). P = Precision, R = Recall, F = F1-score, A = Accuracy ## 6 Conclusion In this paper, we have proposed a method for the estimation of the Figure 9: Confusion matrix and ROC curve for six class classification on BG-N-D-P-S-V (class 0-1-2-3-4-5), left column displays dataset with 70/15/15 and right column displays dataset with 60/15/25. ## Discussion **A. Comparison with previous work:** Our study presents a comprehensive comparison of our model's performance with those of previous studies, namely the research conducted by Rostami et al. [36], Anisuzzaman et al. [28], Goyal et al. [32], and Aguirre et al.[33]. The comparison is based on accuracy as the evaluation metric, which is a common criterion for classification tasks. 
For each work, we have tested our model on the same dataset and compared the results as displayed in **Table** \begin{table} \begin{tabular}{|c|c|c|c|} \hline **[28]** & **BG, N, D, P, V** & (60-15-25) & 86.46 (60-15-25) & **89.11** \\ **5 Class** & **BG, N, D, S, V** & (Selected & 91.00 & Our model is & **91.54** \\ & **BG, N, D, P, S** & Accuracy & 83.14 & fixed, and & **84.39** \\ & **BG, N, P, S, V** & based on & 86.17 & we did not & **88.82** \\ \hline **4 Class** & **BG, N, D, V** & author’s & 95.57 & used any & **96.22** \\ & **BG, N, P, V** & highlight & 92.47 & different & **93.15** \\ & **BG, N, D, P** & across & 94.16 & combinations & **96.10** \\ & **BG, N, D, P** & different & 89.23 & & **89.31** \\ & **BG, N, D, S** & models) & 91.30 & & **93.52** \\ & **BG, N, P, S** & 85.71 & & **88.88** \\ **3 Class** & **D, P** & **92.00** & & 90.72 \\ & **B, S** & **85.51** & & 84.05 \\ & **B, D, V** & 72.95 & & **73.98** \\ & **B, P, V** & 84.51 & & **86.71** \\ & **B, N, D** & 100.0 & & **100.0** \\ & **B, N, P** & **98.31** & & 96.61 \\ & **B, N, S** & **98.51** & & 98.50 \\ & **B, N, V** & **100.0** & & 98.85 \\ & **B, D, P** & 85.00 & & **86.41** \\ & **B, S** & 89.77 & & **89.88** \\ & **B, D, V** & 94.44 & & **97.24** \\ & **B, P, S** & **89.47** & & 84.21 \\ & **B, V** & 90.63 & & **92.70** \\ & **B, S, V** & 97.12 & & **94.23** \\ \hline **Goyal et al.** [32]** & **2 Class** & **N, D** & \multirow{2}{*}{DFU Dataset} & \multirow{2}{*}{92.50} & AZH Dataset & **100.0** \\ **Goyal et al.** [32]** & **2 Class** & & & **B**G** \\ & & & & **B**G** \\ **Aguirre et al.** [33]** & **2 Class** & **N, V** & A dataset of & & **100.0** \\ & **D, V** & 300 & AZH Dataset & & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{85.00} & AZH Dataset & **100.0** \\ & **B, V** & & wound images & & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **N, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **100.0** \\ **Aguirre et al.** [33]** & **2 Class** & **D, V** & \multirow{2}{*}{85.00} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & wound images & & **90.62** \\ **Aguirre et al.** [33]** & **2 Class** & **N, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **100.0** \\ **Aguirre et al.** [33]** & **2 Class** & **D, V** & \multirow{2}{*}{85.00} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **90.62** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **100.0** \\ **Aguirre et al.** [33]** & **2 Class** & **D, V** & \multirow{2}{*}{85.00} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **90.62** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **97.24** \\ **Aguirre et al.** [33]** & **2 Class** & **P, V** & \multirow{2}{*}{A dataset of 300} & AZH Dataset & **97. **8. Figure 8** and Figure 9 display confusion matrix for 3-class (P, S and V) and 6-class (BG, N, D, P, S, V). 
**Figure 9** also display ROC plots for 6-class ROI image with location-based image classification. In the case of Rostami et al.'s work [36], the classification was carried out on a 6-class, 5-class, and 4-class basis using the AZH dataset. Our model outperformed the previous work by a notable margin across all class divisions. For example, in the 6-class division (BG, N, D, P, S, V), our model improved the accuracy by approximately 11.73%. Furthermore, for the 5-class and 4-class divisions, our model consistently showed improvements, highlighting its efficiency and robustness. Anisuzzaman et al.'s work [28] also used the AZH dataset, with a focus on 6-class, 5-class, and 4-class divisions. Our model yielded better accuracy results, such as an increase of about 1.34% in the 6-class division. The consistency of improved performance in all divisions showcases the broad applicability of our model. As for the work of Goyal et al. [32], they only classified into a 2-class division (N, D) using the DFU dataset. When tested on the AZH dataset, our model demonstrated 100% accuracy, similar to their findings. This highlights the versatility of our model in achieving high accuracy across different datasets. Aguirre et al. [33] conducted their research on a dataset of 300 wound images with a 2-class division. Their model yielded an 85% accuracy rate, while our model, when tested on the AZH dataset, showed a significant improvement in the accuracy rates, ranging from 92.70% to 100%. **B. Limitations and Future Research:** While our research demonstrates the strengths of our model, it is not without limitations. For instance, all comparisons in our current study were conducted using the AZH and Medetec dataset. Although our model performed commendably on this dataset, the results might not be fully generalizable to all datasets. Hence, the applicability of our model to other datasets remains an area for further investigation. It's noteworthy that our study does not solely rely on accuracy as the evaluation metric. In an attempt to provide a comprehensive evaluation, we also considered other metrics such as precision, recall, and the F1 score. This thorough approach helps to give a well-rounded understanding of our model's performance. However, despite its strong performance, there could be scenarios or datasets where the model might not yield the same level of success, a potential caveat to be explored in future work. Future research should be focused on testing the model with larger and more diverse datasets to ensure its generalizability. Specifically, addressing the issue of overlap between healthy and diseased skin, possibly through refining the image preprocessing or feature extraction stages, could yield significant improvements. Furthermore, conducting comparative studies using a wider range of evaluation metrics could offer a broader understanding of the model's strengths and weaknesses. In addition to further empirical evaluation, there is also potential to investigate the theoretical properties of the model. Understanding why the model performs as it does could lead to insights that drive further improvements. ## Conclusion In this study, we presented a multi-modal wound classification network that uniquely incorporates both images and corresponding wound locations to categorize wounds. Differing from previous research, our approach utilizes a pre-existing body map and two datasets to classify wounds based on their locations. 
Our model is built on a novel deep learning architecture, featuring parallel squeeze-and-excitation blocks (P_scSE), adaptive gated multi-layer perceptron (MLP), axial attention mechanism, and convolutional layers. The integration of image and location data contributed to superior classification outcomes, demonstrating the potential of multi-modal data utilization in wound management. Despite the benefits, our work has some limitations, including data scarcity which affects the generality of our model. Looking ahead, future research will aim to enhance our model by incorporating more modalities such as pain level, palpation findings, general observations, wound area and volume, and patient demographics. Addressing data overlaps in wound location will also be a priority to enhance classification accuracy. Our efficient wound care algorithm has significant potential for automation in wound healing systems, offering cost-effectiveness and aiding clinicians in prompt diagnosis and development of suitable treatment plans. Especially in resource-scarce areas, AI-enabled wound analysis can contribute to rapid diagnosis and quality treatment. However, this necessitates proper technical training for both patients and physicians, which will also be a focus of future work. Expanding our dataset will help improve our model's performance and better serve wound care providers and patients alike. ## Data availability The AZH dataset can be accessed via the following link: (Link). Due to Authorship conflict, we cannot make Medetec dataset public.
2305.00986
Meat Freshness Prediction
In most retail stores, the number of days since initial processing is used as a proxy for estimating the freshness of perishable foods, or freshness is assessed manually by an employee. While the former method can lead to wastage, as some fresh foods might get disposed of after a fixed number of days, the latter can be time-consuming, expensive and impractical at scale. This project aims to propose a Machine Learning (ML) based approach that evaluates the freshness of food based on live data. For the current scope, it only considers meat as the subject of analysis and attempts to classify pieces of meat as fresh, half-fresh or spoiled. Finally, the model achieved an accuracy of above 90% and relatively high performance in terms of the cost of misclassification. It is expected that the technology will contribute to the optimization of the client's business operation, reducing the risk of selling defective or rotten products that can entail serious monetary, non-monetary and health-based consequences, while also achieving higher corporate value as a sustainable company by reducing food wastage through timely sales and disposal.
Bhargav Sagiraju, Nathan Casanova, Lam Ivan Chuen Chun, Manan Lohia, Toshinori Yoshiyasu
2023-05-01T04:02:50Z
http://arxiv.org/abs/2305.00986v1
# Meat Freshness Prediction ###### Abstract In most retail stores, the number of days since initial processing is used as a proxy for estimating the freshness of perishable foods, or freshness is assessed manually by an employee. While the former method can lead to wastage, as some fresh foods might get disposed of after a fixed number of days, the latter can be time-consuming, expensive and impractical at scale. This project aims to propose a Machine Learning (ML) based approach that evaluates the freshness of food based on live data. For the current scope, it only considers meat as the subject of analysis and attempts to classify pieces of meat as fresh, half-fresh or spoiled. Finally, the model achieved an accuracy of above 90% and relatively high performance in terms of the cost of misclassification. It is expected that the technology will contribute to the optimization of the client's business operation, reducing the risk of selling defective or rotten products that can entail serious monetary, non-monetary and health-based consequences, while also achieving higher corporate value as a sustainable company by reducing food wastage through timely sales and disposal. * If meat is purchased, it will be consumed, and thus if spoiled meat is purchased, the customer will have a health issue. A total cost of $100,000 will be incurred due to legal action, reporting to health-related authorities, loss of corporate trust, and other related factors. ## 3 Methodology This project experiments with two different models, namely ResNet and UNet, to predict meat freshness, comparing their performance on a fixed set of metrics to identify the best performer. The original train dataset is split into train and validation datasets that are used for model development and hyperparameter tuning. The original validation set is treated as an 'unseen' test dataset that is only used for final model evaluation after all tuning has been done. In evaluating model performance, a metric called MisClassification Cost (MCC), which is defined specifically for this project, is used. MCC is an expected value and represents the cost associated with misclassification, which depends on the actual class and the direction of the misclassification. A special metric apart from class-based accuracy, precision, recall, etc., is required as the cost of misclassification is not symmetric, and hence certain misclassifications are more expensive or riskier than others. MCC is calculated as _expected loss from misclassification_ less _expected gain from misclassification_, which considers the probability of purchase in each case explained in Section 2 (Assumptions). Misclassification on actual SP samples leads to serious consequences due to the potential health issues it could cause for customers. Furthermore, "misclassifying actual SP as HF" has a higher expected cost than "misclassifying actual SP as FR". This is because discounts due to "predicted HF" can increase the purchase probability, which increases the risk. Thus, predicting actual SP as HF is considered the most costly misclassification that should be avoided in this project. Tables 2 and 3 show the MCC of each combination of actual and predicted classes and the calculation of MCC. An ideal model is one which minimizes the MCC. Hyperparameter settings and corresponding total MCC are recorded in Weights & Biases in order to compare the performance of different models more easily.
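The per-pair MCC values in Tables 2 and 3 follow directly from the prices, purchase probabilities and health-issue cost assumed above. A minimal sketch of how such a cost matrix could be assembled (the class labels and purchase probabilities mirror Table 1; the dollar figures should be treated as illustrative assumptions rather than the project's exact constants):

```python
import numpy as np

CLASSES = ["FR", "HF", "SP"]                    # fresh, half-fresh, spoiled
PRICE = {"FR": 10.0, "HF": 5.0, "SP": 0.0}      # selling price implied by the predicted class
# Purchase probability given (actual condition, predicted/priced class); predicted-spoiled items are discarded.
P_BUY = {("FR", "FR"): 0.90, ("FR", "HF"): 1.00, ("FR", "SP"): 0.0,
         ("HF", "FR"): 0.10, ("HF", "HF"): 0.90, ("HF", "SP"): 0.0,
         ("SP", "FR"): 0.01, ("SP", "HF"): 0.05, ("SP", "SP"): 0.0}
HEALTH_COST = 10_000.0                          # assumed cost when spoiled meat is sold and consumed

def mcc(actual: str, pred: str) -> float:
    """Expected loss minus expected gain for one misclassified item."""
    if actual == pred:
        return 0.0
    if actual == "SP":
        loss = HEALTH_COST * P_BUY[(actual, pred)]      # chance the spoiled item is bought and eaten
    else:
        loss = PRICE[actual] * P_BUY[(actual, actual)]  # foregone revenue of the correct classification
    gain = PRICE[pred] * P_BUY[(actual, pred)]          # revenue still earned at the wrong price
    return loss - gain

cost_matrix = np.array([[mcc(a, p) for p in CLASSES] for a in CLASSES])
print(cost_matrix)  # e.g. the SP-predicted-as-HF entry is 10000*5% - 5*5% = 499.75 per item
```

Summing these entries against a model's confusion matrix gives the cumulative MCC figures used later when comparing models.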
It has to be noted that the evaluation in this project highly depends on the assumptions set in the previous section. In the real business setting, MCC must be modified following the user's actual discounting policy, purchase probabilities and estimated cost of any consequences. Simultaneously, to further enhance the model robustness, MCC was not used during training as an evaluation metric or the loss function. By doing this, the model is prevented from learning to optimize based on MCC and attempts to optimize on a different metric instead, namely the cross-entropy loss. The benefit is that model evaluation is done independently of model training, avoiding any data leakage and increasing the reliability of the model on unseen data. ## 4 Data Description & EDA This project assumes the Meat Freshness Image Dataset (Vinayakshanawad, 2020) is the dataset provided by the client. The data consists of two folders for train and test datasets. The images are 416 x 416 pixels, with the train dataset having 1,816 images and the test dataset having 452 images. All images are of red meats. There are three classes of images as discussed in the Assumptions section: fresh, half-fresh and spoiled. \begin{table} \begin{tabular}{l c c} \hline \hline Actual & \$10 (Pred as FR) & \$5 (Pred as HF) \\ \hline FR & 90\% & 100\% \\ HF & 10\% & 90\% \\ SP & 1\% & 5\% \\ \hline \hline \end{tabular} \end{table} Table 1: Purchase Probability of Each Actual\(|\)Price(Pred) Combination \begin{table} \begin{tabular}{l c c} \hline \hline Actual\(|\)Pred & \multicolumn{1}{c}{Consequence} & MCC \\ \hline FR \(|\)FR & \(\$10*90\%\) & \$5*100\% \\ FR \(|\)SP & \(\$10*90\%\) & \$0 \\ HF \(|\)FF & \(\$50\) & \$0 \\ HF \(|\)FR & \(\$5*90\%\) & \$10*10\% \\ HF \(|\)SP & \(\$5*90\%\) & \$0 \\ SP \(|\)SP & \(\$0\) & \$0 \\ SP \(|\)FR & \(\$10,000*1\%\) & \$10*1\% \\ SP \(|\)HF & \(\$10,000*5\%\) & \$5*5\% \\ \hline \hline \end{tabular} \end{table} Table 2: MCC of Each Actual\(|\)Pred Combination \begin{table} \begin{tabular}{l c c} \hline \hline Actual\(|\)Pred & \multicolumn{1}{c}{Consequence} & MCC \\ \hline FR \(|\)FR & \(\$10*90\%\) & \$5*100\% \\ FR \(|\)SP & \(\$10*90\%\) & \$0 \\ HF \(|\)FF & \(\$50\) & \$0 \\ HF \(|\)FP & \(\$5*90\%\) & \$0 \\ SP \(|\)SP & \(\$0\) & \$0 \\ SP \(|\)FR & \(\$10,000*1\%\) & \$10*1\% \\ SP \(|\)HF & \(\$10,000*5\%\) & \$5*5\% \\ \hline \hline \end{tabular} \end{table} Table 3: Calculation of MCC During EDA, the class balance and pixel value frequency were explored to see if there were any abnormalities with the dataset before pre-processing. For class balance, the three classes were relatively balanced within the training dataset as shown below. For pixel value frequency, it can be determined that the three classes have distinct distributions of pixel value frequencies as shown below. The distribution is significantly concentrated on the lighter pixels (255) for fresh meats, while the distribution is concentrated on the dark pixels (0) for the spoiled meats. Half-fresh meat also has a distribution that is more concentrated on the lighter pixels than the spoiled meats. This is reasonable since on most meats the first sign of rot can be visibly detected by darker colored areas on the meat. ## 5 Preprocessing For pre-processing, augmentation was applied to the dataset in order for the models to be trained on more data with noise and also not overfit.
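A hypothetical torchvision pipeline for augmenting the 416 x 416 images might look like the sketch below; the specific transforms and the resize to the ResNet input size are illustrative assumptions, not the exact pipeline used in this project:

```python
from torchvision import transforms

# Assumed training-time augmentations; swap in the project's own choices as needed.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),          # assumed resize to the usual ResNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Validation/test images get only the deterministic steps.
eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```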
The training dataset was split into training and validation sets, and a transformation class and pipeline were established to augment the data; the transformations were applied to the training dataset.

Figure 1: Sample Image per Class
Figure 2: Number of Images per Class
Figure 3: Pixel Value Distribution for Images of Fresh Meat. Pixel value from darker (0) to lighter (255).
Figure 4: Pixel Value Distribution for Images of Half-Fresh Meat. Pixel value from darker (0) to lighter (255).
Figure 5: Pixel Value Distribution for Images of Spoiled Meat. Pixel value from darker (0) to lighter (255).

## 6 Algorithm and Modeling ### ResNet One of the models utilized to predict the freshness of the meat based on its image is ResNet (He et al., 2016). ResNet is a convolutional neural network architecture that introduced residual blocks, which allowed for effective training of deep neural networks. To fit ResNet for this paper's freshness prediction task, the final layer is reshaped to have the same output count as our image classes. Two variants of the ResNet architecture were used to train models: ResNet-18 and ResNet-50. The former is 18 layers deep while the latter is 50 layers deep. For model training, two transfer learning strategies were utilized. The first is feature extraction, which utilized the pre-trained weights of ResNet to get the image embeddings, and only the parameters of the final layer were updated during training. The pre-trained weights used for feature extraction were acquired from the PyTorch library (Paszke et al., 2019). The second strategy is to fine-tune the whole ResNet architecture by updating all model parameters using the dataset. After training, the models were evaluated on the test set and the models' performance metrics and final hyperparameters used are reported in Tables 4 and 5 below. All model weights are saved after training should the client decide that the solution proposed is suitable for deployment. The best performing model based on test set accuracy, precision and recall is the fine-tuned ResNet-18. Interestingly, the results show that a deeper network doesn't necessarily translate to better model performance. While the ResNet-50 feature extraction model performed better than the ResNet-18 feature extraction model, the ResNet-18 fine-tuned model is superior to the ResNet-50 fine-tuned model. The ResNet-18 fine-tuned model also performed better than both of the ResNet-50 models. This is probably due to the relatively small number of samples used to train the models. The deeper ResNet-50 architecture might be over-fitting or might not be learning better representations of each image class, but this remains speculative given the high complexity of ResNet-50. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Accuracy & Precision & Recall \\ \hline ResNet 18-FE & 82.71\% & 84.66\% & 83.71\% \\ ResNet 18-FT & **93.13\%** & **93.97\%** & **92.85\%** \\ ResNet 50-FE & 88.03\% & 88.35\% & 88.86\% \\ ResNet 50-FT & 84.70\% & 84.50\% & 86.14\% \\ \hline \hline \end{tabular} \end{table} Table 4: ResNet Model Performance \begin{table} \begin{tabular}{l c} \hline \hline Hyperparameter & Value \\ \hline Batch Size & 32 \\ Epochs & 5 \\ Optimizer & Adam \\ Learning Rate & 0.001 \\ Loss Criterion & Cross-Entropy \\ \hline \hline \end{tabular} \end{table} Table 5: ResNet Hyperparameters Figure 6: Sample Image after Transformation
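The two transfer-learning strategies described above differ only in whether the pretrained backbone is frozen; a sketch of how both variants could be instantiated with torchvision (the helper function and its defaults are ours, not the project's code):

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # fresh, half-fresh, spoiled

def build_resnet(depth: int = 18, feature_extraction: bool = True) -> nn.Module:
    """ResNet-18/50 with the final layer reshaped to the three freshness classes."""
    # Older torchvision API; newer versions replace pretrained=True with the weights= argument.
    model = models.resnet18(pretrained=True) if depth == 18 else models.resnet50(pretrained=True)
    if feature_extraction:
        # Freeze the pretrained backbone; only the new classifier head is trained.
        for param in model.parameters():
            param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head, always trainable
    return model

resnet18_fe = build_resnet(18, feature_extraction=True)    # "FE" variant: frozen backbone
resnet50_ft = build_resnet(50, feature_extraction=False)   # "FT" variant: all parameters updated
```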
### UNet with Dense Net A semi-supervised approach can also be used to classify images; one of the more common methods is image segmentation. Image Segmentation is a method of image representation that uses a set of "masks" which act as a form of ground truth to segment specific portions of our images, and these would be the representational patterns that we would like to capture. In this case, the pattern of interest is the rot present in an image; while identifying rot is a subjective matter, it is entirely possible to map this as an input feature to a model and have its representations captured. This model would be able to capture image segments on related images. To achieve this, the model typically uses a combination of Double Convolutional Neural Networks with a structure called skip-connections, which skips some of the connections in a neural network and feeds the output of one layer as input to later layers. Skip-connections greatly reduce the complexity of loss surfaces, making it easier for optimizers to reduce loss while ensuring that feature representations are reused (Li et al., 2017). The images for a sample image and prediction are shown below (areas in yellow are rotten areas of the meat as identified by the model). The algorithm used to achieve this was UNet (Ronneberger et al., 2015); it uses double convolutional layers to identify and extract features from the input image and uses skip connections to reuse these features in a related layer. The idea is that each feature set captured in a layer is captured in a layer connected by a skip connection and passed to the next layer to compute the representation segment. Since this task outputs a set of image patterns, the ideal outcome would be identifying the quality of the outputs in terms of the intersection and the overlap resulting from the predictions and the image masks. The loss functions capable of representing this effort are Dice loss and Jaccard loss, which broadly look at the ratio of the intersection to the union; concretely, both measure how well the model can segment the patterns of interest from a given input image. The extracted segmentation predictions were passed as input features to a DenseNet model and predictions were output based on the segments captured. The segment of interest in this case is the rot present in the image, and this was one-hot encoded when it was passed to the DenseNet model. The outputs from this model would be used to classify the image. This model, however, seems to show very poor performance in classification because the segments may not be fully interpreted by the model. While a larger model like ResNet learns several feature representations from an image with increasing complexity, this model learns only from the image segments captured and can only use this limited information for inference. As observed in the model performance, the recall is quite low, which means that the model is incorrectly classifying meat, but the precision is quite high, which implies that the model is very accurate in identifying the correct classes. The resulting misclassification cost is also quite high in this case because every incorrect prediction would result in a very large cost to the business and, as a result, this model was not used as the final model.
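The Dice and Jaccard losses mentioned above are typically implemented as soft, differentiable overlap ratios between predicted mask probabilities and the ground-truth mask; a sketch of one common formulation (not necessarily the exact variant used here):

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss: 1 - 2|X∩Y| / (|X| + |Y|), with pred holding probabilities in [0, 1]."""
    inter = (pred * target).sum(dim=(-2, -1))
    total = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return 1.0 - ((2.0 * inter + eps) / (total + eps)).mean()

def jaccard_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Jaccard (IoU) loss: 1 - |X∩Y| / |X∪Y|."""
    inter = (pred * target).sum(dim=(-2, -1))
    union = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1)) - inter
    return 1.0 - ((inter + eps) / (union + eps)).mean()
```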
## 7 Model Evaluation To evaluate the model performance, the primary metrics used were accuracy and Misclassification Cost (MCC). While accuracy is commonly understood as the model's predictive capability, another method of assessment would be MCC, which uses the underlying principles of the Expected Value Framework (EVF) to identify the cost to a company based on the predictions from this model. MCC can be interpreted as the amount of money lost by the business should the model misclassify an image. This would determine how the model can affect the business. The MCC is calculated based on an individual value resulting from the actual value vs the predictions as referenced in Tables 2 and 3, and cumulatively these values form a cost representing the amount of money lost by the business per misclassified image. This would prioritize the model development to ensure that the specific costly misclassification, which is predicting actual spoiled as fresh or half-fresh, is avoided while ensuring that the model has a high accuracy. The cumulative MCCs shown in Table 8 indicate the potential costs of using the model in one business day. It means that if the daily benefits the client would obtain with this technology, such as labour cost reduction, outweigh the cumulative MCC, the client could consider the introduction of the technology. \begin{table} \begin{tabular}{l c} \hline Hyperparameter & Value \\ \hline Batch Size & 8 \\ Epochs & 5 \\ Optimizer & AdamW \\ Learning Rate & 0.001 \\ Loss Criteria & Jaccard Loss \& Cross-Entropy \\ \hline \end{tabular} \end{table} Table 6: UNet and DenseNet Hyperparameters \begin{table} \begin{tabular}{l c c} \hline Accuracy & Precision & Recall \\ \hline 35.25\% & 100\% & 35.25\% \\ \hline \end{tabular} \end{table} Table 7: UNet with Dense Net Model Performance Figure 7: Sample Image and Prediction The ResNet 18-FT model has exceptionally higher precision and recall scores on test data compared to the other models, which also shows that it is able to largely generalize on the dataset; given that the dataset is small, it can identify the patterns correctly without compromising too much on the quality of the predictions. However, the ResNet 50-FE yielded the lowest MCC despite having lower accuracy, precision and recall compared to ResNet 18-FT. This means that ResNet 50-FE is the best model for this paper's business case. The reason why the MCC evaluation chose a lesser performing model as the most appropriate one for the business case lies in how the MCC matrix penalizes the mistakes of the models. Looking at the confusion matrix of ResNet 18-FT in Figure 8, it misclassified 10 spoiled meat images as half-fresh, resulting in an MCC cost of $4,998 on these mistakes alone. Contrast this with the ResNet 50-FE model in Figure 9, where it didn't misclassify any spoiled meat images but made most of its mistakes misclassifying half-fresh meat images. The ResNet 50-FE model did not incur any heavy cost in misclassifying spoiled meat, and all of its other mistakes incurred a cost of only $242. This result is in line with the business case where selling a customer spoiled meat will incur a very high cost, and consequently a model that misclassifies spoiled meat will incur a significant cost to the client. In summary, it is recommended that the model to be deployed in production is the ResNet 50-FE model, since it will yield the client the lowest possible cost when this model makes mistakes.
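The selection rule applied here, deploying the candidate whose mistakes are cheapest, reduces to a weighted sum of each model's confusion matrix against the per-pair cost matrix of Tables 2 and 3; a sketch with our own function names:

```python
import numpy as np

def cumulative_mcc(confusion: np.ndarray, cost_matrix: np.ndarray) -> float:
    """Total misclassification cost over a test set.

    confusion[i, j]  : number of images with actual class i predicted as class j
    cost_matrix[i, j]: per-image MCC for that (actual, predicted) pair
    """
    return float((confusion * cost_matrix).sum())

def pick_deployment_model(confusions: dict, cost_matrix: np.ndarray) -> str:
    """Choose the candidate with the lowest cumulative MCC, mirroring the ResNet 50-FE recommendation."""
    return min(confusions, key=lambda name: cumulative_mcc(confusions[name], cost_matrix))
```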
## 8 Interpretation SHAP (Lundberg and Lee, 2017) and LIME (Ribeiro et al., 2016) paradigms were used to understand how the model works and improve interpretability of the model, to identify what features or areas of the image the model uses to identify the class of a particular piece of meat. Results from SHAP were inconclusive and ambiguous; however, results of using LIME offered valuable insight into what the model sees and uses to perform classification. Some results from the LIME classification are given below. The images in the middle represent the super pixels or segments used as important features by the model to classify the image for a specific class, and the images on the right represent the probabilistic regions used by the model for classification, with regions in green indicating a higher probability that the model used those regions while regions in red indicate a lower probability for the same.

Figure 8: Confusion Matrix of 18FT
Figure 9: Confusion Matrix of 50FE
Figure 10: Matrix of MisClassification Cost (MCC)

\begin{table} \begin{tabular}{l c c} \hline \hline Model & Accuracy & MCC \\ \hline ResNet 18-FE & 82.70\% & \$886 \\ ResNet 18-FT & **93.13\%** & \$5,076 \\ ResNet 50-FE & 88.03\% & **\$242** \\ ResNet 50-FT & 84.70\% & \$316 \\ UNet & 35.25\% & \$89,411 \\ \hline \hline \end{tabular} \end{table} Table 8: Evaluation on MCC

It is observed that the model is able to identify the key regions of rot in the spoilt meat and use those regions for determining its classification. On the other hand, the model is also able to identify similar segments of freshness in fresh meat and classify those correctly as well. This suggests that the model developed is able to differentiate between spurious features in the image and pick out the important segments of the image that will help it in classification. A similar analysis of a few misclassified images (Figure 12) suggests the same. Though the model is unable to correctly classify these food items, it is still successful in identifying appropriate areas of the image which can serve as important input features in the final decision. It can hence be concluded that while the model does not yield 100% accuracy, its current decision-making is based on identifying valid areas of the image that represent fresh or spoiled meat, rather than using spurious areas such as portions of packaging or image background to determine the same.

Figure 11: Example of Lime Interpretations for each of three classes: Fresh (Top), Half-Fresh (Middle) and Spoiled (Bottom)
Figure 12: Example of Lime Interpretations for images that were misclassified
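The LIME analysis above can be reproduced with the lime package's image explainer; a minimal sketch in which the stub predict_fn stands in for the trained classifier and returns placeholder probabilities (all names and settings are illustrative only):

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images: np.ndarray) -> np.ndarray:
    # Placeholder: replace with the trained classifier; must return (n_samples, 3) class probabilities.
    return np.full((images.shape[0], 3), 1.0 / 3.0)

image = np.random.rand(416, 416, 3)               # stand-in for one meat image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=3, num_samples=200)
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=False, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)             # highlights the super-pixels driving the prediction
```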
## 9 Conclusion By utilizing this technology, clients would be able to enjoy three main benefits. The first is improved efficiency. By judging freshness based on the actual condition of the product rather than the number of days since processing, unnecessary discounts and waste can be avoided. Secondly, there is an improvement in reliability. Just because the number of days since processing is short does not necessarily mean that the product has not spoiled. This technology can identify truly spoiled products in real time to dispose of them. Avoiding the sale of defective products leads not only to maintaining customer trust but also to avoiding both monetary and non-monetary costs associated with consumers' health problems due to the sale and consumption of spoiled food, which is non-trivial. The third is contributing to a better corporate image. Recently, companies' contributions toward sustainability are being highlighted. If food waste can be reduced by this technology, it can be a marketing advantage and can contribute to an increase in corporate value. However, there are also barriers to overcome in order to create the above-mentioned value. If the applicable ingredients are limited (such as only meat), supermarkets will not be interested in this technology. To introduce it into actual operation, it would need to be applicable to all kinds of perishable foods. Also, freshness standards may differ depending on the weather at that time. To address these issues, a huge amount of training data and time, as well as new features to consider additional factors such as humidity levels, are needed. In addition, this model assumes that clear images are available for each slice of meat. In other words, it assumes that each product on display can be photographed one by one with adequate lighting and that the meat is not blocked by any packaging or other material. If a model is improved with cameras and object identification/image processing such that it can analyze the freshness of multiple products at the same time from a single image containing multiple products with packaging, the usability of this model will be further enhanced. Additionally, the interpretation portrayed using LIME can be used in multiple ways. At any given point, it can be used to generate a similar probability map and check which areas of rot on the meat the model is using to make its prediction. This can help understand if the model is 'seeing' the correct features. This utility can be further extended to monitor model performance, and track deterioration if the model starts using irrelevant or relatively unimportant areas of the image to make its classification. Any deterioration or change in the probability maps would signal a need to retrain the model or deploy new models. In this project, a supermarket is assumed as the client. However, this technology can also have other applications. For consumers, if a device that can detect food conditions can be installed in their refrigerators, they can reduce food expenses and waste at home. Finally, the utility and value of this model can be further enhanced by merging it with data-driven decisions such as maintaining inventory based on demand forecasting and other business analytics techniques to add multiple layers of safety in terms of food freshness and wastage.
2310.04956
Towards Explainable Machine Learning: The Effectiveness of Reservoir Computing in Wireless Receive Processing
Deep learning has seen a rapid adoption in a variety of wireless communications applications, including at the physical layer. While it has delivered impressive performance in tasks such as channel equalization and receive processing/symbol detection, it leaves much to be desired when it comes to explaining this superior performance. In this work, we investigate the specific task of channel equalization by applying a popular learning-based technique known as Reservoir Computing (RC), which has shown superior performance compared to conventional methods and other learning-based approaches. Specifically, we apply the echo state network (ESN) as a channel equalizer and provide a first principles-based signal processing understanding of its operation. With this groundwork, we incorporate the available domain knowledge in the form of the statistics of the wireless channel directly into the weights of the ESN model. This paves the way for optimized initialization of the ESN model weights, which are traditionally untrained and randomly initialized. Finally, we show the improvement in receive processing/symbol detection performance with this optimized initialization through simulations. This is a first step towards explainable machine learning (XML) and assigning practical model interpretability that can be utilized together with the available domain knowledge to improve performance and enhance detection reliability.
Shashank Jere, Karim Said, Lizhong Zheng, Lingjia Liu
2023-10-08T00:44:35Z
http://arxiv.org/abs/2310.04956v1
Towards Explainable Machine Learning: The Effectiveness of Reservoir Computing in Wireless Receive Processing ###### Abstract Deep learning has seen a rapid adoption in a variety of wireless communications applications, including at the physical layer. While it has delivered impressive performance in tasks such as channel equalization and receive processing/symbol detection, it leaves much to be desired when it comes to explaining this superior performance. In this work, we investigate the specific task of channel equalization by applying a popular learning-based technique known as Reservoir Computing (RC), which has shown superior performance compared to conventional methods and other learning-based approaches. Specifically, we apply the echo state network (ESN) as a channel equalizer and provide a first principles-based signal processing understanding of its operation. With this groundwork, we incorporate the available domain knowledge in the form of the statistics of the wireless channel directly into the weights of the ESN model. This paves the way for optimized initialization of the ESN model weights, which are traditionally untrained and randomly initialized. Finally, we show the improvement in receive processing/symbol detection performance with this optimized initialization through simulations. This is a first step towards explainable machine learning (XML) and assigning practical model interpretability that can be utilized together with the available domain knowledge to improve performance and enhance detection reliability. Deep learning, reservoir computing, echo state network, equalization, receive processing, symbol detection, model interpretability and explainable machine learning. ## I Introduction The rise of deep learning in recent times has been unprecedented, owing largely to its remarkable success in a wide range of applications. The wireless communications field has also seen active adoption of machine learning (ML) and neural network (NN) based techniques at a rapid pace in a variety of problems and will play a significant role in next-generation wireless networks [1]. The increasing complexity and modeling intractability of end-to-end wireless links caused by highly nonlinear radio frequency (RF) components and low-resolution analog-to-digital converters (ADCs) among others limits the applicability of traditional model-based approaches for most receive processing tasks such as channel equalization or receive symbol detection. While state-of-the-art deep learning practice is that of training a large NN model "offline" with a large dataset and then deploying it for inference, this approach may not be feasible in wireless communications, especially at the physical layer where the over-the-air (OTA) training data is extremely limited. Additionally, the choice of the NN model and its architecture may not be aligned with the nature of the specific problem at hand. On the other hand, reservoir computing (RC) [2] provides an "online learning" alternative whereby the model weights are adaptively updated through low-complexity training, making it ideal for application in tasks such as receive processing [3, 4, 5] and dynamic spectrum access [6, 7], demonstrating superior performance in comparison to conventional model-based methods and other offline learning approaches [8, 9]. Despite this empirical evidence however, a systematic analysis of the general effectiveness of RC-based methods in physical layer receive processing operations is largely missing in state-of-the-art. 
Although there exist generalization error characterizations of RC from a statistical learning theory perspective [10], these do not address model interpretability or suggest how to incorporate domain knowledge, if available, into the NN design. Our recent work [11] develops a first principles-based signal processing understanding of RC, specifically the echo state network (ESN), and provides basic interpretability to the conventionally untrained ESN model weights under the simple scenario of a two-tap fading channel. The primary contribution of this paper is a systematic analysis of the ESN as an equalizer in a general fading channel scenario with multiple taps, in addition to providing a clear procedure of incorporating available domain knowledge in the form of channel statistics directly into the ESN design. We also assign interpretability to the conventionally untrained ESN model weights, thus representing a significant stride towards explainable machine learning in RC-based approaches when applied to receive processing tasks. _Notation:_ \(\Re(\cdot)\) and \(\Im(\cdot)\) are the real part and imaginary part operators respectively. \(\mathbf{1}_{N}\) is the all-ones \(N\times N\) matrix. \((\cdot)^{*}\) is the complex conjugate operator. \(\nabla(\cdot)\) is the gradient operator. ## II System Model ### _Wireless Channel_ Consider a wireless channel with the discrete-time impulse response \(\mathbf{h}=[h_{0},h_{1},\ldots,h_{L-1}]^{T}\in\mathbb{C}^{L}\). The system response can be written in terms of the \(z\)-transform as \(H_{\mathrm{ch}}(z)=\sum_{\ell=0}^{L-1}h_{\ell}z^{-\ell}\). Then, the frequency response of the channel,
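As a concrete point of reference for the ESN equalizer discussed throughout, the following is a generic NumPy sketch of an echo state network with a ridge-regression readout recovering BPSK symbols sent over an illustrative three-tap channel; the reservoir size, weight scaling and channel taps are arbitrary illustrative choices and not the configuration analyzed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative L-tap channel and BPSK training symbols (not the paper's setup).
h = np.array([0.9, 0.4, 0.2])                         # impulse response h_0, ..., h_{L-1}
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, h)[: len(symbols)]
received += 0.05 * rng.standard_normal(len(symbols))  # additive noise

# Minimal echo state network: fixed random weights, only the linear readout is trained.
N = 100                                               # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1 (echo state property)

states = np.zeros((len(received), N))
x = np.zeros(N)
for t, u in enumerate(received):
    x = np.tanh(W @ x + W_in * u)                     # state update x[t] = tanh(W x[t-1] + W_in u[t])
    states[t] = x

# Ridge-regression readout mapping reservoir states to transmitted symbols.
lam = 1e-3
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ symbols)
detected = np.sign(states @ W_out)
print("training symbol error rate:", float(np.mean(detected != symbols)))
```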
2304.09453
Network Pruning Spaces
Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop. This work focuses on filter pruning which enables accelerated inference with any off-the-shelf deep learning library and hardware. We propose the concept of \emph{network pruning spaces} that parametrize populations of subnetwork architectures. Based on this concept, we explore the structure aspect of subnetworks that result in minimal loss of accuracy in different pruning regimes and arrive at a series of observations by comparing subnetwork distributions. We conjecture through empirical studies that there exists an optimal FLOPs-to-parameter-bucket ratio related to the design of original network in a pruning regime. Statistically, the structure of a winning subnetwork guarantees an approximately optimal ratio in this regime. Upon our conjectures, we further refine the initial pruning space to reduce the cost of searching a good subnetwork architecture. Our experimental results on ImageNet show that the subnetwork we found is superior to those from the state-of-the-art pruning methods under comparable FLOPs.
Xuanyu He, Yu-I Yang, Ran Song, Jiachen Pu, Conggang Hu, Feijun Jiang, Wei Zhang, Huanghao Ding
2023-04-19T06:52:05Z
http://arxiv.org/abs/2304.09453v1
# Network Pruning Spaces ###### Abstract Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop. This work focuses on filter pruning which enables accelerated inference with any off-the-shelf deep learning library and hardware. We propose the concept of _network pruning spaces_ that parametrize populations of subnetwork architectures. Based on this concept, we explore the structure aspect of subnetworks that result in minimal loss of accuracy in different pruning regimes and arrive at a series of observations by comparing subnetwork distributions. We conjecture through empirical studies that there exists an optimal FLOPs-to-parameter-bucket ratio related to the design of original network in a pruning regime. Statistically, the structure of a winning subnetwork guarantees an approximately optimal ratio in this regime. Upon our conjectures, we further refine the initial pruning space to reduce the cost of searching a good subnetwork architecture. Our experimental results on ImageNet show that the subnetwork we found is superior to those from the state-of-the-art pruning methods under comparable FLOPs. ## 1 Introduction Large neural networks are usually preferred as they exhibit better generalization capability than small ones. In many model families such as ResNets (He et al., 2016) and MobileNets (Howard et al., 2017), large networks consistently achieve higher accuracy and are superior to small ones. Although large networks have revolutionized many fields such as computer vision and natural language processing, they suffer from significant inference costs in practice, especially when used with embedded sensors or mobile devices. For many practical applications, computational efficiency and small network sizes are crucial factors in addition to performance. Recent works on network pruning reveal that most state-of-the-art models are over-parameterized (Li et al., 2017; Frankle and Carbin, 2019). We can remove weights (Frankle and Carbin, 2019), filters (Li et al., 2017; Luo et al., 2017) and other structures (Meng et al., 2020) from large networks without inducing significant performance drop by network pruning techniques. Such strategies reduce the resource demands of neural network inference, including storage requirements, energy consumption and latency, and thus increase runtime efficiency. This work focuses on filter pruning, which prunes the original network to a slimmer subnetwork. Unlike weight pruning approaches that lead to irregular sparsity patterns and require specialized libraries or hardware for computational speedups, filter pruning enables accelerated inference with any off-the-shelf deep learning library and hardware. In the literature, a well-established paradigm is to train a large network to completions, prune the network according to some heuristics and retrain the pruned subnetwork to recover the accuracy loss (Renda et al., 2020). Despite the effectiveness of this _prune-then-retrain_ paradigm, existing filter pruning methods have one major limitation: their outcome is merely a single pruning recipe tuned to a specific setting1 and may fail to generalize to new settings. For example, we find that some filter pruning methods cannot produce subnetworks that outperform a regular subnetwork with an uniform pruning ratio in some extreme pruning regimes where we reduce more than \(90\%\) FLOPs of a network. 
Moreover, the prune-then-retrain paradigm does not deepen our understanding about the reason why these recipes produced by pruning approaches work well in specific regimes. Footnote 1: In this work, we refer to the combination of pruning ratios for the layers in a network as _pruning recipe_. Instead of producing the best single pruning recipe like existing works (Li et al., 2017; He et al., 2018; Li et al., 2020), we explore the general principles that can help us understand and refine pruning algorithms. Inspired by Radosavovic et al. (2019, 2020), we present _network pruning spaces_ in this work. Given a network, its network pruning space is a large population of subnetwork architectures produced by pruning recipes. This is different from architecture search spaces (Zoph & Le, 2017; Tan & Le, 2019) and network design spaces (Radosavovic et al., 2019, 2020), which aim to produce model families with good generalization from scratch. Specifically, we start with a well-designed network architecture and sample subnetworks from its pruning space, which gives rise to a subnetwork distribution. Based on this distribution, we present a tool to analyze the pruning space. With a constraint on pruning recipes, we divide the initial pruning space into several subspaces to explore the structure aspect of winning subnetworks and reduce the cost of searching such a subnetwork under different settings. The implementation is still filter pruning in common, but elevated to the population level and guided via distribution estimates following Radosavovic et al. (2019). In our empirical studies, the core observations on network pruning spaces are simple but interesting: (_a_) There exists an optimal FLOPs-to-parameter-bucket ratio in a pruning regime; (_b_) Subnetworks with the optimal FLOPs-to-parameter-bucket ratios will be winning ones; (_c_) The limitation of performance in a pruning regime is predictable by a function of the optimal FLOPs-to-parameter-bucket ratio. With these observations, we return to examine some subnetworks produced by existing pruning methods (Luo et al., 2017; Li et al., 2017). We find that these subnetworks with good performance match our observations (_a_ and _b_). Moreover, our Observation \(c\) is consistent with Rosenfeld et al. (2021) in weight pruning, suggesting that we are able to use an empirical function to make trade-offs between efficiency and accuracy when pruning filters in a network. The contributions of this work are threefold: * We present the new concept of network pruning spaces that parametrize populations of subnetwork architectures in the field of filter pruning. With this concept, we empirically study the structure aspect of winning subnetworks in different pruning regimes and explore general pruning principles. * Based on our experimental results, we make a series of conjectures that help us understand filter pruning principles. With these conjectures, we refine the initial pruning space with a constraint on FLOPs-to-parameter-bucket ratio to reduce the cost of searching a winning subnetwork. * We analytically show that the limitation of performance in a pruning regime is predictable, suggesting that we can build an empirical tool for reasoning trade-offs between efficiency and accuracy in filter pruning. ## 2 Revisit filter pruning In this section, we revisit existing techniques in filter pruning and introduce an efficient setting for further exploration. 
Generally, a pruning method involves instantiating each of the steps in the prune-then-retrain paradigm from a range of choices. The combination of such choices has a major impact on the final performance (Renda et al., 2020). The combinations of pruning implementations vary from each other in accuracy and efficiency. Since we aim to explore general pruning principles that can interpret the behaviors of filter pruning, a qualified and efficient setting is needed. In the following, we assess different pruning implementations and finally introduce a standard setting for conducting empirical studies. Our main experiments use ResNet-50 (He et al., 2016) on CIFAR-10 (Krizhevsky, 2009). The input images are resized to \(224\times 224\) such that we only modify the last fully-connected layer in ResNet-50. ### Pruning filters We briefly describe the procedure of pruning filters on the convolutional layers. Extending it to other layers is straightforward. Let \(c_{i}\) denote the number of input channels for the \(i\)th convolutional layer in a network and \(h_{i}/w_{i}\) be the height/width of the input feature maps. The convolutional layer transforms the input feature maps \(\mathbf{x}_{i}\in\mathbb{R}^{c_{i}\times h_{i}\times w_{i}}\) into the output feature maps \(\mathbf{x}_{i+1}\in\mathbb{R}^{c_{i+1}\times h_{i+1}\times w_{i+1}}\), which are used as input feature maps for the next convolutional layer, by applying \(c_{i+1}\) filters on the \(c_{i}\) input channels. Each filter \(\mathbf{F}_{i,j}\in\mathbb{R}^{c_{i}\times k\times k}\) is composed by \(c_{i}\) 2D kernels \(\mathbf{K}\in\mathbb{R}^{k\times k}\) and generates one feature map \(\mathbf{x}_{i+1,j}\in\mathbb{R}^{1\times h_{i+1}\times w_{i+1}}\) on top of \(\mathbf{x}_{i}\in\mathbb{R}^{c_{i}\times h_{i}\times w_{i}}\). All the filters constitute the convolutional kernel \(\mathbf{F}_{i}\in\mathbb{R}^{c_{i}\times c_{i+1}\times k\times k}\). When \(m\) filters are pruned with a ratio \(r_{i}=m/c_{i+1}\), \(m\) corresponding feature maps are removed. It reduces \(m/c_{i+1}\) of the computational cost for both layers \(i\) and \(i+1\). **Filter importance.** There exist a wide variety of pruning heuristics for evaluating the importance of filters. Figure 1 (left) shows accuracy drop curves of some pruning heuristics when we prune more filters. The difference between these pruning heuristics is small and they have the same trend when pruning filters. We adopt \(\ell_{2}\) norm pruning heuristic in this work because of its simplicity. **One-shot pruning.** We prune the network to a target size at once. In contrast, the prune-retrain circle can be repeated until a target network size is reached, which is known as iterative pruning. Renda et al. (2020) and our experiments (not shown) suggest that iterative pruning is slightly superior to one-shot pruning in high-pruning-ratio regimes; however, it takes much more time on retraining (about \(5\times\)). Such an accuracy gain can be ignored considering that one-shot pruning saves a lot of time, especially when we need to study a large population of pruned subnetworks in this work. ### Retraining subnetworks When the filters of the large network are removed after pruning, accuracy significantly decreases. It is standard to retrain the pruned subnetwork with the remaining weights to recover the original accuracy. We discuss and evaluate conventional retraining techniques in the literature. 
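As a concrete reference for the \(\ell_{2}\)-norm, one-shot filter selection adopted in this section, a short PyTorch sketch (our own illustration, using PyTorch's (c_out, c_in, k, k) weight layout rather than the notation above, and not the authors' released code):

```python
import torch

def l2_filter_scores(conv_weight: torch.Tensor) -> torch.Tensor:
    """conv_weight has shape (c_out, c_in, k, k); returns one l2 norm per output filter."""
    return conv_weight.flatten(start_dim=1).norm(p=2, dim=1)

def filters_to_keep(conv_weight: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Indices of filters kept after one-shot pruning with ratio r = m / c_out."""
    scores = l2_filter_scores(conv_weight)
    n_keep = max(1, int(round(scores.numel() * (1.0 - prune_ratio))))
    return torch.topk(scores, n_keep).indices.sort().values

# Example: prune half of the filters of a 64-filter convolution.
w = torch.randn(64, 32, 3, 3)
kept = filters_to_keep(w, prune_ratio=0.5)
pruned_w = w[kept]   # the matching input channels of the following layer shrink as well
```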
Our experiments prune the network with different uniform pruning ratios and retrain pruned subnetworks using existing techniques. **Fine-tuning.** The first common retraining technique is fine-tuning, which retrains pruned subnetworks for a specified number of epochs \(t\) using a small learning rate \(lr\)(Molchanov et al., 2019; Renda et al., 2020). In this work, we use a cosine annealing learning rate schedule (Loshchilov and Hutter, 2017) with an initial \(lr=0.01\) and empirically study the influence of \(t\) in the field of filter pruning. As shown in Figure 1 (center), in low-pruning-ratio regimes, it is sufficient to fine-tune pruned subnetworks for a few epochs (_e.g._, 50 epochs under an uniform pruning ratio of 0.25). In high-pruning-ratio regimes, where more than \(90\%\) parameters are removed after pruning, subnetworks require more epochs for fine-tuning to recover the original accuracy. **Learning rate rewinding.** Renda et al. (2020) introduce a retraining technique, namely learning rate rewinding, which retrains the pruned networks using the original schedule from last \(t\) epochs of training. During retraining, the learning rate in rewinding is always larger than that used in fine-tuning. In this work, we adopt a slightly different rewinding implementation, which starts from the initial learning rate \(lr\) in training and retrains pruned subnetworks for \(t\) epochs with warming up for 5 epochs. Our implementation is similar to learning rate restarting (Le and Hua, 2021) with a cosine annealing schedule in practice.2 Figure 1 (right) shows the experimental results in a high-pruning-ratio regime. Comparison between rewinding and fine-tuning shows that their performance is almost Figure 1: Pruning filters of ResNet-50 on CIFAR-10. (_Left_) Pruning filters of ResNet-50 based on different pruning heuristics, including \(\ell_{1}\) norm, \(\ell_{2}\) norm and FPGM (He et al., 2019). (_Center_) Experiments on fine-tuning epochs with different pruning ratios. (_Right_) Comparison of fine-tuning, rewinding and retraining from scratch with an uniform pruning ratio of \(0.75\). the same after retraining 200 epochs. However, in a low-epoch retraining regime, fine-tuning is more efficient than rewinding for filter pruning. **From scratch.** We also conduct experiments that retrain pruned subnetworks from scratch, where the parameters of a subnetwork are re-initialized after pruning. The original learning rate schedule is adopted during retraining. As shown in Figure 1 (right, green line), retraining from scratch requires more epochs to reduce accuracy drop compared to fine-tuning and rewinding. When retrained for enough epochs (_e.g._, 600 or 800 epochs), retraining from scratch is able to produce comparable performance, which is consistent with Liu et al. (2019).3 Moreover, our experimental results suggest that it is unfair to compare retraining techniques under different epochs, since all of them benefit from more epochs. Footnote 3: In our experiments, the others outperform retraining from scratch with a less accuracy drop of 0.3. **Discussion.** In this subsection, we empirically study existing retraining techniques, including _fine-tuning_, _rewinding_ and _retraining from scratch_. Our experimental results show that these retraining techniques can achieve similar performance if we retrain pruned subnetworks with enough epochs. 
However, fine-tuning for a few epochs is a more efficient choice among them as our focus is not on recovering the original accuracy as much as possible in the following. Finally, we introduce a standard setting for further exploration. Given a pruning recipe, the standard setting prunes filters by their \(\ell_{2}\) norms in a one-shot manner, and fine-tunes pruned subnetworks with \(lr=0.01\) for \(t\) epochs. ## 3 Network pruning spaces In this section, we introduce _network pruning spaces_, which are inspired by the concept of network design spaces (Radosavovic et al., 2019). Given a trained network \(f_{\theta}\), a network pruning space \(\mathbb{S}\) comprises a large population of subnetworks \(\{f_{\theta_{\text{obs}}}\}\) pruned from \(f_{\theta}\). The initial pruning space \(\mathbb{S}\) is loosely defined by a constraint on FLOPs. We refer to the constraint on FLOPs as \(c_{\text{flops}}\), which is a relative value calculated by \(c_{\text{flops}}=\text{FLOPs}(f_{\theta_{\text{obs}}})/\text{FLOPs}(f_{\theta})\) in this work. In practice, we use a range \(c_{\text{flops}}\pm\delta\) (\(\delta=0.002\)) for efficiency, as it is difficult to search recipes with a fixed constraint. Based on the specified pruning heuristics (_e.g._, our standard setting), a subnetwork \(f_{\theta_{\text{obs}}}\) in \(\mathbb{S}\) is generated by a pruning recipe, which consists of pruning ratios \(\mathbf{r}=\{r_{1},r_{2},\dots,r_{N}\}\) for \(N\) layers. We sample a number of pruning recipes from \(\mathbb{S}\), giving rise to a subnetwork distribution, and analyze the pruning space. **Tools for pruning spaces.** We adopt accuracy drop empirical distribution functions (EDFs) Radosavovic et al. (2019); Radosavovic et al. (2020) to analyze the pruning space. Given a set of \(n\) subnetworks with accuracy drops \(\{e_{i}\}\), the accuracy drop EDF is given by: \[F(e)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}[e_{i}<e], \tag{1}\] where \(\mathbf{1}\) is the indicator function. \(F(e)\) represents the fraction of subnetworks with accuracy drop less than \(e\) after retraining. ### The Std pruning spaces We start with the initial pruning space \(\mathbb{S}\). Any recipe generation strategy, such as evolutionary algorithm and reinforcement learning (He et al., 2018), can be used when sampling subnetworks. Random sampling is the most straightforward one, which randomly generates \(N\) real numbers from a given range \([0,R]\) as a pruning recipe \(\mathbf{r}\)(Li et al., 2020).4 Footnote 4: \(R\) is the largest pruning ratio applied to a layer. Although a random recipe that meets the requirement \(c_{\text{flops}}\) can be a valid one, we observe an interesting problem when sampling subnetworks from \(\mathbb{S}\) following Li et al. (2020). This random sampling process is not effective enough, taking many times for a valid recipe, as we have a compact constraint (_i.e._, small \(\delta\)). It is much easier to collect enough recipes when we start with an uniform pruning ratio and modify \(\{r_{i}\}\) with \(N\) small random numbers. Therefore, we take a refinement step and come to STD pruning spaces. We additionally use a constraint \(c_{\text{sd}}\) on the standard deviation of the recipe \(\mathbf{r}\) and refer to the pruning spaces with this constraint as STD pruning spaces. By controlling the standard deviation of \(\mathbf{r}\), we divide the initial space \(\mathbb{S}\) into STD subspaces. 
In this work, we mainly study three STD pruning spaces with \(c_{\text{sd}}=\{0.1,0.05,0.01\}\), denoted as STD-0.1, STD-0.05 and STD-0.01 spaces respectively. We focus on exploring the structure aspect of winning subnetworks that have the best performance after retraining. With our baseline setting, we sample \(n=100\) subnetworks from STD pruning spaces and fine-tune these subnetworks for \(t=50\) epochs. Although better performance can be Figure 2: STD pruning spaces under \(c_{\text{tops}}=\{0.25,0.1,0.05,0.02\}\). In the figure, we study three STD pruning spaces, including STD-0.1, STD-0.05 and STD-0.01 spaces. _(Row 1)_ Accuracy drop EDFs on STD spaces. _(Row 2)_ Standard deviation of \(\mathbf{r}\) distribution. _(Row 3)_ Remaining FLOPs distribution. _(Row 4)_ Remaining parameters distribution. _(Row 5)_ Mean computation budget distribution. _(Row 6)_ The winning pruning recipe on each STD pruning space. Note that the scales of accuracy drop under different \(c_{\text{tops}}\) varies. achieved when we retrain subnetworks for more epochs, we argue that fine-tuning for 50 epochs is more efficient for most settings. **Medium-pruning-ratio regime.** Figure 2a (Row 1) illustrates accuracy drop EDFs for three STD pruning spaces in a medium-pruning-ratio regime. As a baseline, the pruned subnetwork with an uniform pruning ratio of 0.5 reduces \(74.1\%\) FLOPs (corresponds to \(c_{\text{flops}}=0.259\)) from the original network and has an accuracy drop of \(0.3\%\) after fine-tuning for 50 epochs. On each STD space, we can easily find many subnetworks (about a fraction of \(60\%\)) that outperform the baseline subnetwork. The winning subnetworks in this regime have significantly different recipes (Row 6) and all of them lead to pruned subnetworks without any accuracy drop.5 From accuracy drop EDFs and standard deviation of \(\mathbf{r}\) (Row 2), the cost to find such a good subnetwork on each STD space is similar, suggesting that there is no difference between STD spaces in medium-pruning-ratio regimes. Footnote 5: A small loss of 0.02% is ingored here, as better performance can be achieved when retrained for more epochs. **High-pruning-ratios regime.** Next, we present STD spaces in high-pruning-ratio regimes, where more than 90% FLOPs are reduced after pruning. In Figure 2b, 2c and 2d (Row 1) we find that the accuracy drop EDFs for STD spaces progressively differ from each other. From \(c_{\text{flops}}=0.1\) to \(c_{\text{flops}}=0.02\), STD-0.01 spaces become higher than STD-0.05 and STD-0.1 spaces consistently. In other words, STD-0.01 space has a higher fraction of better subnetworks at every accuracy drop threshold when we prune more filters. This clear qualitative difference suggests that the cost to find a good subnetwork on STD-0.01 spaces is less than the costs on others. Moreover, under an extreme constraint such as \(c_{\text{flops}}=0.02\), we can easily find many subnetworks on STD-0.01 space that outperform the winning one on STD-0.1 space. There might exist a winning subnetwork on STD-0.1 space that is comparable to the best subnetworks on others, as we only sample \(n=100\) recipes in the experiments. However, it does not alter our conclusion that STD-0.01 space is a more efficient network pruning space across pruning regimes, especially in high-pruning-ratio regimes. According to our experimental results, in these pruning regimes, there seems to be no shared pattern from the perspective of winning subnetwork structures. 
However, we observed an interesting trend that good pruning recipes always have a small standard deviation, although those with a large standard deviation can also produce good subnetworks with a small probability. Upon this discovery, one might ask: _What makes these recipes with a small standard deviation outperform others?_ **Distribution comparison.** In Figure 2 we present FLOPs distribution (Row 3) and parameter bucket distribution (Row 4). A pattern emerges: the amount of remaining parameters of subnetworks on STD-0.01 spaces is much closer to a certain number than those on other STD spaces in each pruning regime. Since we produce pruning recipes \(\mathbf{r}\) with a constraint on FLOPs, we have the following conjecture: Figure 3: We present winning subnetworks on all STD spaces with \(c_{\text{flops}}\) and \(c_{\text{params}}\) respectively. (_Left_) Parameters distribution of winning subnetworks under \(c_{\text{flops}}=\{0.25,0.1,0.05,0.02\}\). (_Right_) FLOPs distribution of winning subnetworks under \(c_{\text{params}}=\{0.25,0.1,0.05,0.02\}\). **Conjecture 1**.: _(**a) Given a target FLOPs reduction, there exists an optimal parameter bucket for pruning filters; (**b)** Subnetworks that have the optimal parameter bucket will be the winning ones on \(\mathbb{S}\); (**c**) A good structure of pruned subnetwork guarantees an approximately optimal parameter bucket under its FLOPs._ With Conjecture 1, we present the best 40 subnetworks on all STD spaces under each \(c_{\text{flops}}\) in Figure 3 (left). Note that the 40 winning subnetworks have different standard deviations of \(\mathbf{r}\), which empirically proves our Conjecture 1c. **Constraint on parameter bucket.** Consider a simple question: _What kind of subnetworks will be winning ones under a constraint on parameter bucket?_ Next, we present STD pruning spaces with a constraint on parameter bucket \(c_{\text{params}}\) in Figure 4. We observe a consistent trend that STD-0.01 space is higher than others in high-pruning-ratio regimes (_e.g._, \(c_{\text{params}}=\{0.05,0.02\}\)). Similarly, in Figure 3 (right), we present the best 40 subnetworks on all STD spaces under each \(c_{\text{params}}\) and have the following conjecture: **Conjecture 2**.: _(**a) Given a target parameter bucket reduction, there exists an optimal FLOPs for pruning filters; (**b)** Subnetworks that have the optimal FLOPs will be winning ones on \(\mathbb{S}\); (**c**) A good structure of pruned subnetwork guarantees an approximately optimal FLOPs under its parameter bucket._ Based on Conjectures 1 and 2, we believe that the winning subnetworks in different pruning regimes have optimal configurations of _(FLOPs, parameter bucket)_, which are achieved by their pruning recipes. To further reduce the degrees of freedom, we introduce a tool that represents the FLOPs-to-parameter-bucket ratio compared to the original network, namely _mean computation budget_ (mCB): \[\text{mCB}=\frac{\text{FLOPs}(f_{\theta_{\text{sub}}})/\text{Params}(f_{ \theta_{\text{sub}}})}{\text{FLOPs}(f_{\theta})/\text{Params}(f_{\theta})}. \tag{2}\] In Figure 2 (Row 5) and Figure 4 (Row 4) we present mCB distribution of subnetworks with \(c_{\text{flops}}\) and \(c_{\text{params}}\) respectively. 
With this tool, we combine Conjectures 1 and 2 into one and have the following conjecture: **Conjecture 3**.: _**(a)** In a pruning regime, there exists an optimal mean computation budget for pruning filters; **(b)** subnetworks that have the optimal mean computation budget will be the winning ones on \(\mathbb{S}\); **(c)** a good structure of pruned subnetwork guarantees an approximately optimal mean computation budget in this pruning regime._ Taking the design of the original network as the optimal configuration of _(FLOPs, parameter bucket)_, we argue that the optimal mCBs in many pruning regimes are roughly equal to 1.0. In Figure 5 (right), we present the mCB distribution of winning subnetworks as the FLOPs reduction increases. Statistically, our experimental results show that the optimal mCB for a pruning regime is better described as a range than as a single value, and that this range is around 1.0 in most pruning regimes. Moreover, the optimal mCB tends to increase when we reduce more FLOPs by pruning. We draw a fitted curve of the optimal mCB in Figure 5 (right, red dashed line); the curve rises significantly when the reduction is above \(90\%\). We also draw a curve (orange dashed line) presenting the mCB of subnetworks with uniform pruning ratios. These two curves are close in most regimes, suggesting that a uniform recipe is a good starting point when we search for winning pruning recipes. That might be the reason why uniform pruning ratios outperform some pruning methods in our extreme-pruning experiments. In Figure 3, we observed that the optimal parameter bucket, the optimal FLOPs and the final performance of winning subnetworks seemed to be predictable. This observation is consistent with Rosenfeld et al. (2021), in which a scaling law is developed to estimate the error after weight pruning. Therefore, we have another conjecture: **Conjecture 4**.: _The performance limit in each pruning regime can be predicted by a functional form.6_ Footnote 6: The performance limits and the curves in Figure 3 can be further refined, as we only retrain subnetworks for 50 epochs. In Figure 5 (left), we present the relationship between the mCBs of winning subnetworks and their accuracy drops. We observed that the performance limit could be predicted by a function of the optimal mCB in a pruning regime. Since the goal of network pruning is to increase efficiency while maintaining accuracy, such an empirical function helps a lot when one is making a trade-off. Moreover, we believe that one might find a winning subnetwork without any accuracy drop in many pruning regimes where the optimal mCB ranges are around 1.0.7 Footnote 7: We argue that the limit is related to the dataset size. Our experiments on ImageNet fail to find such a subnetwork without any accuracy drop when we reduce more than 75% of FLOPs. ## 4 ResNet-50 on ImageNet In this section, we prune ResNet-50 on ImageNet (Russakovsky et al., 2015) with \(c_{\text{flops}}=0.5\) and \(c_{\text{flops}}=0.25\). Since ImageNet is a large-scale dataset on which one epoch takes much longer than on CIFAR-10, we first generate pruning recipes and then fine-tune subnetwork candidates for 5 epochs. The top-5 ranking candidates are then retrained with a complete schedule; a minimal sketch of this two-stage selection is given below. Specifically, we fine-tune subnetworks with an initial learning rate \(lr=0.01\) for 100 epochs on ImageNet.
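The two-stage selection just described can be summarized by the following sketch (illustrative only; `sample_recipe` and `finetune_and_score` are hypothetical placeholders standing in for the actual recipe generator and training pipeline, mocked here so that the snippet runs):

```python
import random

def sample_recipe(num_stages=16):
    """Hypothetical stand-in: draw one pruning recipe r (per-stage pruning ratios)."""
    return [round(random.uniform(0.3, 0.9), 2) for _ in range(num_stages)]

def finetune_and_score(recipe, epochs):
    """Hypothetical stand-in: prune with `recipe`, fine-tune for `epochs` epochs,
    and return the resulting top-1 accuracy (mocked with a random number)."""
    return random.uniform(70.0, 77.0)

candidates = [sample_recipe() for _ in range(300)]               # n = 300 candidates
ranked = sorted(candidates, key=lambda r: finetune_and_score(r, epochs=5), reverse=True)
top5 = ranked[:5]                                                # cheap 5-epoch ranking
final = [(finetune_and_score(r, epochs=100), r) for r in top5]   # complete schedule
best_accuracy, best_recipe = max(final, key=lambda t: t[0])
```

Only the five short-listed recipes incur the cost of the full retraining schedule.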
Figure 4: STD pruning spaces under \(c_{\text{params}}=\{0.25,0.1,0.05,0.02\}\). In the figure, we study three STD pruning spaces, including STD-0.1, STD-0.05 and STD-0.01 spaces with a constraint on parameter bucket. (_Row 1_) Accuracy drop EDFs on STD spaces. (_Row 2_) Remaining FLOPs distribution. (_Row 3_) Remaining parameters distribution. (_Row 4_) Mean computation budget distribution. (_Row 5_) The winning pruning recipe on each STD pruning space. In the experiments, we produce pruning recipes with a refined pruning space based on the proposed Conjecture 3. For efficiency, we empirically use a constraint on mCB, \(c_{\text{mCB}}=1.0\pm 0.1\), instead of a constraint on the standard deviation of recipes \(\mathbf{r}\). This is because all STD spaces have a similar distribution in such a pruning regime, according to our analysis for CIFAR-10. Note that the expected parameter bucket range is determined by \(c_{\text{mCB}}\) once a target FLOPs reduction is set. We generate \(n=300\) subnetwork candidates and finally keep the top-5 ranking candidates. In Table 1, we compare the winning subnetwork to state-of-the-art pruning methods. The comparison shows that our analysis of network pruning spaces guides us to better pruning results. ## 5 Conclusion and discussion In this work, we introduce network pruning spaces for exploring general pruning principles. Inspired by (Radosavovic et al., 2019, 2020), we focus on the structure aspect of subnetwork architectures by comparing populations of subnetworks, instead of producing the best single pruning recipe. Although our empirical studies do not lead to a common pattern from the perspective of architecture, we observe that there exists an optimal mean computation budget in a pruning regime. Moreover, our observations suggest that the performance limits in different pruning regimes might be predictable by a function of the optimal mean computation budget. Our work empirically provides insight into the existence of such a functional form that approximates the accuracy drops and char \begin{table} \begin{tabular}{c l c c c} \hline \hline FLOPs zone & method & FLOPs (G) & Top-1 (\%) & Top-5 (\%) \\ \hline - & ResNet-50 (He et al., 2016)\({}^{\dagger}\) & 4.12 & 76.88 & 93.44 \\ \hline \multirow{6}{*}{2G} & ThiNet-50 (Luo et al., 2017) & 2.10 & 74.70 & 90.02 \\ & EagleEye (Li et al., 2020) & 2.00 & 76.40 & 92.89 \\ & MetaPruning (Liu et al., 2019) & 2.00 & 75.40 & - \\ & AutoSlim (Yu \& Huang, 2019) & 1.00 & 75.60 & - \\ & HRank (Lin et al., 2020) & 2.30 & 74.98 & 92.33 \\ & FPGM (He et al., 2019) & 1.92 & 74.83 & 92.32 \\ & **Ours** (\(c_{\text{flops}}=0.5\)) & 2.06 & 76.90 & 93.55 \\ \hline \multirow{6}{*}{1G} & ThiNet-30 (Luo et al., 2017) & 1.20 & 72.10 & 88.30 \\ & EagleEye (Li et al., 2020) & 1.00 & 74.20 & 91.77 \\ \cline{1-1} & MetaPruning (Liu et al., 2019) & 1.00 & 73.40 & - \\ \cline{1-1} & AutoSlim (Yu \& Huang, 2019) & 1.00 & 74.00 & - \\ \cline{1-1} & HRank (Lin et al., 2020) & 1.55 & 71.98 & 91.01 \\ \cline{1-1} & **Ours** (\(c_{\text{flops}}=0.25\)) & 1.03 & 74.96 & 92.51 \\ \hline \hline \end{tabular} \end{table} Table 1: Pruning ResNet-50 on ImageNet. \(\dagger\) denotes our re-implementation. Figure 5: (_Left_) Sort winning subnetworks under different constraints (\(c_{\text{flops}}\) and \(c_{\text{params}}\)) by their accuracy drops. Each one is the best on all STD spaces under a constraint. (_Right_) We present the best 40 subnetworks on all STD spaces under each constraint.
2305.06449
A crystallization result in two dimensions for a soft disc affine potential
We prove finite crystallization for particles in the plane interacting through a soft disc potential, as originally shown by C. Radin \cite{Radin_soft}. We give an alternative proof that relies on the geometric decomposition of the energy proved in \cite{DLF1}, and that is based on showing that any minimizer has at least as many boundary points as the canonical ``spiral'' configuration.
Giacomo Del Nin, Lucia De Luca
2023-05-10T20:37:00Z
http://arxiv.org/abs/2305.06449v1
# A crystallization result in two dimensions for a soft disc affine potential ###### Abstract. We prove finite crystallization for particles in the plane interacting through a soft disc potential, as originally shown by C. Radin [9]. We give an alternative proof that relies on the geometric decomposition of the energy proved in [6], and that is based on showing that any minimizer has at least as many boundary points as the canonical "spiral" configuration. Keywords: Crystallization; collective behavior; graph theory; soft disc; variational methods. AMS subject classifications: 70C20, 05C10, 49J45, 82D25. ###### Contents * 1 Introduction * 2 Preliminaries on planar graphs * 3 The soft disc model ## 1. Introduction This paper deals with finite crystallization in two dimensions for a soft disc affine pairwise interaction potential at zero temperature. Following a nowadays standard approach, we look at crystallization as a phenomenon emerging from the minimization of suitable energy functionals (see [2] for a recent review on the crystallization conjecture). Specifically, we consider an interaction potential of the form \[\mathscr{V}^{\delta}(r):=\left\{\begin{array}{ll}+\infty&\text{if }r<1\,, \\ -1+\frac{r-1}{\delta}&\text{if }1\leq r\leq 1+\delta\,,\\ 0&\text{if }r>1+\delta\,,\end{array}\right. \tag{1.1}\] with \[0<\delta<\frac{1}{2\sin\frac{\pi}{7}}-1\,. \tag{1.2}\] Given a finite set \(\mathsf{X}\subset\mathbb{R}^{2}\,,\) representing the positions of a system of particles, we define the associated energy as \[\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X}):=\frac{1}{2}\sum_{x\in\mathsf{ X}}\sum_{x^{\prime}\in\mathsf{X}\setminus\{x\}}\mathscr{V}^{\delta}(|x-x^{ \prime}|)\,. \tag{1.3}\] Our proof relies on the geometric decomposition of the energy established in [6] and uses Lemma 3.2 to show that any configuration \(\mathsf{X}\) with \(\sharp\mathsf{X}=N\) cannot have fewer boundary points than the canonical configuration \(\overline{\mathsf{X}}_{N}\). Since in \(\overline{\mathsf{X}}_{N}\) all the interior points, i.e., the points in \(\overline{\mathsf{X}}_{N}\setminus\partial\overline{\mathsf{X}}_{N}\), have exactly \(6\) nearest neighbors, this must be the case also for any minimizer of the energy, whence we deduce the desired claim. We highlight that the proof of Theorem 3.5 works verbatim for the case \(\delta=0\), providing a proof of the crystallization for the sticky disc potential that is slightly different from [8] and [6]. Finally, since the minimizers of the soft affine problem dealt with here coincide with those of the sticky disc problem, the results on the asymptotic Wulff shape [1] as well as the estimates on the fluctuations [10, 4, 3] hold verbatim in our case (see also [5] for a purely discrete result concerning the uniqueness of minimizers). Acknowledgments: LDL is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). **Notation:** In what follows \(\mathbb{N}\) denotes the set of positive integer numbers and \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). ## 2. Preliminaries on planar graphs Here we collect some notions and notation on planar graphs that will be adopted in this paper.
Let \(\mathsf{X}\) be a finite subset of \(\mathbb{R}^{2}\) and let \(\mathsf{Ed}\) be a given subset of \(\mathsf{E}(\mathsf{X})\), where \[\mathsf{E}(\mathsf{X}):=\left\{\{x,y\}\subset\mathbb{R}^{2}\,:\,x,y\in\mathsf{ X}\,,\,x\neq y\right\}.\] The pair \(\mathsf{G}=(\mathsf{X},\mathsf{Ed})\) is called a _graph_; \(\mathsf{X}\) is called the set of _vertices_ of \(\mathsf{G}\) and \(\mathsf{Ed}\) is called the set of _edges_ (or _bonds_) of \(\mathsf{G}\). Given \(\mathsf{X}^{\prime}\subset\mathsf{X}\) we denote by \(\mathsf{G}_{\mathsf{X}^{\prime}}\) the _subgraph_ (or _restriction_) of \(\mathsf{G}\) generated by \(\mathsf{X}^{\prime}\), defined by \(\mathsf{G}_{\mathsf{X}^{\prime}}=(\mathsf{X}^{\prime},\mathsf{Ed}^{\prime})\) where \(\mathsf{Ed}^{\prime}:=\left\{\{x^{\prime},y^{\prime}\}\in\mathsf{Ed}\,:\,x^{ \prime},y^{\prime}\in\mathsf{X}^{\prime}\right\}.\) **Definition 2.1**.: We say that two points \(x,z\in\mathsf{X}\) are connected and we write \(x\sim z\) if there exist \(M\in\mathbb{N}\) and a _path_ \(x=y_{0},\ldots,y_{M}=z\) such that \(\{y_{m-1},y_{m}\}\in\mathsf{Ed}\) for every \(m=1,\ldots,M\). We say that \(\mathsf{G}_{\mathsf{X}_{1}},\ldots,\mathsf{G}_{\mathsf{X}_{K}}\) with \(K\in\mathbb{N}\) are the _connected components_ of \(\mathsf{G}\) if \(\{\mathsf{X}_{1},\ldots,\mathsf{X}_{K}\}\) is a partition of \(\mathsf{X}\) and for every \(k,k^{\prime}\in\{1,\ldots,K\}\) with \(k\neq k^{\prime}\) it holds \[x_{k}\sim y_{k}\quad\text{for every }x_{k},y_{k}\in\mathsf{X}_{k}\,,\] \[x_{k}\not\sim x_{k^{\prime}}\quad\text{for every }x_{k}\in\mathsf{X}_{k}\,,x_{k^{\prime}}\in \mathsf{X}_{k^{\prime}}\,.\] If \(\mathsf{G}\) has only one connected component we say that \(\mathsf{G}\) is _connected_. We say that \(\mathsf{G}\) is planar if for every pair of (distinct) bonds \(\{x_{1},x_{2}\},\{y_{1},y_{2}\}\in\mathsf{Ed}\), the (open) segments \((x_{1},x_{2})\) and \((y_{1},y_{2})\) have empty intersection. From now on we assume that \(\mathsf{G}=(\mathsf{X},\mathsf{Ed})\) is planar, so that we can introduce the notion of face (see also [6]). By a face \(f\) of \(\mathsf{G}\) we mean any open, bounded, connected component of \(\mathbb{R}^{2}\setminus\left(\mathsf{X}\cup\bigcup_{\{x,y\}\in\mathsf{Ed}}[ x,y]\right)\), which is also simply connected; here \([x,y]\) is the closed segment with extreme points \(x\) and \(y\). We denote by \(\mathsf{F}(\mathsf{G})\) the set of faces of \(\mathsf{G}\) and we set \[O(\mathsf{G}):=\bigcup_{f\in\mathsf{F}(\mathsf{G})}\operatorname{clos}(f)\,.\] We define the Euler characteristic of \(\mathsf{G}\) as \[\chi(\mathsf{G})=\sharp\mathsf{X}-\sharp\mathsf{Ed}+\sharp\mathsf{F}(\mathsf{G})\,,\] and we warn the reader that this may differ from the standard Euler characteristic in graph theory. We just remark that if \(\chi(\mathsf{G})=1\), then \(\mathsf{G}\) is connected. With a little abuse of language we will say that an edge \(\{x,y\}\) lies on a set \(E\subset\mathbb{R}^{2}\) if the segment \([x,y]\) is contained in \(E\).
We classify the edges in \(\mathsf{Ed}\) in the following subclasses: * \(\mathsf{Ed}^{\mathrm{int}}\) is the set of _interior edges_, i.e., of edges lying on the boundary of two (distinct) faces; * \(\mathsf{Ed}^{\mathrm{wire,ext}}\) is the set of _exterior wire edges_, i.e., of edges that do not lie on the boundary of any face; * \(\mathsf{Ed}^{\mathrm{wire,int}}\) is the set of _interior wire edges_, i.e., of edges lying on the boundary of precisely one face but not on the boundary of its closure (or, equivalently, of \(O(\mathsf{G})\)) ; * \(\mathsf{Ed}^{\partial}\) is the set of _boundary edges_, i.e., of edges lying on \(\partial O(\mathsf{G})\). With a little abuse of notation we set \(\partial\mathsf{X}:=\{x\in\mathsf{X}:\exists y\in\mathsf{X}\text{ such that }\{x,y\}\in\mathsf{Ed}^{\partial}\cup\mathsf{Ed}^{ \mathrm{wire,ext}}\}\). We define the _graph-perimeter_ of \(\mathsf{G}\) as \[\mathrm{Per}_{\mathsf{gr}}(\mathsf{G}):=\sharp\mathsf{Ed}^{\partial}+2\sharp \mathsf{Ed}^{\mathrm{wire,ext}}\,.\] According with the definitions introduced above, if \(O(\mathsf{G})\) has simple and closed polygonal boundary and if \(\sharp\mathsf{Ed}^{\mathrm{wire,ext}}=0\), then \(\mathrm{Per}_{\mathsf{gr}}(\mathsf{G})=\sharp\partial\mathsf{X}\). We stress that if \(\mathsf{G}\) has no edges, then \(\mathrm{Per}_{\mathsf{gr}}(\mathsf{G})=\sharp\partial\mathsf{X}=0\). Analogously, for every face \(f\in\mathsf{F}(\mathsf{G})\) one can define the following subclasses of edges delimiting \(f\): * \(\mathsf{Ed}^{\mathrm{wire,int}}(f)\) is the set of edges lying on the boundary of \(f\) but not on the boundary of the closure of \(f\); * \(\mathsf{Ed}^{\partial}(f)\) is the set of edges lying on the boundary of the closure of \(f\). Therefore, the _graph-perimeter_ of a face \(f\) is defined by \[\mathrm{Per}_{\mathsf{gr}}(f):=\sharp\mathsf{Ed}^{\partial}(f)+2\sharp \mathsf{Ed}^{\mathrm{wire,int}}(f).\] Finally, following [6, Sec. 2.6], we define the _defect measure_\(\mu(\mathsf{G})\) of the graph \(\mathsf{G}\), as the number of additional edges that we need to add to \(\mathsf{G}\) to make it triangulated. More precisely: for every face \(f\) with \(\mathrm{Per}_{\mathsf{gr}}(f)=k\), \(k\geq 4\), we triangulate it by adding \(k-3\) edges that connect not already connected vertices and that do not cross each other, thus obtaining a new graph \(\overline{\mathsf{G}}\). Then \(\mu(\mathsf{G}):=\sharp\mathsf{Ed}(\overline{\mathsf{G}})-\sharp\mathsf{Ed}( \mathsf{G})\). ## 3. The soft disc model For every \(0<\delta<\frac{1}{2\sin\frac{\pi}{7}}-1\) let \(\mathscr{V}^{\delta}:[0,+\infty)\to[0,+\infty]\) be the function defined in (1.1) and, for every finite \(\mathsf{X}\subset\mathbb{R}^{2}\), let \(\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})\) be the corresponding energy functional as defined in (1.3). For every \(N\in\mathbb{N}\) we denote by \(\mathcal{A}_{N}\) the set of \(N\)-particle configurations with finite energy, i.e., \(\mathcal{A}_{N}:=\{\mathsf{X}\subset\mathbb{R}^{2}\,:\,\sharp\mathsf{X}=N\,, \,\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})<+\infty\}\) and we set \(\mathcal{A}:=\bigcup_{N\in\mathbb{N}}\mathcal{A}_{N}\). For every \(\mathsf{X}\in\mathcal{A}\), we denote by \(\mathsf{G}(\mathsf{X})\) the _graph generated by \(\mathsf{X}\), i.e., \(\mathsf{G}(\mathsf{X})=(\mathsf{X},\mathsf{Ed}(\mathsf{X}))\), where \(\mathsf{Ed}(\mathsf{X}):=\{\{x,y\}\,:\,x,y\in\mathsf{X},\,1\leq|x-y|\leq 1+ \delta\}\). 
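Before proceeding, here is a small numerical illustration of the objects just introduced (this snippet is not part of the paper): it evaluates the potential (1.1), extracts the bond set \(\mathsf{Ed}(\mathsf{X})\), and computes the energy (1.3) for a three-point configuration.

```python
import math
from itertools import combinations

def soft_disc_potential(r, delta):
    """The potential of (1.1): hard core below 1, affine well on [1, 1+delta], zero beyond."""
    if r < 1.0:
        return math.inf
    if r <= 1.0 + delta:
        return -1.0 + (r - 1.0) / delta
    return 0.0

def bond_graph_and_energy(X, delta):
    """Return the edge set Ed(X) = {{x,y} : 1 <= |x-y| <= 1+delta} and the energy (1.3)."""
    edges, energy = [], 0.0
    for (i, p), (j, q) in combinations(enumerate(X), 2):
        r = math.dist(p, q)
        if 1.0 <= r <= 1.0 + delta:
            edges.append((i, j))
        energy += soft_disc_potential(r, delta)
    return edges, energy

# Three points at the vertices of a unit equilateral triangle: three bonds of length 1.
delta = 0.05   # admissible, since 1/(2*sin(pi/7)) - 1 ~ 0.152
X3 = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
print(bond_graph_and_energy(X3, delta))   # -> ([(0, 1), (0, 2), (1, 2)], -3.0)
```

Since \(\mathscr{V}^{\delta}(1)=-1\), a configuration all of whose bonds have unit length has energy equal to minus the number of bonds, a fact used repeatedly below.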
Notice that the finiteness of \(\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})\) implies that \(\mathsf{G}(\mathsf{X})\) is a planar graph and that for any given point \(x\in\mathsf{X}\) there could be at most six edges lying on \(x\). In what follows, with a little abuse of notation, we set \(\mathrm{Per}_{\mathsf{gr}}(\mathsf{X}):=\mathrm{Per}_{\mathsf{gr}}(\mathsf{G}( \mathsf{X}))\) and \(\chi(\mathsf{X}):=\chi(\mathsf{G}(\mathsf{X}))\). Analogously, we set \(\mathsf{F}(\mathsf{X}):=\mathsf{F}(\mathsf{G}(\mathsf{X}))\) and we denote by \(\mathsf{F}^{\triangle}(\mathsf{X})\) the set of the triangular faces of \(\mathsf{X}\), namely the set of faces \(f\in\mathsf{F}(\mathsf{X})\) with \(\operatorname{Per}_{\mathsf{gr}}(f)=3\). By [6, Theorem 3.1], for any \(\mathsf{X}\in\mathcal{A}\) we have that \[\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})=-3\sharp\mathsf{X}+ \operatorname{Per}_{\mathsf{gr}}(\mathsf{X})+\mu(\mathsf{X})+3\chi(\mathsf{X}) +\mathcal{E}_{\mathrm{el}}(\mathsf{X})\,, \tag{3.1}\] where \[\mu(\mathsf{X}):=\sum_{f\notin\mathsf{F}^{\triangle}(\mathsf{X})}( \operatorname{Per}_{\mathsf{gr}}(f)-3)\text{ and }\mathcal{E}_{\mathrm{el}}(\mathsf{X}):=\frac{1}{2}\sum_{ \begin{subarray}{c}x,y\in\mathsf{X}\\ 1<|x-y|\leq 1+\delta\end{subarray}}(1+\mathscr{V}^{\delta}(|x-y|))\,.\] In what follows, for every \(\mathsf{X}\in\mathcal{A}\) we set \[\mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X}):=\operatorname{Per}_{\mathsf{ gr}}(\mathsf{X})+\mu(\mathsf{X})+3\chi(\mathsf{X})+\mathcal{E}_{\mathrm{el}}( \mathsf{X})\,, \tag{3.2}\] so that, in view of (3.1), minimizing \(\mathcal{E}_{\mathscr{V}^{\delta}}\) in \(\mathcal{A}_{N}\) is equivalent to minimizing \(\mathcal{F}_{\mathscr{V}^{\delta}}\) in \(\mathcal{A}_{N}\). Let \(\mathsf{X}\in\mathcal{A}\) have simply closed polygonal boundary. For every \(x\in\partial\mathsf{X}\), let \(\mathsf{l}^{\mathrm{bdry}}(x)\) and \(\mathsf{l}^{\mathrm{inner}}(x)\) be the sets of boundary and interior edges, respectively, emanating from \(x\). Let moreover \(\alpha(x)\) denote the inner angle spanned by the two boundary edges emanating from \(x\). The following result is the analogue of [9, Lemma 1]. **Lemma 3.1**.: _Let \(\mathsf{X}\in\mathcal{A}\) have simply closed polygonal boundary. Then for every \(x\in\partial\mathsf{X}\)_ \[\frac{1}{2}\sum_{e\in\mathsf{l}^{\mathrm{bdry}}(x)}\mathscr{V}^{\delta}(|e|)+ \sum_{e\in\mathsf{l}^{\mathrm{inner}}(x)}\mathscr{V}^{\delta}(|e|)\geq-\frac{ \alpha(x)}{\frac{\pi}{3}}\,. \tag{3.3}\] _Moreover, if equality in (3.3) holds true then \(\alpha(x)=(\sharp\mathsf{l}^{\mathrm{inner}}(x)+\sharp\mathsf{l}^{\mathrm{bdry }}(x)-1)\frac{\pi}{3}\) and \(|e|=1\) for all \(e\in\mathsf{l}^{\mathrm{bdry}}(x)\cup\mathsf{l}^{\mathrm{inner}}(x)\)._ Proof.: Let \(x\in\partial\mathsf{X}\) be fixed and let \(I(x):=\sharp\mathsf{l}^{\mathrm{bdry}}(x)+\sharp\mathsf{l}^{\mathrm{inner}}(x)\). Let moreover \(\alpha_{1}(x),\dots,\alpha_{I(x)-1}(x)\) denote the \(I(x)-1\) angles spanned by the bonds in \(\mathsf{l}^{\mathrm{bdry}}(x)\cup\mathsf{l}^{\mathrm{inner}}(x)\), in such a way that \(\sum_{j=1}^{I(x)-1}\alpha_{j}(x)=\alpha(x)\). If \(\alpha_{j}(x)\geq\frac{\pi}{3}\) for every \(j\), then (3.3) is trivially satisfied. Assume now that \(\alpha_{\overline{j}}(x)=(1-z)\frac{\pi}{3}\) for some \(0\leq z\leq 1-\frac{6}{\pi}\arcsin\frac{1}{2(1+\delta)}\) and let \(l\leq L\) be the lengths of the two bonds spanning \(\alpha_{\overline{j}}(x)\). We notice that (3.3) is proven if we show that \[\frac{1}{2}\mathscr{V}^{\delta}(L)\geq z-\frac{1}{2}\,.
\tag{3.4}\] Figure 1. Reference figure for Lemma 3.1. To this purpose, we first prove that \[L\geq\frac{1}{2\sin\left((1-z)\frac{\pi}{6}\right)}\,, \tag{3.5}\] which, in view of the monotonicity of \(\mathscr{V}^{\delta}\), yields \[\frac{1}{2}\mathscr{V}^{\delta}(L)\geq\frac{1}{2}\mathscr{V}^{\delta}\Big{(} \frac{1}{2\sin\left((1-z)\frac{\pi}{6}\right)}\Big{)}=-\frac{1}{2}-\frac{1}{2 \delta}+\frac{1}{4\delta\sin\left((1-z)\frac{\pi}{6}\right)}\,. \tag{3.6}\] Indeed, let \(\bar{\alpha}\) be the angle formed by the segment \(AC\) and the segment \(CH\) as in Figure 1. Clearly \(\bar{\alpha}\geq\frac{\alpha_{\bar{\jmath}}(x)}{2}\). Moreover, \[L\cos\bar{\alpha}=l\cos(\alpha_{\bar{\jmath}}(x)-\bar{\alpha})\,, \tag{3.7}\] and, since \(\mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})<+\infty\), we have that \[L\sin\bar{\alpha}+l\sin(\alpha_{\bar{\jmath}}(x)-\bar{\alpha})\geq 1\,. \tag{3.8}\] By (3.7) and (3.8), we get \[L\geq\Big{(}\sin\bar{\alpha}+\cos\bar{\alpha}\tan(\alpha_{\bar{\jmath}}(x)- \bar{\alpha})\Big{)}^{-1}=:(f(\bar{\alpha}))^{-1}\,. \tag{3.9}\] Since \(f^{\prime}(\bar{\alpha})=-\cos\bar{\alpha}\tan(\alpha_{\bar{\jmath}}(x)-\bar {\alpha})(\tan\bar{\alpha}+\tan(\alpha_{\bar{\jmath}}(x)-\bar{\alpha}))<0\) and \(\bar{\alpha}\geq\frac{\alpha_{\bar{\jmath}}(x)}{2}\), we have that \(f\) has a maximum at \(\bar{\alpha}=\frac{\alpha_{\bar{\jmath}}(x)}{2}\) and that \(f(\frac{\alpha_{\bar{\jmath}}(x)}{2})=2\sin(\frac{\alpha_{\bar{\jmath}}(x)}{2})\), thus giving (3.5). Now, with (3.6) in hand, we observe that claim (3.4) is proven if we show that \[g(z):=-\frac{1}{2\delta}+\frac{1}{4\delta\sin\left((1-z)\frac{\pi}{6}\right)}-z\geq 0\,. \tag{3.10}\] Notice that \[g^{\prime}(z)=\frac{\pi}{24\delta}\frac{\cos\left((1-z)\frac{\pi}{6}\right)}{\sin^{2}\left((1-z)\frac{\pi}{6}\right)}-1,\] \(g^{\prime}(0)=\frac{\sqrt{3}\pi}{12\delta}-1\geq 0\) for \(0<\delta<\frac{1}{2\sin\frac{\pi}{7}}-1\), and \[g^{\prime\prime}(z)=\frac{\pi^{2}}{144\delta}\frac{1+\cos^{2}\left((1-z)\frac{\pi}{6}\right)}{\sin^{3}\left((1-z)\frac{\pi}{6}\right)}\geq 0\,.\] It follows that \(g(z)\) is monotonically increasing in the interval \([0,1-\frac{6}{\pi}\arcsin\frac{1}{2(1+\delta)}]\), which together with the fact that \(g(0)=0\) implies (3.10). This concludes the proof of (3.3) and shows that if equality holds true in (3.3), then \(\alpha_{j}(x)=\frac{\pi}{3}\) for every \(j=1,\ldots,I(x)-1\). But this yields \[-I(x)+1=\frac{1}{2}\sum_{e\in\mathsf{l}^{\mathrm{bdry}}(x)}\mathscr{V}^{\delta}(|e|)+\sum_{e\in\mathsf{l}^{\mathrm{inner}}(x)}\mathscr{V}^{\delta}(|e|)\geq-I(x)+1\,,\] and hence the inequality above is in fact an equality thus providing the last sentence in the statement. The following result, which is a consequence of Lemma 3.1, is the analogue of [6, Lemma 4.2] in the soft affine case. **Lemma 3.2**.: _Let \(\mathsf{X}\in\mathcal{A}\) be connected and have simple and closed polygonal boundary and suppose that \(\mathsf{X}^{\prime}:=\mathsf{X}\setminus\partial\mathsf{X}\) is non-empty. Then,_ \[\mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X})\geq\mathcal{F}_{\mathscr{V}^{ \delta}}(\mathsf{X}^{\prime})+6\,.
\tag{3.11}\] _Moreover, if equality holds true, then \(\alpha(x)=(\sharp\mathsf{l}^{\mathrm{inner}}(x)+\sharp\mathsf{l}^{\mathrm{bdry}}(x)-1)\frac{\pi}{3}\) for every \(x\in\partial\mathsf{X}\), \(|e|=1\) for every \(e\in\bigcup_{x\in\partial\mathsf{X}}({\mathsf{l}}^{\mathrm{bdry}}(x)\cup{ \mathsf{l}}^{\mathrm{inner}}(x))\), and \(\mu(\mathsf{X})=\mu(\mathsf{X}^{\prime})\)._ Proof.: By (3.1), we get \[\sum_{x\in\partial\mathsf{X}}\Big{(}\frac{1}{2}\sum_{e\in{ \mathsf{l}}^{\mathrm{bdry}}(x)}\mathscr{V}^{\delta}(|e|)+\sum_{e\in{\mathsf{ l}}^{\mathrm{inner}}(x)}\mathscr{V}^{\delta}(|e|)\Big{)}\leq \mathcal{E}_{\mathscr{V}^{\delta}}(\mathsf{X})-\mathcal{E}_{ \mathscr{V}^{\delta}}(\mathsf{X}^{\prime})=-3\sharp\partial\mathsf{X}+ \mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X})-\mathcal{F}_{\mathscr{V}^{ \delta}}(\mathsf{X}^{\prime})\,,\] whence, using that \(\mathsf{X}\) is connected, we deduce that \[\mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X})-\mathcal{F}_{\mathscr{V}^{ \delta}}(\mathsf{X}^{\prime})\geq 3\sharp\partial\mathsf{X}+\sum_{x\in \partial\mathsf{X}}\Big{(}\frac{1}{2}\sum_{e\in{\mathsf{l}}^{\mathrm{bdry}}(x )}\mathscr{V}^{\delta}(|e|)+\sum_{e\in{\mathsf{l}}^{\mathrm{inner}}(x)} \mathscr{V}^{\delta}(|e|)\Big{)}\,. \tag{3.12}\] By (3.12), in view of Lemma 3.1, we obtain \[\mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X})-\mathcal{F}_{\mathscr{V}^{ \delta}}(\mathsf{X}^{\prime})\geq 3\sum_{x\in\partial\mathsf{X}}\Big{(}1- \frac{\alpha(x)}{\pi}\Big{)}=6\,, \tag{3.13}\] where in the last equality we have used the fact that \(\mathsf{X}\) has simple and closed boundary and the Gauss-Bonnet Theorem to deduce that \(\sum_{x\in\partial\mathsf{X}}(\pi-\alpha(x))=2\pi\). Therefore, (3.11) is proven. Moreover, if the equality holds true, then we should have that (3.3) is satisfied with equality for every \(x\in\partial\mathsf{X}\); by Lemma 3.1 this implies that \(\alpha(x)=(\sharp\mathsf{l}^{\mathrm{inner}}(x)+\sharp\mathsf{l}^{\mathrm{bdry}}(x)-1)\frac{\pi}{3}\) for every \(x\in\partial\mathsf{X}\) and \(|e|=1\) for every \(x\in\partial\mathsf{X}\) and for any \(e\in{\mathsf{l}}^{\mathrm{bdry}}(x)\cup{\mathsf{l}}^{\mathrm{inner}}(x)\). It follows that all the faces lying on the boundary are equilateral triangles with unitary side-length. In particular, if equality in (3.11) holds true, then \(\mu(\mathsf{X})=\mu(\mathsf{X}^{\prime})\) so that also the last sentence in the statement is proven. In what follows, for every \(s\in\mathbb{N}_{0}\), we denote by \(H_{s}\) the regular hexagon with side-length \(s\), centered at the origin, and with two horizontal sides. If \(s=0\), then we set \(H_{0}:=\{0\}\). **Definition 3.3** (**Canonical configuration**).: Let \(N\in\mathbb{N}\). If \(N=3s^{2}+3s+1+(s+1)k+j\), with \(s,k,j\in\mathbb{N}\cup\{0\}\), \(0\leq k\leq 5\) and \(0\leq j\leq s\), then the canonical configuration is given by \[\overline{\mathsf{X}}_{N}:=\big{(}H_{s}\cap\mathcal{T}\big{)} \cup\Big{\{}e^{ir\frac{\pi}{3}}(\alpha_{1}+\alpha_{2}e^{i\frac{\pi} {3}})\,:\,\alpha_{1},r\in\mathbb{N}_{0}\,,\alpha_{2}\in\mathbb{N}\,,\alpha_{1 }+\alpha_{2}=s+1\,,0\leq r\leq k-1\Big{\}}\cup\Big{\{}e^{ik\frac{\pi}{3}}(\alpha_{1}+\alpha_{2}e^{i\frac{ \pi}{3}})\,:\,\alpha_{1}\in\mathbb{N}_{0}\,,\alpha_{2}\in\mathbb{N}\,,\alpha_{1 }+\alpha_{2}=s+1\,,\alpha_{2}\leq j\Big{\}}\,,\] where \(\mathcal{T}\) denotes the unit triangular lattice, which (identifying \(\mathbb{R}^{2}\) with \(\mathbb{C}\)) can be taken as \(\{a+be^{i\frac{\pi}{3}}\,:\,a,b\in\mathbb{Z}\}\). This amounts to considering a big regular hexagon with side length \(s\) filled with particles, plus \(k\) additional full sides, plus a final partially filled side with \(j\) particles (see Figure 2).
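To make Definition 3.3 concrete, the following Python sketch (purely illustrative, not part of the paper) computes the decomposition \(N=3s^{2}+3s+1+(s+1)k+j\) and assembles the corresponding point set; the assertion checks that the construction indeed contains exactly \(N\) points.

```python
import cmath

def skj(N):
    """Write N = 3s^2 + 3s + 1 + (s+1)k + j with 0 <= k <= 5 and 0 <= j <= s."""
    s = 0
    while 3 * (s + 1) ** 2 + 3 * (s + 1) + 1 <= N:
        s += 1
    k, j = divmod(N - (3 * s ** 2 + 3 * s + 1), s + 1)
    return s, k, j

def canonical_configuration(N):
    """Points of the canonical configuration as complex numbers: a filled hexagon H_s of
    the unit triangular lattice, plus k full sides of the next ring, plus j extra points."""
    s, k, j = skj(N)
    w = cmath.exp(1j * cmath.pi / 3)                      # lattice generator e^{i pi/3}
    pts = [a + b * w
           for a in range(-s, s + 1) for b in range(-s, s + 1)
           if (abs(a) + abs(b) + abs(a + b)) // 2 <= s]   # hexagonal distance <= s
    for r in range(k):                                    # k complete extra sides
        pts += [cmath.exp(1j * r * cmath.pi / 3) * ((s + 1 - a2) + a2 * w)
                for a2 in range(1, s + 2)]
    pts += [cmath.exp(1j * k * cmath.pi / 3) * ((s + 1 - a2) + a2 * w)
            for a2 in range(1, j + 1)]                    # partially filled side
    return pts, (s, k, j)

# Example: N = 21 gives (s, k, j) = (2, 0, 2) and, by (3.14) below, 6*2 + 0 + 1 = 13
# boundary points; the construction contains exactly N points.
pts, (s, k, j) = canonical_configuration(21)
assert len(pts) == 21 and (s, k, j) == (2, 0, 2)
```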
By construction, \(\mu(\overline{\mathsf{X}}_{N})=\mathcal{E}_{\mathrm{el}}(\overline{\mathsf{X}}_ {N})=\sharp\mathsf{Ed}^{\mathrm{wire}}(\overline{\mathsf{X}}_{N})=0\), \(\chi(\overline{\mathsf{X}}_{N})=1\) and \[\mathrm{Per}_{\mathsf{gr}}(\overline{\mathsf{X}}_{N})=\sharp\partial\overline{ \mathsf{X}}_{N}=\left\{\begin{array}{ll}6s&\text{if }N=3s^{2}+3s+1\\ 6s+k+1&\text{otherwise.}\end{array}\right. \tag{3.14}\] **Lemma 3.4**.: _For every \(N\in\mathbb{N}\), let \(\widetilde{N}:=N-\sharp\partial\overline{\mathsf{X}}_{N}\). If \(N\neq 9\) then the following inequalities hold:_ 1. \(\sharp\partial\overline{\mathbb{X}}_{N}\leq\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}+7\,\). 2. \(\sharp\partial\overline{\mathbb{X}}_{N}\leq\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}+1}+6\,\). Proof.: First we observe that in the case \(N\leq 6\) both inequalities are trivially satisfied, so that we can focus on the case \(N\geq 7\) for the rest of the proof. (i) We divide the proof in a few cases. If \(N=3s^{2}+3s+1\) with \(s\geq 1\,\), by (3.14), we have that \(\widetilde{N}=3(s-1)^{2}+3(s-1)+1\) and hence, again by (3.14), \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=6\,\), which proves the claim (i) in this case. Let us now consider the case \(N=3s^{2}+3s+1+j\) with \(s\geq 1\) and \(1\leq j\leq s\,\). Then, by (3.14), \(\sharp\partial\overline{\mathbb{X}}_{N}=6s+1\,\), \(\widetilde{N}=3(s-1)^{2}+3(s-1)+1+j-1\,\), and \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}}=6(s-1)\) if \(j=1\) and \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}}=6(s-1)+1\) if \(j\geq 2\,\). Therefore, \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=6\) for \(j\geq 2\,\), and \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=7\,\), for \(j=1\,\). This proves the claim (i) also for such a range of parameters. Now we pass to the case \(N=3s^{2}+3s+1+(s+1)k\) with \(s\geq 1\,\), \(1\leq k\leq 5\) and \((s;k)\neq(1;1)\) (the case \(s=k=1\) gives \(N=9\)). Then, by (3.14), \(\sharp\partial\overline{\mathbb{X}}_{N}=6s+k+1\,\), \(\widetilde{N}=3(s-1)^{2}+3(s-1)+1+(s-1+1)(k-1)+s-1\,\), and \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}}=6(s-1)+k\,\), so that \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=7\,\), thus proving (i) also in this case. Finally, we discuss the case \(N=3s^{2}+3s+1+(s+1)k+j\) with \(s\geq 1\,\), \(1\leq k\leq 5\,\), and \(1\leq j\leq s\,\). Then, by (3.14), \(\sharp\partial\overline{\mathbb{X}}_{N}=6s+k+1\,\), \(\widetilde{N}=3(s-1)^{2}+3(s-1)+(s-1+1)k+j-1\,\), and \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}}=6(s-1)+k+1\,\). It follows that \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=6\,\), thus concluding the proof of (i). (ii) Retracing the steps of the proof of (i), we see that the only cases where we need to prove something is when \(\sharp\partial\overline{\mathbb{X}}_{N}-\sharp\partial\overline{\mathbb{X}}_{ \widetilde{N}}=7\), since in all the other cases this difference is \(6\), and (ii) follows from the monotonicity inequality \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}}\leq\sharp\partial\overline {\mathbb{X}}_{\widetilde{N}+1}\). The cases in which the difference is \(7\) are: either \(N=3s^{2}+3s+1+1\); or \(N=3s^{2}+3s+1+(s+1)k\) with \(s\geq 1\,\), \(1\leq k\leq 5\) and \((s;k)\neq(1;1)\,\). 
In the first case, \(\widetilde{N}+1=3(s-1)^{2}+3(s-1)+1+1\), so that by (3.14) it follows that \(\sharp\partial\overline{\mathbb{X}}_{\widetilde{N}+1}=6(s-1)+1\), while \(\sharp\partial\overline{\mathbb{X}}_{N}=6s+1\), so that the claim (ii) follows. Figure 2. The canonical configurations for \(N=1,\dots,21\,\). In the second case, \(\widetilde{N}+1=3(s-1)^{2}+3(s-1)+1+(s-1+1)k\), so that by (3.14) we obtain \(\sharp\partial\overline{\mathsf{X}}_{\widetilde{N}+1}=6(s-1)+k+1\), while \(\sharp\partial\overline{\mathsf{X}}_{N}=6s+k+1\), and the claim (ii) follows also in this case. This concludes the proof of the whole lemma. **Theorem 3.5**.: _Let \(N\in\mathbb{N}\) and let \(\mathsf{X}_{N}\) be a minimizer of \(\mathcal{E}_{\mathscr{V}^{\delta}}\) in \(\mathcal{A}_{N}\). Then \(\mathsf{G}(\mathsf{X}_{N})\) is connected and, up to rotation and translation, \(\mathsf{X}_{N}\) is a subset of the regular triangular lattice with lattice spacing 1. Furthermore, \(\overline{\mathsf{X}}_{N}\) is a minimizer of \(\mathcal{E}_{\mathscr{V}^{\delta}}\) in \(\mathcal{A}_{N}\). Moreover, if \(N\geq 3\), then \(O(\mathsf{G}(\mathsf{X}_{N}))\) has simple and closed polygonal boundary, \(\mathsf{F}(\mathsf{G}(\mathsf{X}_{N}))=\mathsf{F}^{\triangle}(\mathsf{G}( \mathsf{X}_{N}))\) and \(\sharp\mathsf{E}\mathsf{d}^{\mathrm{wire},\mathrm{ext}}(\mathsf{G}(\mathsf{X}_ {N}))=0\)._ Proof.: We preliminarily notice that the claim is satisfied for \(N=1\) and for \(N=2\). In the latter case the minimizer is given by two points at distance equal to one (i.e., by the canonical configuration \(\overline{\mathsf{X}}_{2}\)). Therefore we focus on the case \(N\geq 3\). First, we observe that \(\mathsf{G}(\mathsf{X}_{N})\) is connected, since otherwise we could translate one of its connected components until we create a new bond of length 1, thus strictly decreasing the energy. Analogously, it is easy to see that \(\mathsf{G}(\mathsf{X}_{N})\) does not contain wire edges. Moreover, \(\mathsf{G}(\mathsf{X}_{N})\) has simply closed polygonal boundary \(\Gamma\): if not, we could choose a self-intersection point \(p\) of \(\Gamma\) and rotate one of the components of \(O(\mathsf{G}(\mathsf{X}_{N}))\setminus\{p\}\) around \(p\), until we form another bond of length one, strictly decreasing the energy. It is immediate to check that for \(N=3\) and \(N=4\), the unique (up to rotations and translations) minimizer is given by the canonical configuration. Let us discuss the case \(N=5,6\), by showing first that \(\sharp\partial\mathsf{X}_{N}=N\). To this aim, we fix any point \(\bar{x}\in\mathsf{X}_{N}\), and we consider the half-lines \(\ell_{1},\ldots,\ell_{N-1}\) starting from \(\bar{x}\) and passing through the other points of \(\mathsf{X}_{N}\) (even those not connected to \(\bar{x}\) by a bond). Let moreover \(\alpha_{1}(\bar{x})\leq\ldots\leq\alpha_{N-1}(\bar{x})\) denote the amplitude of the angles formed by two consecutive half-lines. Then \(\alpha_{N-1}(\bar{x})\geq\frac{2\pi}{N-1}\geq\frac{2\pi}{5}\). If \(W\) denotes the corresponding open wedge delimited by the half-lines defining \(\alpha_{N-1}(\bar{x})\), then we have that \(W\cap O(\mathsf{G})=\emptyset\), since the maximum angle that can appear in a triangular face is smaller than \(\frac{2\pi}{5}\). This directly proves that \(\bar{x}\) is not an interior point, thus showing that \(\sharp\partial\mathsf{X}_{N}=N\). 
Since \(\sharp\partial\mathsf{X}_{N}=N\geq\sharp\partial\overline{\mathsf{X}}_{N}= \mathrm{Per}_{\mathsf{gr}}(\overline{\mathsf{X}}_{N})=(\mathrm{Per}_{\mathsf{gr }}+\mu+\mathcal{E}_{\mathrm{el}})(\overline{\mathsf{X}}_{N})\), we get that \(\mu(\mathsf{X}_{N})=\mathcal{E}_{\mathrm{el}}(\mathsf{X}_{N})=0\), i.e., the claim. Finally, we consider \(N\geq 7\), and we prove the statement by induction on \(N\). We first show that \(\sharp\partial\mathsf{X}_{N}\geq\sharp\partial\overline{\mathsf{X}}_{N}\). Indeed, assume by contradiction that \[\sharp\partial\mathsf{X}_{N}\leq\sharp\partial\overline{\mathsf{X}}_{N}-1\,. \tag{3.15}\] Since \(N\geq 7\), we have that \(N^{\prime}:=N-\sharp\partial\mathsf{X}_{N}\geq N-\sharp\partial\overline{ \mathsf{X}}_{N}+1\geq 2\). Moreover, we set \(\widetilde{N}:=N-\sharp\partial\overline{\mathsf{X}}_{N}\leq N^{\prime}-1\). Assume first that \(N\neq 9\). Then, by Lemma 3.4(i), we have that \(\sharp\partial\overline{\mathsf{X}}_{N}\leq\sharp\partial\overline{\mathsf{X}}_ {\widetilde{N}}+7\leq\sharp\partial\overline{\mathsf{X}}_{N^{\prime}}+7\), so that \(\sharp\partial\mathsf{X}_{N}\leq\sharp\partial\overline{\mathsf{X}}_{N^{\prime}}+6\). Recall that \(\mathsf{X}^{\prime}_{N}=\mathsf{X}_{N}\setminus\partial\mathsf{X}_{N}\), so that \(\sharp\mathsf{X}^{\prime}_{N}=N^{\prime}\). By Lemma 3.2 and using the inductive assumption that \(\overline{\mathsf{X}}_{N^{\prime}}\) is a minimizer of \(\mathcal{E}_{\mathscr{V}^{\delta}}\) (and hence of \(\mathcal{F}_{\mathscr{V}^{\delta}}\)) in \(\mathcal{A}_{N^{\prime}}\), we thus deduce that \[\mathcal{F}_{\mathscr{V}^{\delta}}(\mathsf{X}_{N})\geq\mathcal{F}_{\mathscr{V}^{ \delta}}(\mathsf{X}^{\prime}_{N})+6\geq\mathcal{F}_{\mathscr{V}^{\delta}}( \overline{\mathsf{X}}_{N^{\prime}})+6\geq\mathcal{F}_{\mathscr{V}^{\delta}}( \overline{\mathsf{X}}_{N})\,, \tag{3.16}\] where in the last inequality we have used that \(\sharp\partial\overline{\mathsf{X}}_{N}\leq\sharp\partial\overline{\mathsf{X}}_ {N^{\prime}}+6\) and Lemma 3.4. It follows that all the inequalities above are actually equalities, since \(\mathsf{X}_{N}\) is a minimizer. In particular, \(\mu(\mathsf{X}_{N})=0\) and \(|e|=1\) for every \(e\in\mathsf{E}\mathsf{d}(\mathsf{X}_{N})\); but this implies that \(\sharp\partial\mathsf{X}_{N}=\sharp\partial\overline{\mathsf{X}}_{N}\), thus contradicting (3.15). Let us now consider the case \(N=9\). In this case, if (3.15) holds true, then \(N^{\prime}:=9-\sharp\partial\mathsf{X}_{9}\geq 2\) and \(\widetilde{N}:=9-\sharp\partial\overline{\mathsf{X}}_{9}=1\) so that \(\sharp\partial\overline{\mathsf{X}}_{9}=\sharp\partial\overline{\mathsf{X}}_ {\widetilde{N}}+8\leq\sharp\partial\overline{\mathsf{X}}_{N^{\prime}}+6\), where in the last inequality we have used that \(N^{\prime}\geq 2\) so that, by (3.14), \(\sharp\partial\overline{\mathsf{X}}_{N^{\prime}}\geq 2\geq 2+\sharp \partial\overline{\mathsf{X}}_{\widetilde{N}}\,\). By (3.15), we thus get that \(\sharp\partial\mathsf{X}_{9}\leq\sharp\partial\overline{\mathsf{X}}_{N^{ \prime}}+5\,\). Therefore, by arguing as in (3.16), we get again a contradiction. It follows that in any case \(\sharp\partial\mathsf{X}_{N}\geq\sharp\partial\overline{\mathsf{X}}_{N}\,\).
Then, \[(\mathrm{Per}_{\mathsf{gr}}+\mu+\mathcal{E}_{\mathrm{el}})(\mathsf{X}_{N}) \geq\sharp\partial\mathsf{X}_{N}\geq\sharp\partial\overline{\mathsf{X}}_{N}=( \mathrm{Per}_{\mathsf{gr}}+\mu+\mathcal{E}_{\mathrm{el}})(\overline{\mathsf{X} }_{N})\,,\] which implies that the inequalities above are actually equalities and hence that \(\mu(\mathsf{X}_{N})=\mathcal{E}_{\mathrm{el}}(\mathsf{X}_{N})=0\) and that \(\sharp\partial\mathsf{X}_{N}=\sharp\partial\overline{\mathsf{X}}_{N}\,\). Therefore, \(\overline{\mathsf{X}}_{N}\) is a minimizer of \(\mathcal{E}_{\mathscr{V}^{\delta}}\) in \(\mathcal{A}_{N}\,\), thus concluding the proof of the theorem.
2307.10452
Mass Loss in Evolved Stars
Intense mass loss through cool, low-velocity winds is a defining characteristic of low-to-intermediate mass stars during the asymptotic giant branch (AGB) evolutionary stage. Such winds return up to ~80% of the initial stellar mass to the interstellar medium and play a major role in enriching it with dust and heavy elements. A challenge to understanding the physics underlying AGB mass loss is its dependence on an interplay between complex and highly dynamic processes, including pulsations, convective flows, shocks, magnetic fields, and opacity changes resulting from dust and molecule formation. I highlight some examples of recent advances in our understanding of late-stage stellar mass loss that are emerging from radio and (sub)millimeter observations, with a particular focus on those that resolve the surfaces and extended atmospheres of evolved stars in space, time, and frequency.
Lynn D. Matthews
2023-07-19T20:46:31Z
http://arxiv.org/abs/2307.10452v1
# Mass Loss in Evolved Stars ###### Abstract Intense mass loss through cool, low-velocity winds is a defining characteristic of low-to-intermediate mass stars during the asymptotic giant branch (AGB) evolutionary stage. Such winds return up to \(\sim\)80% of the initial stellar mass to the interstellar medium and play a major role in enriching it with dust and heavy elements. A challenge to understanding the physics underlying AGB mass loss is its dependence on an interplay between complex and highly dynamic processes, including pulsations, convective flows, shocks, magnetic fields, and opacity changes resulting from dust and molecule formation. I highlight some examples of recent advances in our understanding of late-stage stellar mass loss that are emerging from radio and (sub)millimeter observations, with a particular focus on those that resolve the surfaces and extended atmospheres of evolved stars in space, time, and frequency. stars: AGB - stars: mass loss - stars: winds, outflows - masers IAU Symposium No. 380, 2023 ## 1 Introduction Asymptotic giant branch (AGB) stars represent the final thermonuclear burning stage in the life of low-to-intermediate mass stars, including stars like the Sun. The AGB marks the second ascent of the red giant branch for these stars, following the depletion of their core hydrogen supply and the completion of core helium burning. The internal changes to the structure of the star during the AGB cause the effective temperature to cool to \(\sim\)2000-3000 K, while to maintain hydrostatic equilibrium, the star expands to several hundred times its previous size--reaching a diameter of several astronomical units (AU) (\(\sim 6\times 10^{13}\) cm). At the same time, the resulting stellar luminosity increases to \(\sim\)5000-10,000 \(L_{\odot}\). AGB stars become unstable to pulsations and typically undergo radial pulsations with periods of order 1 year, accompanied by significant changes in the visible light output of the star (as high as \(\Delta m_{V}\sim\)8 mag). A general overview of the properties of AGB stars can be found in Habing & Olofsson (2003). A consequence of the low effective temperatures of AGB stars is that molecules and dust are able to form and survive in their extended atmospheres. Importantly, the dust that forms helps to drive copious rates of mass loss (\(\dot{M}\sim 10^{-8}\) to \(10^{-4}M_{\odot}\) yr\({}^{-1}\)) through cool, dense, low-velocity winds (\(V_{\rm outflow}\sim 10\)-20 km s\({}^{-1}\)). These winds are thus over a million times stronger than the current solar wind. The dramatic mass loss that occurs during the AGB evolutionary phase has implications for a wide range of problems in astrophysics. The details of AGB mass loss (including its duration, as well as the fraction of the initial stellar mass that is shed) dramatically impact stellar evolutionary tracks, affecting the maximum luminosity a given star will reach and the type of stellar remnant that it will ultimately leave behind (e.g., Rosenfield _et al._ 2014; Kalirai _et al._ 2014). The mass lost by AGB stars accounts for \(\gtrsim\)50% of the dust and heavy element enrichment in the Galaxy, thus providing a primary source of raw material for future generations of stars and planets (Tielens _et al._ 2005; Karakas 2010).
And for extragalactic astronomy and cosmology, accurate prescriptions for AGB mass loss are crucial for stellar population synthesis calculations (e.g., Salaris _et al._ 2014; Villaume _et al._ 2015), for understanding dust production and composition in external galaxies (e.g., Narayanan _et al._ 2021), for interpreting the integrated starlight of distant galaxies (e.g., McGaugh & Schombert 2014), and for devising prescriptions of gas recycling and chemical evolution in galaxy models (e.g., Leitner & Kravtsov 2011; Gan _et al._ 2019). This article does not attempt a comprehensive review of AGB mass loss (see instead, Hofner & Olofsson 2018; Decin 2021). Its main focus is to highlight some of the unique insights that can be gained from observations at cm and (sub)mm wavelengths that resolve AGB stars in space, time, and frequency. ## 2 Challenges to Understanding AGB Winds and Mass Loss In contrast to luminous hot stars where the winds are driven by atomic line opacity (e.g., Lamers & Cassinelli 1999), AGB winds are thought to be primarily dust-driven, with radiation pressure on dust grains transferring momentum to the gas through absorption and/or scatting, resulting in material being dragged outward to power a quasi-steady wind. This basic theoretical framework for AGB winds was established roughly half a century ago (e.g., Wickramasinghe _et al._ 1966; Kwok 1975). However, despite decades of effort, we still lack a complete and fully predictive theory of AGB mass loss (see Hofner & Olofsson 2018). To first order, dust driving appears to work relatively well for subsets of AGB stars with carbon-rich atmospheres (C/O\(>1\)), as the carbonaceous grains that are present tend to have high opacity to stellar radiation, enabling efficient momentum transfer and wind driving. However, more generally, this model has limitations. For example, growing empirical evidence suggests that real AGB winds may often deviate significantly from the idealized picture of steady, spherical symmetric outflows (e.g., Nhung _et al._ 2015; Le Bertre _et al._ 2016; Decin _et al._ 2020). Furthermore, the majority of AGB stars have oxygen-rich chemistries (C/O\(<1\)), and the silicate-rich grains that form in their extended atmospheres generally have insufficient infrared opacity to drive the winds with the efficiency needed to account for the observed mass-loss rates. Hofner _et al._ (2016) showed that the effects of photon scattering may help to alleviate this problem. Nonetheless, a persistent conundrum is that grains require sufficiently cool temperatures (\(\sim\)1000-1500 K) and low densities to form and survive, but such conditions are typically not reached interior to \(r\sim 2-3R_{\star}\) (i.e., \(r\sim\)6-7 AU) around a typical AGB star. Thus some additional process is required to transport material from the stellar "surface" into the wind launch region. It is now widely believed that pulsation and/or convection play key roles in facilitating AGB mass loss (e.g., Willson & Bowen 1985; Hofner 2016; McDonald _et al._ 2018). In broad terms, the interplay between pulsation and convection produces shock waves in the extended atmosphere, pushing gas outward; dust formation subsequently occurs in the wake of the shock; and finally, radiation pressure on the resulting grains drags material outward to power the wind (see Figure 2 of Hofner & Olofsson 2018). However, the underlying physics is highly complex, and many details are poorly understood and poorly constrained observationally. 
## 3 Insights from Studies of Large-scale Circumstellar Ejecta For decades, a primary means of studying AGB mass loss has been through observations of the spatially extended circumstellar envelopes (CSEs) of chemically enriched gas and dust that are a ubiquitous feature of these stars. These CSEs may be observed using a wide variety of tracers, including molecular line emission, such as CO (Knapp _et al._ 1998; De Beck _et al._ 2010) or other thermal lines (Patel _et al._ 2011; Claussen _et al._ 2011); far-infrared emission from dust (Young _et al._ 1993; Cox _et al._ 2012), and in some cases, scattered optical light (Mauron & Huggins 2006); far-ultraviolet continuum (Martin _et al._ 2007; Sahai & Stenger 2023), or H i 21-cm line emission from atomic hydrogen (Gerard & Le Bertre 2006; Matthews _et al._ 2013). Historically, AGB CSEs were typically envisioned and modeled as spherically symmetric shells, but many of the aforementioned studies show clearly that CSE morphologies can be extraordinarily diverse. Depending on the age of the central star, its mass-loss rate, and the particular observational tracer, the observed extent of the CSE can range from tens of thousands of AU to a parsec or more, and properties of the CSE can be dramatically shaped by the presence of (sub)stellar companions (Maercker _et al._, 2012; Aydi & Mohamed, 2022) or the star's motion through the surrounding interstellar medium (e.g., Cox _et al._, 2012; Martin _et al._, 2007; Villaver _et al._, 2012; Matthews _et al._, 2013). Global studies of CSEs supply a wide array of fundamental information on the mass-loss properties of evolved stars, including measurements of the mass-loss rate and outflow speed. In addition, they can provide clues on the nature of the central star (age, temperature, initial mass), the timescale of the mass-loss history, and the mass-loss geometry (spherical, bipolar, etc.). Despite the long history of studies of AGB CSEs, observations using the latest generation of radio telescopes continue to yield new insights and surprises. One recent example is the ATOMIUM project1, an Atacama Large Millimeter/submillimeter Array (ALMA) Large Project that targeted a sample of AGB stars and red supergiants in the 214-270 GHz range with the goal of obtaining a better understanding of the chemical and physical processes that govern red giant winds (Decin _et al._, 2020; Gottlieb _et al._, 2022). Results to date show that asphericity appears to be the norm among AGB ejecta and that there is a correlation between the morphology of AGB ejecta and the current mean mass-loss rate. This program has also added to growing evidence that long-period companions (\(P>\)1 yr) commonly play a role in shaping CSEs, and that a common mechanism controls the wind morphology of both AGB stars and planetary nebulae (Decin _et al._, 2020). Footnote 1: [https://fys.kuleuven.be/ster/research-projects/aerosol/atomium](https://fys.kuleuven.be/ster/research-projects/aerosol/atomium) Footnote 2: [https://www.astro.uu.se/deathstar/index.html](https://www.astro.uu.se/deathstar/index.html) Another Large ALMA Project aimed at studying AGB ejecta is DEATHSTAR3, which has used the ALMA Compact Array to obtained spatially resolved CO measurements and line profiles for a sample of \(\sim\)70 chemically diverse AGB stars. Results to date show that large-scale asymmetries and complex velocity profiles are common. 
Future radiative transfer modeling is underway to determine accurate mass-loss rates and temperature distributions of the gas for the sample (Ramstedt _et al._, 2020; Andriantsaralaza _et al._, 2021). Meanwhile, the NESS4 program has been conducting a volume-limited survey of \(\sim\)850 evolved stars in CO and in the sub-mm continuum using the APEX and JCMT telescopes, with the goal of measuring outflow parameters, gas-to-dust ratios, and other information critical for characterizing the mass-loss histories of a large sample of stars (Scicluna _et al._, 2022). It is worth emphasizing that single-dish projects like NESS remain a valuable complement to interferometric surveys such as ATOMIUM and DEATHSTAR, owing to their ability to target larger samples of stars and to characterize spatially extended and diffuse molecular emission in CSEs which can be resolved out in interferometric measurements. Footnote 4: [https://evolvedstars.space](https://evolvedstars.space) ## 4 Advances in Atmospheric Modeling of AGB Stars The complex physics of AGB star atmospheres makes modeling them both challenging and computationally expensive. Approximations of local thermodynamic equilibrium (LTE) break down in the dynamic, time-varying conditions of AGB atmospheres, and a wide range of physics needs to be included (pulsation, convection, dust formation, etc.) to produce meaningful results. Furthermore, because of the enormous spatial extents of AGB star atmospheres and outflows, the relevant spatial scales required in the model can span many orders of magnitude, ranging from the scales of shock regions and sub-surface convective cells (\(\ll R_{\star}\)) to scales of \(>1000R_{\star}\) (\(>10^{16}\) cm) as required to fully trace the evolution of temperature, density, and composition of the wind. Until recently, these challenges have often meant relying on 1D models with simplified physics (e.g., Ireland _et al._, 2011; Liljegren _et al._, 2018). While instructive for some applications, these models have important limitations. For example, since the effects of convection are inherently 3D, the result is a blurring of the distinction between pulsation and convective processes in 1D models (see Freytag & Hofner 2023). Fortunately, computational advances have begun enabling sophisticated new 3D radiation-hydrodynamical models that are able to overcome such limitations by incorporating a wide range of relevant physics, including radiative transfer, frequency-dependent opacities, pulsation, convection, shocks, dust formation, grain growth and evaporation, and wind acceleration (e.g., Freytag _et al._ 2017; Freytag & Hofner 2023). Figure 1 shows two time sequences of images of bolometric surface intensity from hydrodynamic simulations of AGB stars from Freytag & Hofner (2023). The frames are separated by a few months. Both models have a similar luminosity, but the \(1M_{\odot}\) model (top) exhibits a lower surface gravity, a more extended atmosphere, and more efficient dust formation, while the \(1.5M_{\odot}\) model (bottom) displays a smaller radius, a better defined surface, and smaller, more granular surface features. While we may not yet have a fully predictive theory of stellar mass loss, models such as these are now giving us incredibly detailed predictions that can be confronted with observations. ## 5 Zooming into the Action Studies of the large-scale CSEs (Section 3) remain an invaluable tool for characterizing AGB mass loss.
Figure 1: Time sequences of bolometric surface intensity for AGB stars from the 3D CO5BOLD hydrodynamic models of Freytag & Höfner (2023). A 1.0 \(M_{\odot}\) model is shown in the top row and a 1.5 \(M_{\odot}\) model in the bottom row. Snapshots are spaced by 8 and 14 months (top) and 3.5 and 7 months (bottom), respectively, from the starting frame. The size of each box is \(\sim\)5.6 AU across. However, directly confronting the types of highly detailed models described above, and solving many of the outstanding puzzles related to the launch and geometry of AGB winds and their relationship to stellar pulsations, shocks, convection, and other dynamic phenomena, demands additional types of observations. In particular, there is a need for observations that:
In contrast, H\({}_{2}\)O and OH masers tend to arise at successively larger radii (\(\gtrsim 10^{14}\) cm and \(\gtrsim 10^{15}\) cm, respectively), beyond the wind launch zone (e.g., Dickinson, 1978). Different transitions and isotopologues of SiO are further segregated according to the specific combinations of temperature and density that are necessary to produce masing in each respective line. In addition to their favorable location in the atmosphere, the compact sizes and high brightness temperatures (often \(>10^{6}\) K) of SiO masing regions provide the advantage of enabling observations of the masers with extraordinarily high angular resolution (\(<\)1 mas) using very long baseline interferometry (VLBI) techniques. For example, the longest baseline of the Very Long Baseline Array (VLBA) of \(\sim\)8600 km gives an angular resolution of \(\sim\)0.5 mas at 43 GHz, corresponding to a spatial resolution of \(\sim\)0.1 AU (\(\sim 0.05R_{\star}\)) for a star at 200 pc. ### Spatial Distributions Thanks to VLBI studies of SiO masers in a number of AGB stars that have been undertaken since the 1990s, it is now well established that SiO masers in AGB stars are typically found to lie (in projection) in ring-like or partial ring-like structures, with a mean radius of roughly twice that of the stellar photosphere (e.g., Diamond _et al._ 1994; Cotton _et al._ 2006; Imai _et al._ 2010). Evidence for spatial segregation is observed between different SiO transitions and isotopologues, allowing them to be used as probes of changes in physical conditions and gas kinematics over scales \(\ll R_{\star}\) (e.g., Desmurs _et al._ 2000; Wittkowski et al. 2007). This information also provides important constraints on maser pumping models (e.g., Humphreys _et al._ 2002; Gray _et al._ 2009). Unfortunately, a persistent challenge has been that owing to bandwidth limitations of previous generations of instruments, it was generally not possible to observe different transitions strictly simultaneously. This, coupled with the typical use of self-calibration procedures (which can erase absolute astrometric information), meant that there has historically been uncertainty regarding the astrometric alignment between different transitions, as well as their locations relative to the central star. While approximate methods can be used to align the different measurements (e.g., Desmurs _et al._ 2000), in many cases, lingering uncertainties can be as high as several mas and potentially result in ambiguities in the interpretation (see, e.g., Soria-Ruiz _et al._ 2004). One example of important progress in overcoming this challenge has been achieved using the Korean VLBI Network (KVN) Multi-Band System (Han _et al._ 2008) together with the so-called frequency phase transfer (FPT) technique (Dodson _et al._ 2014). This approach has enabled simultaneous observations of up to five SiO and H\({}_{2}\)O transitions in several evolved stars (e.g., Dodson _et al._ 2014; Yoon _et al._ 2018; Kim _et al._ 2018). Currently, maximum KVN baselines are \(\sim\)450 km. However, future extension of this technique to longer baselines would be highly desirable to achieve even finer resolution of individual maser-emitting clumps and to enable improved astrometric precision for following their proper motions over time. 
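As a quick check of the scales quoted above (an illustrative conversion, not taken from the article), the small-angle relation \(\mathrm{size\,[au]}=\theta\,[\mathrm{arcsec}]\times d\,[\mathrm{pc}]\) reproduces the quoted linear resolution:

```python
def linear_scale_au(theta_mas, distance_pc):
    """Projected linear size (in au) subtended by an angle of theta_mas milliarcseconds
    at a distance of distance_pc parsecs (small-angle approximation)."""
    return theta_mas / 1000.0 * distance_pc

# A ~0.5 mas beam at 43 GHz resolves ~0.1 au for a star at 200 pc,
# i.e. a small fraction of the radius of a several-AU-diameter AGB star.
print(linear_scale_au(0.5, 200))   # -> 0.1
```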
### Magnetic Field Measurements Another type of investigation that is possible through observations of stellar masers, including SiO masers, is the study of polarization and magnetic fields in circumstellar environments (see the overview by Vlemmings 2012). For example, in a VLBA study of SiO masers in the OH/IR star OH 44.8-2.3, Amiri _et al._ (2012) measured linear polarization of up to 100% in individual maser clumps, enabling them to map out the magnetic field vectors surrounding the star. For the brightest maser clump they also found evidence of circular polarization, enabling estimation of the magnetic field strength (1.5\(\pm\)0.3 G). Intriguingly, both the distribution of the SiO maser clumps and the orientation of the magnetic field vectors surrounding this star point to a preferred outflow direction for the stellar wind, hinting that a dipole magnetic field may play a role in shaping and defining the outflow. These findings in support of a non-spherically symmetric outflow complement other recent findings pointing to similar trends based on the study of molecular line emission in AGB stars on larger scales (e.g., Decin _et al._ 2020; Hoai _et al._ 2022b; Winters _et al._ 2022). ### Masers in Carbon-rich AGB Stars As noted above, SiO masers are generally absent in AGB stars with carbon-rich chemistries. However, masing action in C-type stars has been observed in a few other species, including HCN (e.g., Omont _et al._ 1989; Izumiura _et al._ 1987; Bieging 2001; Menten _et al._ 2018). HCN is the most common molecule in the atmospheres of C stars after H\({}_{2}\) and CO, although its masing properties have been relatively little studied to date compared with SiO masers in O-rich stars. If we wish to study the inner regions of the CSEs of C-type stars using HCN masers (in a manner analogous to what is possible using SiO masers in O-type AGB stars) it is helpful to target higher \(J\), vibrationally-excited states of HCN, where the opacity is lower. Studies of these transitions have been limited until now, owing to a dearth of observational facilities equipped with receivers covering the necessary frequency range (\(\nu>\)176 GHz). However, recently this has begun to change. For example, Jeste _et al._ (2022) surveyed a sample of 13 C-type stars using the APEX telescope and bands centered at 180 GHz, 230 GHz, and 345 GHz, respectively, providing access to 26 different HCN transitions. Masing was observed in several different transitions, including the HCN (0,11e,0) \(J=2-1\) (\(v_{0}\)=177.2 GHz) line, which was detected in 11 targets, suggesting that it is a common feature of carbon stars. Furthermore, the observed velocity extents of theses masers indicate that the lines are originating in the acceleration zone where dust is forming, implying they have the potential to serve as an important new diagnostic tool for the study of wind launching in C-type AGB stars. For stars with multi-epoch observations, clear changes in the line profile were seen with time (e.g., Figure 2), including in some cases, over the course of only a few days. ### The Time Domain #### 6.4.1 Global Measurements Adding a temporal dimension of maser studies significantly expands what we can learn about the time-varying atmospheres of AGB stars compared with single-epoch observations. As noted in Section 1, radial pulsations are a defining characteristic of AGB stars, and these commonly have periods of order 1 year. 
These are accompanied by changes in the visible light output of the star by factors of up to a thousandfold (e.g., Reid & Goldston 2002 and references therein). In the case of SiO masers, it has long been recognized that variations in the SiO line profiles are correlated with the pulsation cycle. For example, for Mira-type variables, the study of Pardo _et al._ (2004) found a correlation between the integrated intensity of the SiO \(v\)=1, \(J=1-0\) line at 43.1 GHz and both the infrared and optical light curves. The masers tend to vary in phase with the infrared, but lag the optical light curve phase by \(\sim\)0.05 to 0.2. The authors cited this as evidence that the SiO masers must be radiatively pumped. However, secular variations of the SiO masers not obviously linked with the pulsation cycle were also seen. While there have been numerous time-domain studies of SiO and other stellar masers over the past few decades, our ability to fully exploit and interpret the results for understanding AGB star atmospheres and mass-loss (as well as the underlying maser physics) has been hampered by several limitations of these studies. Among these are: (i) limited instrumental bandwidths (which have precluded simultaneously monitoring multiple maser transitions); (ii) limited spectral resolution (which obscures the complex velocity structure of the line and may Figure 2: Observed variability of the HCN (0,11e,0) \(J=2-1\) maser at 177.2 GHz in the carbon star IRC+10216. The different colored lines show the results from different observing dates. From Jeste _et al._ (2022). make it impossible to discern subtle changes with time); (iii) limited signal-to-noise ratios (preventing the detection of weaker lines); (iv) sample selection biases [e.g., exclusion of semi-irregular and irregular (non-Mira-type) variables]; (v) and monitoring programs which are either short-lived (a few years or less) and/or sparsely sampled (observing cadences of \(\geq\)1 month), thus producing observations which are unable to sample all relevant dynamical timescales for the stars. Fortunately, recent progress has been made in nearly all of these areas. One example of the power of simultaneously monitoring multiple lines with a wide frequency band is the study by Rizzo _et al._ (2021), which surveyed 67 O-rich AGB stars and red supergiants between \(\lambda\)7 mm and \(\lambda\)1 mm, targeting SiO rotational transitions between \(J=1-0\) and \(J=5-4\), vibrational numbers \(v\)=0 to 6, and 3 different isotopologues (\({}^{28}\)SiO, \({}^{29}\)SiO, and \({}^{30}\)SiO). This study resulted in the detection of several new SiO lines in many of the targets, thus revealing the fascinating complexity of their multi-wavelength SiO spectra. Among these was first detection of an SiO \(v\)=6 line. Additionally, dramatic variations in the line profiles of some targets were seen on timescales as short as \(\sim\)2 weeks. Evidence for SiO maser variability over even shorter timescales was reported by Gomez-Garrido _et al._ (2020). These authors performed daily monitoring of a sample of 6 stars and found evidence of rapid (\(\sim\)1 day) intensity variations of \(\sim\)10-25% in multiple SiO lines in two semi-regular variables (RX Boo and RT Vir). Similar variations were not seen in the Mira-type variables in the sample. The authors postulated that the semi-regular variables may have intrinsically smaller maser-emitting clumps and more chaotic shock behaviors in their atmospheres. 
However, high-cadence VLBI monitoring observations of semi-regular variables will be needed to test these ideas and improve our understanding of these phenomena. #### 6.4.2 Spatially and Temporally Resolved Imaging Spectroscopy As described above, multi-epoch maser measurements provide important insights into the time-varying behavior of AGB star atmospheres. When this is combined with spatially resolved measurements (particularly with VLBI resolution), our ability to interpret the results in a physically meaningful way is significantly enhanced (e.g., Richards _et al._ 1999; Gray & Humphreys 2000; Phillips _et al._ 2001; Wittkowski _et al._ 2007). Undoubtedly one of the most spectacular examples of the power of spatially resolved maser monitoring observations for the study of evolved stars is the 78-epoch study of the SiO \(v\)=1, \(J=1-0\) masers in the Mira variable TX Cam undertaken by Gonidakis _et al._ (2013). Using the VLBA, these authors observed the star on a \(\sim\)2-4 week cadence over the course of nearly 5 years, resulting in a dramatic "movie" of the star's evolving atmosphere. This study confirmed that the proper motions of SiO maser clumps can be used to trace gas motions close to the stellar photosphere, revealing both the expansion and infall of gas. The width and boundary of the SiO maser "ring" in TX Cam (actually a shell seen in projection) was found to vary with stellar pulsation phase, and evidence for the creation of shocks with velocities of \(\sim\)7 km s\({}^{-1}\) was observed during each pulsation cycle. These shocks in turn affected the intensity and variability of the masers. Importantly, the TX Cam observations showed no evidence of strong shocks (\(>\)10 km s\({}^{-1}\)), in agreement with past analysis of radio continuum light curves of other AGB stars (Reid & Menten 2007; see also Section 7). This supports a model where stronger shocks are damped by the time they reach \(r\sim 2R_{\star}\). Additionally, the distribution and velocity structure of the masers are strongly suggestive of a bipolar outflow (Figure 3), adding to evidence that such geometries are in fact commonplace for AGB mass loss (see also Sections 2 & 3). Although the TX Cam movie is now a decade old, it is worth highlighting again here to emphasize the incredible scientific richness of data sets of this kind for understanding the physics of AGB star atmospheres and to underscore the importance of undertaking similar observations in the future for additional AGB stars spanning a range of properties. ## 7 Studies of Radio Photospheres Despite the many advantages of masers for probing the atmospheric properties and mass-loss physics of evolved stars, such studies suffer from certain limitations. For example, maser emission is not observed in all AGB stars, and for some AGB stars, the maser emission may at times become too weak to detect. In addition, the interpretation of changes in the maser emission over time can be challenging in cases where the spatial distribution of the maser clumps is not spatially resolved, or where only a single observing epoch is available. In such instances, it can be difficult to distinguish changes resulting from varying physical conditions (e.g., changes in temperature or density) from changes caused by motions of the maser-emitting gas. Fortunately, recent advances in other observational techniques can help to provide complementary information, including observations of thermal continuum emission from the atmospheric region known as the _radio photosphere_. 
The existence of so-called radio photospheres in AGB stars was first established by Reid & Menten (1997). These authors examined a sample of nearby AGB stars and found that the flux densities at cm through far-infrared wavelengths were systematically higher than predicted from a simple blackbody model based on the known stellar effective temperatures. This led Reid & Menten to postulate that the stars must have an optically thick layer (i.e., a radio photosphere) lying at \(r\sim 2R_{\star}\). They developed a model for the radio photosphere in which the opacity arises primarily from interactions of free electrons with neutral H and H\({}_{2}\). For a typical O-rich AGB star, the \(\tau\)=1 surface of the radio photosphere lies at \(r\sim\)2-3 AU and the spectral index of the emission is slightly shallower than that of a blackbody (\(\alpha\approx\)1.86). Figure 4 shows schematically where the radio photosphere lies in relation to the other atmospheric layers in a typical O-rich AGB star. Figure 4: Schematic cross-section of the atmospheric layers in a typical O-rich AGB star. A radio photosphere lies at \(\sim 2R_{\star}\), just interior to the dust formation zone and wind launch region. The properties of the radio photosphere are therefore susceptible to the underlying physical processes that help to launch the wind, including pulsation, convective flows, and shocks. The radio photosphere is also adjacent to the region that gives rise to SiO maser emission in many AGB stars. Adapted from Menten & Reid (1997). Crucially, the radio photosphere resides in the zone between the classical photosphere and the wind launch region. A consequence is that the properties of the radio photosphere will be impacted by the shocks, pulsation, convection, and other key physical processes that are believed to be responsible for helping to transport material into the wind launch region at \(r\sim 10R_{\star}\) (Reid & Menten, 1997; Gray _et al._, 2009; see also Section 2). The emission from radio photospheres is thermal, and its brightness temperature is too low to be studied at ultra-high angular resolution using VLBI techniques. However, the radio photospheres of nearby AGB stars (\(d\lesssim\)200 pc) can be resolved with the longest baseline configurations of the Very Large Array (VLA) and ALMA. Using \(\lambda\)7 mm observations obtained with the legacy VLA, Reid & Menten (2007) and Menten _et al._ (2012) produced the first spatially resolved images of the radio photospheres of 4 nearby AGB stars (the O-rich stars Mira, W Hya, R Leo, and the carbon star IRC+10216, respectively). The three O-rich stars also exhibit SiO masers, and Reid & Menten (2007) used simultaneous observations of the \(\lambda\)7 mm continuum and the SiO maser emission to establish unambiguously for the first time that the SiO masers are distributed in a shell exactly centered on the stellar photosphere. Figure 3: Contour maps of the velocity-integrated SiO \(v\)=1, \(J=1-0\) maser emission in TX Cam, as observed during four different epochs over the span of \(\sim\)7 months. Data from each epoch are indicated by a different color. Based on the proper motions of the maser spots between epochs, it is apparent that the expansion velocity is higher along the SE-NW axis (dashed line) compared with the NE-SW axis, indicating a bipolar geometry. From Gonidakis _et al._ (2013). Another key finding to emerge from the above studies was that some of the radio photospheres showed clear evidence for deviation from spherical symmetry. However, with only a single measurement epoch, it was impossible to discern whether these shapes were static or time-varying. Taking advantage of the order-of-magnitude boost in continuum sensitivity of the upgraded Karl G. Jansky VLA, Matthews _et al._ (2015, 2018) reobserved the stars studied by Reid & Menten (2007) and Menten _et al._ (2012). The resulting observations confirmed that asymmetric shapes are a common feature of radio photospheres. Furthermore, secular shape changes were discernible in observations taken several years apart. This latter finding suggests that the observed non-spherical shapes most likely result from a combination of pulsation and/or convective effects rather than rotation or the tidal effects of a companion. As part of their analysis, Matthews _et al._ (2018) showcased how the interpretation of marginally spatially resolved observations of radio photospheres can be further enhanced through the application of a class of radio imaging techniques known as regularized maximum likelihood (RML) methods. These imaging algorithms have recently been exploited to meet the challenges of VLBI imaging at mm wavelengths using sparse arrays, where traditional CLEAN deconvolution tends to perform poorly (see overview by Fish _et al._ 2016). However, many of the same challenges apply to stellar imaging, and applying RML methods to their \(\lambda\) 7 mm VLA data, Matthews _et al._ found that it was possible to achieve robust, super-resolved images with resolution as fine as \(\sim\) 0.6\(\times\) the diffraction limit. This made it possible to clearly discern brightness asymmetries and non-uniformities in the radio photospheres observed with VLA resolution which were not visible in CLEAN images (Figure 5). The observed photospheric features appear qualitatively consistent with the giant convective cells originally predicted to occur in red giant atmospheres by Schwarzschild (1975) and that are seen in the bolometric intensity images produced by recent 3D hydrodynamic simulations (e.g., Figure 1). The formation and dissipation of these cells is suspected of playing an important role in AGB mass loss (e.g., Höfner & Olofsson 2018). The wavelength dependence of the opacity in radio photospheres (Reid & Menten 1997) implies that shorter wavelengths probe successively deeper layers of the atmosphere. This means that the different wavelength coverages of the VLA and ALMA are highly complementary for the study of radio photospheres, and that observations of a given star at multiple wavelengths can be used to measure the run of temperature with depth in its atmosphere (e.g., Matthews _et al._ 2015; Vlemmings _et al._ 2019; O'Gorman _et al._ 2020). The higher frequencies available at ALMA are also valuable for providing an additional boost in angular resolution. While the VLA's 35 km maximum baselines and highest frequency (\(\lambda\)7 mm receiver) provide a FWHM resolution of \(\theta\sim\)40 mas (sufficient to marginally resolve nearby AGB stars within \(d\lesssim\)200 pc), the combination of ALMA's longest baseline configuration (16 km maximum baselines) and Band 7 (\(\lambda\)0.89 mm) receiver can achieve \(\theta\sim\)12-20 mas, sufficient to supply several resolution elements across a nearby AGB star. 
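As a rough consistency check on these numbers (an order-of-magnitude, diffraction-limit estimate that ignores the details of array configuration and imaging weights), the synthesized beam and the corresponding linear scale at the star follow from
\[
\theta \approx \frac{\lambda}{B_{\rm max}}, \qquad s \approx \theta\,[\mathrm{arcsec}] \times d\,[\mathrm{pc}]\ \mathrm{AU}.
\]
For \(\lambda\)=7 mm on a 35 km baseline, \(\theta \approx 2\times10^{-7}\) rad \(\approx\) 40 mas, i.e., \(s \approx 8\) AU at \(d\)=200 pc; for \(\lambda\)=0.89 mm on a 16 km baseline, \(\theta \approx 11\) mas, i.e., \(s \approx 2\) AU, or a few beams across a radio photosphere several AU in diameter.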
Using one such high-resolution ALMA data set at \(\lambda\)0.89 mm, Vlemmings _et al._ (2017) reported evidence for a "hot spot" on the surface of the AGB star W Hya, which they interpreted as a pocket of chromospheric gas with a brightness temperature \(T_{B}\,\)\(>\)53,000 K. Figure 5: Spatially resolved images of radio photospheres of nearby AGB stars at \(\lambda\) 7 mm, obtained using the Jansky VLA. The images were produced using RML imaging techniques, which enabled a modest level of super-resolution (see text). Adapted from Matthews _et al._ (2018) and Matthews _et al._, in prep. The presence of such hot plasma associated with such a cool star (\(T_{\rm eff}\)\(\approx\)2300 K) is confounding, and would seem to require a combination of strong shock heating and long post-shock cooling times, at odds with current pulsation and convection models. On the other hand, a re-analysis of the same data by Hoai _et al._ (2022a) seems to show no evidence for the presence of this hot spot on W Hya, suggesting the possibility that its origin may have been an imaging artifact. Follow-up observations of W Hya and other similar stars are clearly of interest to investigate these findings. ## 8 Prospects for the Study of AGB Mass Loss with Next Generation Radio Arrays Section 7 described several examples of recent results that illustrate what is possible to achieve from spatially resolved imaging of the thermal continuum of evolved stars at cm and (sub)mm wavelengths using current state-of-the-art observational facilities. While these results are both groundbreaking and scientifically valuable, we can anticipate an enormous leap in such capabilities in the coming decade thanks to planned next-generation radio facilities, including the Next Generation Very Large Array (ngVLA; Murphy 2018) and the Square Kilometer Array (SKA; e.g., Schilizzi 2004; Braun _et al._ 2019). The ngVLA will be built in the United States and Mexico, and its "Main Array" is expected to have \(\sim\)218 dishes of 18 m diameter spread over an area several hundred km across. With its combination of frequency coverage (1.2-116 GHz), thermal sensitivity (\(\sim\)0.2-0.7 \(\mu\)Jy beam\({}^{-1}\) hr\({}^{-1}\)), and angular resolution (\(\sim\)1 mas at 100 GHz), the ngVLA will be a game-changer for stellar imaging (e.g., Figure 6) and for the study of evolved stars and their CSEs over all relevant spatial scales, ranging from \(\ll R_{\star}\) to \(\gtrsim 10^{6}R_{\star}\) (Matthews & Claussen 2018; Carilli _et al._ 2018; Akiyama & Matthews 2019). At the highest angular resolutions of the ngVLA Main Array, some examples of science related to AGB stars and their mass loss that will be enabled include: * The ability to resolve radio surfaces out to \(d\gtrsim\)1 kpc (thus expanding samples of resolved AGB stars by \(\times\)300). * Resolution of radio surfaces over two decades in frequency for nearby stars (\(d\lesssim\)200 pc). * Simultaneous, astrometrically registered studies of photospheric continuum and multiple maser lines. * The ability to undertake detailed comparison with (contemporaneous) optical/infrared images from facilities such as CHARA and the VLT (see Paladini _et al._ 2018; Ridgway _et al._ 2019). 
One of the most exciting prospects of the ngVLA for stellar science will be the ability, for the first time, to make "movies" of the evolving radio photospheres of nearby stars over the course of their pulsation cycles and to quantitatively characterize changes in stellar properties over time (Akiyama & Matthews 2019). Currently, images of radio photospheres made with the VLA and ALMA have insufficient angular resolution and imaging fidelity to discern subtle changes in parameters such as stellar radius and brightness temperature with time, or to chronicle the evolution of surface features that are predicted to occur over timescales of weeks or months (see Figure 1; see also, e.g., Figure 3 of Freytag _et al._ 2017). However, as shown by Akiyama & Matthews (2019), this will change dramatically with the ngVLA. Indeed, time-lapse movies of the thermal emission should provide exquisite levels of detail comparable to what can now be seen in time-lapse movies of SiO masers with VLBI resolution (cf. Section 6.4.2). Furthermore, it is worth noting that simultaneous studies of both thermal and maser emission should provide unprecedented levels of detail for helping to reveal further insights into the mass-loss process of these dynamic and fascinating stars. The SKA mid-frequency array will be built in the Karoo desert of South Africa, and its initial design (SKA1-Mid) is expected to cover frequencies from 350 MHz to 15.4 GHz. Because of its shorter maximum baselines (\(\leq\)150 km) and more limited frequency coverage, SKA1-Mid will not be able to rival the ngVLA for spatially resolved stellar imaging, though it will be able to moderately resolve radio photospheres at \(d\lesssim\)200 pc. In addition, it will be a powerful tool for obtaining sensitive radio light curves of hundreds of evolved stars. As shown by Menten & Reid (1997; see also Reid & Goldston 2002), the measurement of radio light curves supplies valuable information on the amplitudes of shocks in AGB star atmospheres, even in the absence of spatially resolved measurements. However, radio light curves are currently available for only a handful of AGB stars. Obtaining useful light curves requires a combination of good sensitivity (typical flux densities are \(\lesssim\)1 mJy in cm bands), accurate calibration (better than \(\sim\)10%), and both frequent and long-term temporal sampling (every 1-2 weeks over timescales of many months). For these reasons, such measurements are technically and logistically challenging with current arrays. However, the SKA should be able to produce the most accurate radio light curves for AGB stars to date, with quasi-simultaneous coverage across a wide range of frequencies (see also Marvel 2004). 
However, rigorously testing such models requires access to observations that spatially resolve the stellar atmosphere and wind launch region on scales (\(r\lesssim\)10 AU), and temporally resolve relevant dynamical timescales (which can span days to months to years). In this review, I have highlighted examples of recent cm and (sub)mm wavelength observations, including observations of molecular masers and thermal continuum emission, that are making progress in these areas and helping to advance our understanding of late-stage stellar mass loss. Even greater advances are anticipated in the next decade when a new generation of radio telescopes, including the ngVLA and SKA, comes online. Figure 6: Simulated observations of a radio photosphere at \(\lambda\)7 mm (46 GHz) for an AGB star at \(d\)=200 pc. A bolometric surface intensity model from Freytag _et al._ (2007; left) was used as a proxy for the expected appearance of the thermal radio emission. The second image shows a simulated observation of the model with the current VLA ‘A’ configuration (35 km maximum baselines). The right two panels show 1-hour simulated observations with the ngVLA Main Array (1000 km maximum baselines), imaged using two different methods: traditional CLEAN, and a regularized maximum likelihood (sparse modeling) method. For additional information, see Akiyama & Matthews (2019). Acknowledgements: LDM was supported in part by grant AST-2107681 from the National Science Foundation.
2306.08134
Uncovering and Exploiting Hidden APIs in Mobile Super Apps
Mobile applications, particularly those from social media platforms such as WeChat and TikTok, are evolving into "super apps" that offer a wide range of services such as instant messaging and media sharing, e-commerce, e-learning, and e-government. These super apps often provide APIs for developers to create "miniapps" that run within the super app. These APIs should have been thoroughly scrutinized for security. Unfortunately, we find that many of them are undocumented and unsecured, potentially allowing miniapps to bypass restrictions and gain higher privileged access. To systematically identify these hidden APIs before they are exploited by attackers, we developed a tool APIScope with both static analysis and dynamic analysis, where static analysis is used to recognize hidden undocumented APIs, and dynamic analysis is used to confirm whether the identified APIs can be invoked by an unprivileged 3rdparty miniapps. We have applied APIScope to five popular super apps (i.e., WeChat, WeCom, Baidu, QQ, and Tiktok) and found that all of them contain hidden APIs, many of which can be exploited due to missing security checks. We have also quantified the hidden APIs that may have security implications by verifying if they have access to resources protected by Android permissions. Furthermore, we demonstrate the potential security hazards by presenting various attack scenarios, including unauthorized access to any web pages, downloading and installing malicious software, and stealing sensitive information. We have reported our findings to the relevant vendors, some of whom have patched the vulnerabilities and rewarded us with bug bounties.
Chao Wang, Yue Zhang, Zhiqiang Lin
2023-06-13T20:55:10Z
http://arxiv.org/abs/2306.08134v1
# Uncovering and Exploiting Hidden APIs in Mobile Super Apps ###### Abstract. Mobile applications, particularly those from social media platforms such as WeChat and TikTok, are evolving into "super apps" that offer a wide range of services such as instant messaging and media sharing, e-commerce, e-learning, and e-government. These super apps often provide APIs for developers to create "miniapps" that run within the super app. These APIs should have been thoroughly scrutinized for security. Unfortunately, we find that many of them are undocumented and unsecured, potentially allowing miniapps to bypass restrictions and gain higher privileged access. To systematically identify these hidden APIs before they are exploited by attackers, we developed a tool APIScope with both static analysis and dynamic analysis, where static analysis is used to recognize hidden undocumented APIs, and dynamic analysis is used to confirm whether the identified APIs can be invoked by unprivileged 3rd-party miniapps. We have applied APIScope to five popular super apps (i.e., WeChat, WeCom, Baidu, QQ, and TikTok) and found that all of them contain hidden APIs, many of which can be exploited due to missing security checks. We have also quantified the hidden APIs that may have security implications by verifying if they have access to resources protected by Android permissions. Furthermore, we demonstrate the potential security hazards by presenting various attack scenarios, including unauthorized access to any web pages, downloading and installing malicious software, and stealing sensitive information. We have reported our findings to the relevant vendors, some of whom have patched the vulnerabilities and rewarded us with bug bounties.
APIScope then dynamically executes the identified APIs to confirm whether they are true APIs, and further classifies them into checked and unchecked ones based on whether they can only be invoked by the 1st-party miniapps, using _Dynamic API Classification_. We have tested APIScope with five popular super apps: WeChat, WeCom, Baidu, QQ, and TikTok. Our evaluation results show that all the tested super apps contained hidden APIs. Interestingly, our study found hidden APIs in different categories, with some super apps having more hidden APIs than documented ones. For example, the Payment API category of WeChat contains 28 hidden APIs, which is significantly more than its documented ones (i.e., only one). We also measure the usage of hidden APIs in both 1st-party miniapps and 3rd-party miniapps. We found that the use of undocumented APIs is common among both 1st-party miniapps and 3rd-party miniapps regardless of their category. Evidently, not all hidden APIs pose security risks when misused. Therefore, our objective was to dive into the security implications of hidden APIs. Specifically, we focused on the hidden APIs that lack security checks but can access sensitive Android OS resources. To achieve this, we proposed the use of dynamic analysis techniques. Our dynamic analysis approach involves identifying APIs that call native APIs, which can access sensitive resources. We achieved this by hooking APIs that access sensitive resources and monitoring their use by unchecked and undocumented APIs. After conducting our investigation, we found that WeChat has 39 hidden unchecked APIs (7.77%) that invoke Android APIs protected by permissions. Similarly, WeCom has 40 (6.75%), Baidu has 8 (7.61%), TikTok has 32 (26.23%), and QQ has 38 (12.88%) such APIs, which can have security risks. To further validate our findings, we conducted several attack case studies by developing a number of malicious miniapps using these hidden APIs. Specifically, in WeChat, we developed a malicious miniapp that exploits the hidden private_openUrl API to access arbitrary malicious content without detection by the super app. Additionally, by using the installDownloadTask hidden API, we developed a miniapp that can download and install harmful Android apps surreptitiously. Malicious miniapps can also pilfer a user's sensitive information: our demonstrations use hidden APIs such as captureScreen, which enables malicious miniapps to steal screenshots, getLocalPhoneNumber, which permits theft of the user's phone number, and searchContacts, which facilitates the theft of the user's contact information. **Contributions.** We make the following contributions: * We are the first to discover that super apps may provide hidden, i.e., undocumented, APIs (for the 1st-party miniapps), and that those hidden APIs lacking permission checks can be exploited by 3rd-party miniapps for privileged accesses. * We propose APIScope to systematically identify and classify the hidden APIs in super apps, with two novel techniques to statically recognize the APIs and dynamically execute and classify them. * We implement APIScope, and evaluate it with 5 super apps and find all of them containing hidden APIs, some of which can be exploited by malicious 3rd-party miniapps. 
We have made the responsible disclosure to their vendors, and received bug bounties from some of them. ## 2. Background Miniapps are programs that run on top of host apps instead of directly on the operating system. Host apps have to function like an operating system and provide resources (e.g., location, phone numbers, addresses, and social network information) to miniapps through APIs. Mobile super apps are organized in a layered architecture, with each layer focusing on different aspects like portability, security, and convenience, but working together to support miniapp execution within host apps, as shown in Figure 1: * **Mini-Application Layer**, which is the top layer of a super-app runtime. All miniapps, including 1st-party and 3rd-party miniapps, are located in this layer. To prevent one miniapp from accessing resources of other miniapps, the host app creates an isolated process for each miniapp. If privileged access is given to 1st-party miniapps, it must be controlled and checked to prevent 3rd-party miniapps from using it. Typically, miniapps are implemented using JavaScript (JavaScript, 2018). * **JavaScript Framework Layer** provides APIs for resource access and management, which are consumed by miniapps in the Application Layer. These APIs allow miniapps to access resources (such as location-based services) and manage UI elements (such as opening a new UI window). The JavaScript Framework Layer is also implemented using JavaScript. * **Customized V8 Layer**, which provides support for native C/C++ libraries such as WebGL to power the execution of miniapps. It also acts as a bridge between the JavaScript Framework layer and the lower layers. When miniapps call APIs such as wx.getLocation, the Framework layer sends the API name and parameters to the Customized V8 layer, which then passes the request to the underlying layers. This layer is usually implemented using C/C++. * **Service Abstraction Layer**, which provides an interface to access services from either the super apps (e.g., user account information) or the underlying OS (e.g., Bluetooth, location-based services). In the case of the wx.getLocation API, this layer communicates with the host app using IPC to invoke the Java API getSystemService(LOCATION_SERVICE) to retrieve the current location. This layer is implemented using a combination of Java and C/C++ code for the Android platform. Figure 1. Architecture of Super App Runtime in Android ## 3. Motivation and Problem Statement This section describes the motivation of this work by providing some key observations in §3.1, and then defines the problem, the scope, and the threat model in §3.2. ### Key Observations As alluded to earlier, when manually inspecting the implementation of some of the 1st-party miniapps offered by WeChat, we found that other than the public APIs that all the miniapps can access without restrictions, the 1st-party miniapp Tencent Doc actually uses some undocumented APIs (e.g., openUrl for opening arbitrary URLs). Moreover, the designers of WeChat do not make these APIs public (their documentation does not even mention openUrl), and have placed security checks to prevent openUrl from being accessed by arbitrary miniapps. For example, whenever a 3rd-party miniapp attempts to invoke openUrl, WeChat will throw an insufficient-permission exception (i.e., "fail: no permission") and terminate its execution. 
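To make the calling convention concrete, the sketch below shows what such an attempt might look like from a 3rd-party miniapp (hypothetical code: it assumes the undocumented API is dispatched through the same wx namespace and success/fail callback style used by WeChat's documented APIs, and the exact error string varies across versions):

```javascript
// Hypothetical 3rd-party miniapp code attempting the undocumented openUrl API.
wx.openUrl({
  url: 'https://example.com/page',           // arbitrary page the miniapp wants opened
  success(res) {
    console.log('openUrl succeeded:', res);  // only reachable for privileged 1st-party miniapps
  },
  fail(err) {
    // WeChat's permission check rejects the call for 3rd-party miniapps,
    // returning an error along the lines of "fail: no permission".
    console.log('openUrl rejected:', err.errMsg);
  }
});
```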
The use of openUrl in the 1st-party Tencent Doc miniapp prompted us to investigate the possibility of other hidden APIs offered by WeChat without proper security checks. This inspired us to explore the feasibility of identifying and exploiting these APIs, but we faced two challenges: (i) identifying the hidden APIs and (ii) properly invoking them to test for potential vulnerabilities. Through further exploration, we made two key observations to address these challenges. **Observation-I: Undocumented API Recognition.** By manually inspecting the implementation of WeChat, we found that multiple suspicious undocumented functions are co-located with their documented APIs. That is, those functions and the public APIs are located in the same super app packages, and their implementations look similar to those of the documented APIs (e.g., they have similar function signatures, parameter types, and return value types). We start by inferring whether those functions are indeed undocumented APIs, since intuitively the public APIs and undocumented APIs are both APIs, and the developers would have followed the same practice to implement them. Unsurprisingly, we found the implementation of openUrl, which confirms our observation. In Figure 2, we show 3 API implementations of WeChat. Although the code is highly obfuscated (where the names of the classes and methods are replaced with meaningless letters, such as "a" and "b"), we can still observe some invariants: WeChat's public API getLocation (lines 1-13) and its undocumented API openUrl (lines 14-25) both have the same parameter types and return types, as well as the same superclass (i.e., class b). As such, we can use these invariants (e.g., the superclass of the API, the parameters of the API) collected from the public APIs to search for possible undocumented APIs. For instance, as shown in Figure 2, we identified another function private_openUrl (lines 28-38) that has the same function signature, which is very likely an undocumented API. **Observation-II: Undocumented API Invocation.** Although there may be undocumented APIs (e.g., private_openUrl) provided by WeChat, we have to find a way to invoke them (if they are indeed APIs). Interestingly, when we directly invoke undocumented APIs such as private_openUrl in a miniapp, we obtain an error, "fail: not supported", which is different from the error we observed when invoking openUrl ("fail: no permission"). As such, we infer that the accessibility of the API private_openUrl is not the same as that of openUrl (since the observed error messages are different), and there may be a way to invoke it. We therefore further inspected the normal invocation of the documented APIs, seeking to obtain insights from the process. To be more precise, as described in §2, the JavaScript Framework Layer acquires the invocation request during a regular API call and transfers it to the lower layers via the interfaces exposed by the Customized V8 Layer. In Figure 3, we provide a code snippet illustrating the API invocation chain of WeChat, where the invocation request for the getLocation API (line 3 in the top-left frame) is eventually passed to the NativeGlobal.invokeHandler function (line 11 in the bottom-left frame), which in turn conveys the API invocation request to the underlying layers. Notably, the NativeGlobal.invokeHandler function receives three inputs: the API name (e.g., getLocation), the API parameters, and a callback function ID (which enables the API to manage the asynchronous call). 
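The sketch below illustrates this idea (hypothetical code: the probe wrapper is not part of WeChat's framework, and whether the parameters are passed as an object or as a serialized JSON string depends on the framework version); the same NativeGlobal.invokeHandler entry point that relays documented calls can be handed an arbitrary API name:

```javascript
// Minimal sketch of probing the bridge directly from miniapp JavaScript.
// NativeGlobal.invokeHandler(apiName, params, callbackId) is the interface
// described above; the callback bookkeeping is simplified to a bare counter.
let nextCallbackId = 1;

function probe(apiName, params) {
  const callbackId = nextCallbackId++;
  // Hand the request straight to the Customized V8 bridge, just as the
  // JavaScript framework does for documented APIs such as getLocation.
  const result = NativeGlobal.invokeHandler(apiName, JSON.stringify(params), callbackId);
  // Immediate results (or error strings such as "fail: no permission" or
  // "fail: not supported") come back here; asynchronous results arrive later
  // through the callback registered under callbackId.
  console.log(apiName, '->', result);
  return result;
}

probe('getLocation', { type: 'wgs84' });                   // documented baseline
probe('private_openUrl', { url: 'https://example.com' });  // undocumented candidate
```

A probe of this form is essentially what the dynamic classification step described in §5.2 automates.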
Given that NativeGlobal.invokeHandler can deliver the normal invocation request to the underlying layers, we conclude that it also has the capabilities to deliver undocumented API invocation requests. Therefore, we feed the API name private_openUr1 and its parameter (which is a URL) to the interface and let it pass the API name and the URL to the underlying layers. Interestingly, we find that the underlying layers handle the passed API name and the parameter as normal API invocations and further pass the invocation requests to the host apps. As shown in Figure 4, while WeChat restricts the undocumented APIs to be accessed by mini-apps, unfortunately we find that not all undocumented APIs are protected through security checks. In particular, WeChat has enforced the security check for the undocumented API openUr1, but it does not add Figure 2. APIs implementations of WeChat. the security checks for the undocumented API private_openUr1, which has the exact same functionalities as openUr1. Also, the API name and parameters are not obfuscated since they have to be passed to lower layers. ### Problem Statement and Scope Since our manual investigation has revealed that there are indeed hidden APIs in the super app platform and some of them can be exploited, the goal of this work is to develop techniques to uncover them. More specifically, we need to recognize the hidden APIs based on how documented APIs are implemented and executed, and meanwhile test them to determine whether they can be invoked by 3rd-party miniapps to bypass security restrictions (or those APIs themselves may have vulnerabilities). Please note that we do not consider all those 3rd-party invocable APIs as exploitable, since whether an API is exploitable depends on the functionalities of the APIs (e.g., the API implements privileged operations). Also, since there are multiple super apps available today, ideally, we would like to develop generic techniques to cover them all. However, our observation is heavily based on the miniapp run-time architecture presented in Figure 1. Therefore, the super apps that do not follow this architecture, e.g., do not use V8 engine to execute their miniapp code, will be out of our scope. Finally, because of the convenience and also our expertise, we focus on the super apps running on Android platform, though in theory our approach should also work for the iOS platform. ### Threat Model As previously discussed, our objective is to develop techniques for detecting hidden APIs that lack security checks before a malicious app exploits them. In this context, the attacker is a malware that has been installed on the user's mobile device. We will not delve into the details of how this malware can be installed, as we believe it is practical to assume that super apps are not aware of such types of malware until we report our findings to them. It is worth noting that previous research on super apps has also made similar assumptions (Zhou et al., 2018). Undocumented APIs refer to functions or APIs that are not included in the official documentation, regardless of whether it is in English or Chinese. An attacker could acquire knowledge about the existence of these hidden APIs by reverse engineering the super app client or by reading technical blogs on the internet. Specifically, undocumented APIs may have access to sensitive resources that are safeguarded by Android OS. If an attacker exploits these APIs, they can launch attacks against the victim users. ## 4. 
Challenges and Insights **(I) Challenges in API Recognition.** The first step of our APIScope is to identify undocumented APIs when given a host app. Intuitively, it sounds trivial, since when given an API, we could compare it with the APIs released on the official documentation to decide whether it is documented or not. However, it is challenging to determine whether an internal function or an interface is an API. For instance, there are 3,702 functions and interfaces implemented in JavaScript, not to mention those implemented in 92 native C/C++ libraries, and 56,492 Java classes in WeChat's latest version. Note that we do not have to consider the functions at lower-layer's implementations (i.e., any layer below the JavaScript framework), since the hidden APIs are not exposed at these layers. Obviously, we cannot directly treat all these functions as APIs. Also, although for a specific implementation of host apps (e.g., WeChat), simple pattern matching approaches can be applied to Figure 4. The Workflow of API invocations. Public API invocation getLocation (green line); Checked Undocumented API openUr1 (red line); Unchecked Undocumented API private_openUr1 (purple line). Figure 3. An Example of WeChat API Invocation At JavaScript Framework Layer. recognize APIs. For example, when implementing the callbacks of the APIs, WeChat uses android.webkit.ValueCallback at the Service Abstraction layer to handle all the callback results. From the callbacks, we can locate the corresponding APIs and extract patterns to pinpoint the rest APIs. However, there are multiple super apps, each of which could have different implementations. For example, unlike the implementation of WeChat, TiKTok uses com.he.jsbinding.JsContext.ScopeCallback at the Service Abstraction layer to handle the callback results of their APIs, and the pattern for WeChat will fail when dealing with TiKTok. Moreover, such a pattern-matching approach requires recognizing callbacks first, which may be challenging due to the code obfuscation. As discussed in SS3.1, the miniapp is executed on top of the super apps (e.g., Android apps), which is often heavily obfuscated. It is hard to recognize callbacks statically unless we fully understand the obfuscated code, and as such, we need a more obfuscation-resilient approach instead of simple pattern matching. **Insights.** We notice that there exist some invariants such as the method signatures of public APIs and their superclasses in the API implementations, as illustrated in SS3.1 based on super app WeChat (e.g., every API has the same superclass a, though this name is obfuscated; every public API must contain the name of the API for the references by the miniapps, and this cannot be obfuscated but can be easily recognized). As such, we can first extract these API invariants based on these public API implementations, from which to recognize the rest of the APIs. This process can be automated since it is easy to identify these API invariants when the implementation of public APIs is provided. **(II) Challenges in API Classification.** Once we have identified all these hidden APIs, we still need to further classify them into different categories and determine whether they are invocable (when there is no security check). It will be very challenging if we only use static analysis to decide this, and thus we need to rely on dynamic analysis to dynamically invoke them. However, to invoke a hidden API, we still need to recognize the interface that can communicate with the underlying layers. 
Although we already know that the interface that communicates with the underlying layers takes the API name as its input (as described in §3.1), it is still challenging to know whether a given interface accepts the API name as its input before we actually execute it (due to the obfuscated JavaScript code). Meanwhile, although multiple dynamic tools are available for JavaScript, they cannot be applied to our case directly due to the highly customized JavaScript framework implementations. For example, most JavaScript analysis tools (e.g., Jalangi2 (Jalangi, 2018)) are designed for traditional web browsers. They cannot run with the super apps since the offered APIs are different. Moreover, most of these tools need to instrument the testing instances, which involves the modification of the testing instances. In our case, the testing instances are the miniapps (not web applications), which usually have integrity checks and cannot be modified easily. **Insights.** To invoke the API for its behavior classification, we need to find the interface, e.g., NativeGlobal.invokeHandler as shown in Figure 3. Interestingly, to identify this interface, we can monitor how a public API is executed, e.g., how it is invoked (its name, parameters), and when it is passed across the boundary of the layers. More specifically, we notice that we can use function trace analysis to identify interfaces such as NativeGlobal.invokeHandler, since the API execution starts from the invocation, and ends at the interface boundary. By tracing all of the function executions with their parameters and then identifying them based on the use of the API name, which is passed as a parameter, we can automatically identify the interface, which is typically the last invocation point in the JavaScript layer. With the identified invocation point, we can then feed it with different API names and invoke them to classify further (e.g., whether they can be invoked by 3rd-party miniapps). ## 5. APIScope As shown in Figure 5, our developed APIScope consists of two phases of analysis (static analysis first and then dynamic analysis), with the following two key components: * **Static API Recognition (§5.1).** This component takes the binary code of super apps (i.e., APKs) and the list of the official APIs in the documentation as input, and produces the undocumented APIs as output. At a high level, it first decompiles the APKs with Soot (Soot, 2018), automatically extracts the invariants based on the public APIs, and then uses the invariants to recognize the hidden APIs from the implementations of super apps. * **Dynamic API Classification (§5.2).** This component takes the hidden APIs as input, and classifies them into three different categories: unchecked hidden APIs (exploitable by 3rd-party miniapps), checked APIs (available only to 1st-party miniapps), and non-APIs, as the final output. At a high level, it first uses the Test Case Generator to produce two types of test cases: one for API invocation identification, executed by a lightweight tracing engine for the monitored execution, and the other for API classification. With these test cases, APIScope eventually identifies the interfaces as well as the categories of the APIs. Figure 5. APIScope Architecture ### Static API Recognition To recognize APIs, APIScope first needs to extract the invariants based on the decompiled code of public APIs. With the invariants, it then recognizes the hidden APIs. Therefore, it is a two-step process. 
In the following, we describe these two steps in greater detail. **Step-I: Automatic Invariants Extraction.** APIScope first needs to extract the invariants based on the decompiled code of the public APIs from the implementations of the super apps. In particular, when given an API, APIScope will aggressively identify as many invariants as possible from the implementation, and these invariants include: (i) the method signatures (e.g., the return type, the number of parameters, and the parameter types); (ii) the superclass; (iii) the super packages (e.g., in the super app Baidu, com.baidu.swan.apps is the super package of com.baidu.swan.apps.scheme.actions.f as shown in Figure 6); and (iv) their callers. Again, they are invariants because they do not change across the API implementations (both public and undocumented) of a specific super app, though the specific content of each invariant may change across super apps. For instance, consider the superclass invariant of APIs in WeChat: when comparing any two implementations of the provided APIs (e.g., getLocation and private_openUrl), we can easily recognize that they both extend the superclass a, as shown in Figure 2; similarly, the APIs provided by Baidu all extend the same superclass aa, as shown in Figure 6. **Step-II: Undocumented API Recognition.** With the invariants, APIScope then recognizes the undocumented APIs. In particular, it iterates over each of the function implementations again, matching the invariants extracted; if a function matches all the invariants found in the public APIs and has not yet been added to the undocumented set, this function's implementation is an undocumented API. That is, we use quite restrictive patterns that need to exist in all public API implementations for a particular super app, and a function must contain all of these invariants in order to be considered an undocumented API. Regarding how exactly APIScope identifies them, we present a detailed algorithm in Appendix §A for the readers of interest.
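To make these two steps concrete, the following minimal sketch (plain JavaScript, operating on a hypothetical JSON export of the decompiled classes rather than on real Soot output) illustrates how invariants collected from the public API implementations can be matched against every other implementation; all class names, fields, and values below are illustrative assumptions, not actual WeChat internals.

```js
// Hypothetical decompiler output: one record per class that handles an API name.
const documentedApis = ['getLocation', 'chooseImage'];   // from the official docs
const classes = [
  { name: 'com.tencent.mm.plugin.appbrand.jsapi.x', superclass: 'a',
    apiName: 'getLocation', params: ['JSONObject', 'int'], returns: 'void' },
  { name: 'com.tencent.mm.plugin.appbrand.jsapi.y', superclass: 'a',
    apiName: 'private_openUrl', params: ['JSONObject', 'int'], returns: 'void' },
];

// Step-I: collect the invariants shared by every documented API implementation.
const pubs = classes.filter(c => documentedApis.includes(c.apiName));
const invariants = {
  superclass: new Set(pubs.map(c => c.superclass)),
  signature:  new Set(pubs.map(c => c.returns + '/' + c.params.join(','))),
};

// Step-II: a class is an undocumented API candidate iff it matches the invariants
// extracted above but its API name does not appear in the documentation.
const undocumented = classes.filter(c =>
  !documentedApis.includes(c.apiName) &&
  invariants.superclass.has(c.superclass) &&
  invariants.signature.has(c.returns + '/' + c.params.join(',')));

console.log(undocumented.map(c => c.apiName));  // -> [ 'private_openUrl' ]
```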
### Dynamic API Classification With the identified undocumented APIs, next we need to invoke each of them to decide whether they can be exploited by attackers, based on the error messages obtained while executing the corresponding test cases for each API. This is a three-step process, starting from test case generation, followed by API invocation identification using function trace analysis, and finally the API classification through dynamic API probing. **Step-I: Test Case Generation.** In this step, we use our test case generator to produce test cases. The test cases are JavaScript code snippets that contain the APIs to be invoked (with their parameters configured). For example, wx.getLocation({type: "wgs84"}) is a test case for testing the API wx.getLocation (how such test cases are invoked will be described in API invocation identification). There are two types of test cases: one for API invocation identification and the other for API classification. The goal of API invocation identification is to execute the documented API, and use the function trace analysis to identify the invocation point. Therefore, we only need to generate a few test cases (which are the test cases of documented APIs). However, in API classification, which invokes the undocumented APIs and categorizes them based on their outputs, we need to produce at least one valid test case for each undocumented API (to obtain the outputs). In particular, since each API may accept one or multiple parameters, to produce a valid test case we have to identify all the types (e.g., Integer, Boolean) of the parameters, through which we can further feed each API a list of parameter instances in the right order (e.g., testAPI(true, 1234)): * **Parameter Type Extraction.** While APIScope could identify the types of parameters through documentation analysis, such an approach cannot identify the types of parameters for undocumented APIs. Therefore, we need a more reliable approach to ensure that we can extract parameter types for both documented and undocumented APIs. Our idea is to analyze the implementations of the APIs, since we have already identified the implementations for both documented and undocumented APIs as described in §5.1. For instance, in WeChat's implementation, we notice that the types of the parameters of an API can be recognized by inspecting the methods invoked on JSON instances; e.g., in the implementation of getLocation, a JSON object invokes the method optString("paramname", paramvalue), which indicates that getLocation has a "paramname" parameter of type String. Similarly, if the API accepts a Boolean value as its parameter, there will be a method optBoolean("paramname", paramvalue) in its implementation. * **Parameter Instance Generation.** The parameters must be instantiated before being fed into the APIs. We use a pre-defined, template-based approach to instantiate the parameters. At a high level, the template specifies the appropriate values for different types that can be used to produce the parameters (e.g., "1" and "0" are used when the "type" of the parameter is "number", and "test" is used when the "type" of the parameter is "string"). For instance, the WeChat API showToast (which shows a message to the user) has two parameters, title and duration, with types string and number, respectively. As such, we produced an instance with the predefined template, where title is set to "test" and duration is set to "1". Using such a template method, we successfully instantiated all the parameters. Figure 6. APIs implementations of Baidu. Note that lines 1 – 12 contain a documented API, and lines 14 – 25 contain an undocumented API. * **Parameter Order Permutation.** Although we have instantiated the parameters, we still do not know the order of those parameters for the undocumented APIs, as the parameters in the Service Abstraction layer are all encapsulated in JSON objects. Therefore, we have to properly order the parameters, and we use a brute-force approach. For example, true and 1234 are two parameters of testAPI, which could have two possible combinations: testAPI(true, 1234) and testAPI(1234, true). We just assume that all those combinations are valid and invoke them one by one (the invalid ones will be filtered out during the API classification, which will be described later). Given that one API accepts no more than 4 parameters (which results in at most 24 combinations), according to our static analysis of the code, we believe such a brute-force approach is acceptable. Specifically, we would like to clarify certain technical details. First, during our dynamic analysis, we only explore a limited range of inputs. This is because dynamic tracing does not require a broad range of inputs to expose hidden APIs. Additionally, the generated test cases are sufficient for testing whether a hidden API is protected by security checks.
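As a concrete illustration of this step, the sketch below (plain JavaScript; the API name, its parameter types, and the template values are illustrative assumptions) instantiates the extracted parameter types from a template and enumerates all orderings, yielding one candidate test case per permutation:

```js
// One recognized (possibly hidden) API with the parameter types recovered from its
// optString/optBoolean/optInt calls; the names and types here are made up.
const api = { name: 'testAPI', params: [{ type: 'boolean' }, { type: 'number' }] };

const TEMPLATE = { string: '"test"', number: '1', boolean: 'true' };  // predefined values

// All orderings of the instantiated parameters (brute force, at most 4! = 24).
function permutations(xs) {
  if (xs.length <= 1) return [xs];
  return xs.flatMap((x, i) =>
    permutations(xs.slice(0, i).concat(xs.slice(i + 1))).map(p => [x, ...p]));
}

const values = api.params.map(p => TEMPLATE[p.type]);
const testCases = permutations(values).map(vs => `wx.${api.name}(${vs.join(', ')})`);

console.log(testCases);  // -> [ 'wx.testAPI(true, 1)', 'wx.testAPI(1, true)' ]
```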
In other words, as long as valid inputs are provided to the API, our tool can trigger the API if there are no security checks. If there are security checks, we can observe errors. Our objective is not to enumerate all possible inputs, as we are not fuzzing the actual hidden API. Second, hidden APIs may require complex parameter types, such as JSON-objects. These complex parameter types are combinations of other basic parameter types (e.g., integer, string), and can be recursively derived until they become primitive types. For instance, an object may contain a string, an integer, and a boolean. We can simply inflate each parameter based on its respective parameter type. As APIs implemented in the Service Abstraction Layer lack states or context, it is unnecessary to determine their execution state within this layer. Our testing process involves providing our tool with a code snippet containing the API to be tested, which is sufficient for our purposes. The JavaScript Framework Layer handles most of the checks, so the API invocation is checked before its order or dependency state is resolved. **Step-II: API Invocation Identification.** Next, APIScope needs to execute the generated test cases on top of our customized V8 engine to identify how the documented API is invoked, so that it can later similarly invoke the undocumented ones. Intuitively, when we test a specific API, we need to compile and produce a testing miniapp that contains the API for our test. However, this approach is not scaled and can slow down our testing performance. Interestingly, we notice that we can let the V8 engine directly inject the JavaScript code into the JavaScript Framework Layer (the V8 engine has a function named script, which accepts JavaScript code as input, and injects the code for the JavaScript Framework Layer to execute). Since the JavaScript code is injected into the JavaScript Framework layer, the super apps will handle the code as they handle the code in a regular miniapp. Also, in most cases, V8 Engine has a built-in Profiler, but the super apps do not directly expose any interfaces for developers to use. Meanwhile, although it is true that different platforms may customize the V8 Engine to enable their desired functionalities, they will not intentionally remove the built-in Profiler since it is also helpful for their own debugging purposes. Therefore, as long as we can find a way to invoke Profiler, we will be able to collect the traces. Fortunately, we can use Frida [16], an Android hooking tool, to dynamically instrument the V8 Engine to invoke startProfiling of Profiler and let it start profiling, and collect the function traces of documented API execution. With the collected function traces, we then present how to find the desired interface using function trace analysis, a standard technique widely used in program analysis. As discussed in SS3.1, API invocation is a complicated process involving multiple layers. Fortunately, the Profiler only runs inside the JavaScript Framework layer, and we can just monitor the function traces produced at this layer since we aim to identify how to invoke an API from the JavaScript layer. In particular, our analysis starts from the API of our interests (e.g., wx.getLocation), identifies all the functions involved based on the dependencies of parameter and API names, and eventually identifies the last invocation function, e.g., NativeGlobal.invokeHandler (see Figure 3), which is the desired interface we aim to discover. 
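As a minimal illustration of this function trace analysis, the following sketch (plain JavaScript over a hand-made trace; real traces would come from the V8 profiler) selects, among the recorded calls whose arguments mention the API name, the last one as the candidate bridge interface:

```js
// Hand-made trace recorded while a *documented* test case (wx.getLocation) runs;
// each entry is a traced function with the stringified arguments it received.
const apiName = 'getLocation';
const trace = [
  { fn: 'wx.getLocation',             args: ['{"type":"wgs84"}'] },
  { fn: 'C.dispatch',                 args: ['getLocation', '{"type":"wgs84"}'] },
  { fn: 'NativeGlobal.invokeHandler', args: ['getLocation', '{"type":"wgs84"}', '1'] },
];

// Keep the calls that depend on the API name, and take the last one still inside
// the JavaScript Framework layer as the interface used to cross the layer boundary.
const related = trace.filter(e => e.args.some(a => a.includes(apiName)));
const bridge = related[related.length - 1];

console.log(bridge.fn);   // -> 'NativeGlobal.invokeHandler'
```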
Specifically, the dependencies form a chain, and we build such dependencies based on the parameters that are fed into the functions (we can monitor the changes of the parameters of the functions). For example, when we execute wx.getLocation, we will observe a function named NativeGlobal.invokeHandler that takes a parameter named getLocation as its input. Therefore, we know that wx.getLocation and NativeGlobal.invokeHandler have a dependency. To provide a detailed explanation of how our trace analysis works, we will utilize an example that features the implementations of API invocations across three layers, namely the JavaScript Framework layer, the Customized V8 layer, and the Service Abstraction layer. The process begins with the JavaScript Framework layer, which initiates the API invocation by calling NativeGlobal.invokeHandler. This invocation is then handed over to the Customized V8 layer, which is responsible for handling it. As shown in Figure 7, this step is represented by line 10 of the JavaScript Framework layer's implementation. Next, the Customized V8 layer extracts critical information from the API invocation, including the API name, its parameters, and any corresponding callbacks. This information is obtained from lines 28-32 of the Customized V8 layer's implementation. The Customized V8 layer then proceeds to invoke the relevant APIs at the Service Abstraction layer through the use of the Java Native Interface (JNI) [21]. Finally, during the API invocations at the Service Abstraction layer (line 4), this layer may need to communicate with the Customized V8 layer for additional operations, such as performing permission checks if the API requires them. We have omitted this code for the sake of brevity. Figure 7. The implementations of API invocations across three layers (WeChat). In summary, our trace analysis provides insight into the entire process of API invocations across the three layers of the system. We track the flow of control and collect data on API names, parameters, and callbacks to enable a more comprehensive analysis of the system's behavior. **Step-III: Dynamic Probing for API Category Classification.** With the identified interface for invoking a public API, we then use it to similarly invoke undocumented APIs, by first generating the corresponding test cases, and then injecting the JavaScript code using the script function into the V8 engine, as described earlier. When executing a particular test case, there could be three types of outcomes: the tested "API" is a checked API (when invoked, a permission denial will be observed based on the standard error messages), the tested "API" is an unchecked API (which can be invoked successfully), or the tested "API" is not an API. As such, we can use the following strategies to identify them. * **Unchecked APIs.** Similar to the public APIs, the unchecked undocumented APIs can be invoked without requiring additional permissions. As such, we first deliver a public API invocation request, such as getLocation, and record the feedback of the host app. For example, WeChat and Baidu will not print any errors when the invocation request gets approved, and we then use this as a signature to see whether an invocation request is successfully executed. * **Checked APIs.** The checked APIs are the APIs that are protected by security checks, which can only be invoked by their 1st-party miniapps. In the event of a security check failure, the super apps will generate error messages notifying the user of insufficient permissions. This exception applies to all APIs within the various super apps, albeit with minor variations in the error messages displayed. For example, when 3rd-party miniapps attempt to invoke a checked API of WeChat, the host app will throw an error message "fail: no permission". For WeCom, the error message becomes "fail: access denied". Therefore, we use keywords such as "fail", "no permission" and "access denied" to match and decide whether the invocation request gets denied. If so, it is a checked API. * **Non-APIs.** Theoretically, APIScope may have false positives, and as such, our tool may mistakenly recognize some non-APIs. Therefore, we need to filter them out. To that end, we first create an invalid request and then send it to the host app to see the feedback. For example, if we initiate an invalid request and send it to WeChat, WeChat will reject the invocation request and throw an error message "fail: not supported". Then, such an error message is used as a signature to match the non-APIs. As an example, in the case of WeChat, if we attempt to use the API openUrl, the super app will generate an error message stating "fail: no permission". This error message implies that the API is a checked hidden API. On the other hand, if we use the API private_openUrl, the super app will handle the invocation request as a regular request without displaying any error message. As a result, we can conclude that this API is an unchecked hidden API.
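Putting the three outcome signatures above together, a minimal sketch of the classification logic (plain JavaScript; the error strings are the WeChat-style examples quoted above, and a real deployment would use per-super-app signatures) could look like:

```js
// Classify a probed candidate by matching the host app's reply against the
// error-message signatures; an empty reply models a request handled without errors.
function classify(reply) {
  if (/not supported/i.test(reply))                     return 'non-API';
  if (/no permission|access denied|fail/i.test(reply))  return 'checked hidden API';
  return 'unchecked hidden API';
}

console.log(classify('fail: not supported'));  // -> 'non-API'
console.log(classify('fail: no permission'));  // -> 'checked hidden API'
console.log(classify(''));                     // -> 'unchecked hidden API'
```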
## 6. Evaluation We have developed a prototype of APIScope with 5K lines of code on top of open source tools such as Soot (Soot, 2017) for decompilation and Frida (Soot, 2017) for tracing. In this section, we present the evaluation results. We first describe our experimental setup in §6.1, and then APIScope's effectiveness in §6.2. The efficiency of APIScope is presented in Appendix §B for readers of interest. ### Experiment Setup **The Tested Host Apps.** Today, there are quite a number of super apps that support the execution of miniapps. Although we wish to test all of them, eventually we selected five of them, as shown in Table 1; these include WeChat, WeCom and QQ from Tencent Holdings Ltd., Baidu from Baidu Inc., and TikTok from ByteDance Ltd. We excluded other super apps such as Alipay and Snapchat particularly because they do not build on the V8 engine (making our tool unsuitable for them at this moment). Also, to study the security issues of the tested super apps correspondingly, we registered an account on each platform, downloaded their development tools and SDKs, built miniapps by following their official documents, and inspected their code. Among them, Baidu has a relatively closed ecosystem, where only enterprise developers are allowed to register as their developers. However, they allow individuals to apply for trial accounts to use their development tools to develop miniapps, and therefore, we tested Baidu using their trial accounts. **The Tested Miniapps.** We believe it is important to measure the usage of undocumented APIs in 1st-party and 3rd-party miniapps for two reasons. First, understanding how 1st-party miniapps use these APIs can help us comprehend the entire ecosystem. Second, if 3rd-party developers know about these APIs, they may use them, which can lead to security issues if these APIs have access to sensitive resources.
To analyze the usage of undocumented APIs in 1st-party miniapps, we searched for interfaces provided by host apps and collected 236 miniapps from WeChat and WeCom, 340 miniapps from Baidu, and 24 miniapps from QQ. We could not find information about the 1st-party miniapps of TikTok, so we did not report their API usage. We could not scan all 3rd-party miniapps because there is no public dataset or crawlers available. Therefore, we can only measure the usage of hidden APIs among 3rd-party miniapps within the WeChat ecosystem. We collected 267, 359 miniapps using Mini-Crawler (Marcus et al., 2017) within 3 weeks. **The Testing Environment.** We performed our static analysis on one laptop, which has 6 cores, Intel Core i7-10850H (4.90 GHz) CPUs and 64 GB RAM, and our dynamic analysis on a Google Pixel 4 running Android 11 and a Google Pixel 2 running Android 9, since we particularly focused on the Android version of miniapps. ### Effectiveness The effectiveness evaluation aims to quantify how APIScope uncovered the hidden APIs in terms of the specific numbers for the involved analysis (which is presented in Table 2), and their qualities (i.e., whether there are any false positives). It is worth noting that the manually created cases are indeed rare. For example, for Baidu, we automatically created 423 test cases, and created another 56 test cases manually, so the manual efforts are around 11%, i.e., 56/(56+423) = 0.11. Other super apps even have a lower amount of manual efforts than Baidu (e.g., WeCom has 2.9 % manual efforts). Specifically, the effectiveness of our static analysis is measured by the identification of API invariants, the number of identified API candidates (i.e., the functions that are very likely to be APIs). However, whether those API candidates are really APIs are determined in dynamic API classification. For the API invariants, while we have listed four invariants in SS5.1, not all of them will exist in all super apps (e.g., Baidu and QQ do not have caller invariant), as shown in Table 2. That is why APIScope aggressively identifies as many invariants as possible. With these invariants, it sufficiently recognizes the undocumented APIs even though some of them do not exist in other super apps. During static API recognition, APIScope recognized in total 1,829 API candidates for these super apps. Among them, WeCom contains the most hidden API candidates (683), followed by WeChat (containing 575 API candidates). Tiktok has fewer API candidates (i.e., 124 API candidates), likely due to its smallest LoC compared to other super apps. The effectiveness of dynamic analysis is measured by the number of traced functions during API invocation identification and the number of test cases used during API classification. Among the test cases, we also quantify the number of automatically generated test cases and manually created test cases. We can see that most of the test cases are automatically generated by our test case generation algorithm, and the number of automatically generated test cases is greater than the number of API candidates due to the parameter order permutation (as discussed in SS5.2). With our dynamic classification for the identified APIs, APIScope detected a large number of hidden APIs, many of which are unchecked (as reported in Table 2). WeChat has more APIs (590 public APIs, 502 undocumented unchecked APIs, and 65 undocumented checked APIs) than the other super apps. 
However, TikTok has a relatively small number of APIs (383 public APIs, 120 undocumented unchecked APIs, and 2 undocumented checked APIs). With respect to the percentage of undocumented unchecked and checked APIs, WeCom has the most undocumented unchecked APIs (46.3%) and undocumented checked APIs (6.4%). **Correctness of Our Result.** We quantify whether there are any false positives or false negatives for the identified hidden APIs. First, a false positive here means that the identified API is not hidden, or is not an API. By design, APIScope will not have false positives for two reasons: (1) the invariants we extracted have very strict patterns (they have to exist among all public APIs and all of them have to be present in the undocumented APIs), and (2) our dynamic probing for API classification can filter out those non-APIs, which eliminates potential false positives. Nevertheless, we still thoroughly scrutinized each API identified for WeChat by conducting a manual check to ensure that there were no false positives. In other words, we made sure that the tool did not mistakenly classify non-APIs as APIs. Thanks to our design, we did not come across any false positives during our examination. Second, with respect to false negatives (i.e., a "true" hidden API is missed by APIScope), we note that theoretically APIScope could have false negatives, for instance, if our invariants are too strong. However, we will not be able to quantify this, since we do not have the ground truth, unless we manually examine each line of code. Therefore, we leave this to future work. **Categories of the Identified APIs.** With the identified APIs, we can then obtain some insights from them, such as which category contains more hidden APIs. To this end, we manually walked through each API and categorized them, based on the categories of the documented ones, to classify the undocumented (i.e., hidden) APIs. This result is presented in Table 3. Interestingly, we found that most of the categories contain undocumented unchecked APIs. In particular, for some of the super apps (e.g., WeChat), their undocumented unchecked APIs can even outnumber the documented APIs in some of the categories (e.g., the API category Payment has 28 undocumented APIs, which is far more than its documented APIs). Finally, we found that some well-documented APIs of a specific super app may not be open to the public in other super apps. For example, getUserInfo is an undocumented API of Baidu, while WeChat has the same API with the same functionality, which is publicly accessible. \begin{table} \begin{tabular}{l l c c c c c} \hline \hline **Name** & **Vendor** & **Version** & **V8** & **Date** & **Installs** & **1st-party miniapps being tested?** \\ \hline Baidu & Baidu & 12.21 & 7.6 & 08/13/2021 & 5,000,000+ & ✓ \\ QQ & Tencent & 8.8 & 7.2 & 10/5/2021 & 10,000,000+ & ✓ \\ TikTok & ByteDance & 17.9 & 7.2 & 10/10/2021 & 100,0000+ & ✗ \\ WeChat & Tencent & 8.0 & 8.0 & 07/21/2021 & 100,0000+ & ✓ \\ WeCom & Tencent & 3.1 & 8.0 & 07/14/2021 & 100,000+ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of the Tested Super Apps. Finally, since APIScope is a systematic and mostly automated tool, it can inspect API changes based on previous versions of the super app implementations as long as we can obtain both their APKs and documentation.
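As a small illustration of such a cross-version inspection (assuming the API name sets recognized by APIScope for two versions of the same super app are already available; the names below are made up):

```js
// Diff the recognized API sets of two versions of the same super app.
const v1 = new Set(['getLocation', 'openUrl', 'captureScreen']);   // earlier version
const v2 = new Set(['getLocation', 'openUrl', 'requestPayment']);  // later version

const added   = [...v2].filter(a => !v1.has(a));   // introduced in the later version
const removed = [...v1].filter(a => !v2.has(a));   // dropped (or renamed) APIs

console.log({ added, removed });
// -> { added: [ 'requestPayment' ], removed: [ 'captureScreen' ] }
```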
We have a detailed evaluation of API evolution in Appendix §C for interested readers. **Usage of Hidden APIs (Among the 1st-party Miniapps).** We obtained many 1st-party miniapps and classified them into categories based on their meta-data. From the data in Table 4, we found that the use of undocumented APIs is common among 1st-party
In addition, we conducted an analysis to comprehend the most popular undocumented APIs and the frequency of their usage by \begin{table} \begin{tabular}{l|r r r|r r|r r|r r} \hline \hline \multirow{2}{*}{**Category**} & \multicolumn{2}{c|}{**WeChat**} & \multicolumn{2}{c|}{**WeChat**} & \multicolumn{2}{c|}{**WeChat**} & \multicolumn{2}{c|}{**Baidu**} & \multicolumn{2}{c}{**QQ**} \\ \cline{2-10} & **UP** & **Appl** & **\%** & **UP** & **Appl** & **\%** & **UP** & **Appl** & **\%** \\ \hline Business & 14 & 49 & 26.86 & 16 & 49 & 32.72 & 21 & 38.53 & 1 & 3 & 33.3 \\ Education & 6 & 26 & 28.11 & 7 & 26 & 26.99 & 16 & 31.31 & - & 3 & 0.0 \\ E-learning & 5 & 9 & 58.58 & 5 & 9 & 35.62 & 12 & 33.86 & - & 1 & 0.0 \\ Entertainment & 9 & 17 & 52.9 & 9 & 17 & 52.9 & 27 & 25.87 & 2 & 100.00 \\ Finance & - & - & 1,1000 & - & 1,1000 & - & 21 & 23.08 & - & 0.0 \\ Food & - & - & 0.0 & - & - & - & 0.0 & - & 0.0 \\ Games & 18 & 36.00 & 28 & 36.00 & - & - & 0.0 & - & 0.0 \\ Government & 2 & 7 & 26.8 & 2 & 7 & 26.6 & 3 & 8.735 & 1 & 100.00 \\ Health & 2 & 7 & 26.6 & 2 & 7 & 26.5 & 1 & 5.20 & - & 1 & 0.0 \\ Job & - & 1 & - & 0.0 & - & - & - & 0.0 & - & 0.0 \\ Lifestyle & 2 & 5 & 48.0 & 2 & 5 & 40.0 & 3 & 15.20 & - & 1 & 0.0 \\ Photo & 3 & 7 & 42.9 & 3 & 7 & 42.9 & - & 0.0 & - & 0.0 \\ Shopping & 1 & 1,1000 & - & 1 & 10000 & - & 2 & 0.0 & - & 0.0 \\ Social & 4 & 8 & 50.0 & 4 & 8 & 50.0 & 1 & 42.50 & - & 1 & 0.0 \\ Sports & - & - & 0.0 & - & - & 1 & 0.0 & - & 0.0 \\ Tool & 15 & 52.73 & 15 & 52.73 & 27.3 & 47.34 & 4 & 5 & 50.0 \\ Traffic & 3 & 5 & 50.00 & 3 & 50.00 & 4 & 17 & 30.00 & - & 1 & 0.0 \\ Travelling & 2 & - & 2,1000 & 2 & 10000 & 1 & 56.8 & 1 & 2 & 50.00 \\ Un Unrecognized & - & - & 0.0 & - & 0.0 & 1 & 25.80 & - & 0.0 \\ \hline Total & 87 & 256.56 & 36.90 & 290 & 256.38 & 1118 & 30.36 & 34.75 & 9 & 24 & 37.52 \\ \hline \hline \end{tabular} \end{table} Table 4. The 1st party miniapps that have used the undocumented APIs. The first column indicates the number of 1st-party mini-apps using undocumented APIs, and the second column represents the total number of 1st-party miniapps. We calculate the percentage of mini-apps by using the first column divided by the second. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **API Name** & **Category** & **\# App** & **\%** & **App** & **w/ Check** \\ \hline swan\_lioffation & 104 & 88.34 & _\$_ \\ swan\_lioffation & Login & 31 & 26.27 & _\$_ \\ swan\_postMessage & Unrecognized & 8 & 6.78 & _\$_ \\ swan\_gg\#HSUS & User Info & 4 & 3.39 & _\$_ \\ swan\_g\#Gmm\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s\_/dr\_s_/dr\_s\_/dr\_s\ 3rd-party miniapps. We categorized the APIs by name and tallied the number of miniapps that leveraged each API. We have found that 103 undocumented APIs provided by WeChat were utilized by their 3rd-party miniapps. Among these APIs, it is notable that 79 of them lack security checks. As shown in Table 7, we present a summary of undocumented APIs that have been utilized by over 50 mini-apps. It is evident that a majority of these hidden APIs lack proper security measures. To further understand the details, we delyed into a selection of them to uncover why 3rd-party mini-apps have knowledge of them and whether they are being exploited. Our investigation has yielded some intriguing findings. (i) While some APIs are not publicly documented, Tencent does share them with certain vendors who work closely with them and permit these vendors to request access. 
An example of such an API is requestFacetofacePayment (Tencent, 2018) (which is used by 40,091 miniapps). (ii) There were some concealed APIs that were once freely available for use without any security checks. However, Tencent subsequently banned them. One such API is "openUrl" (Tencent, 2018). Interestingly, even though Tencent has banned the usage of this API, a whopping 17,140 miniapps have yet to remove the invocation of this API from their code (obviously, this will no longer work). This API had already been banned by Tencent prior to our report. (iii) There are still some APIs that remained usable until we notified Tencent of the issue. For example, captureScreen (12 miniapps used this API) can be utilized to obtain the user's sensitive information (see §7.2). ## 7. Exploiting Unchecked Hidden APIs ### Quantifying the Security Risks **Methodology.** After quantifying the number of unchecked undocumented APIs, our goal is to gain a better understanding of whether or not these APIs pose any security risks. While it is possible to manually analyze each API individually, it is not very practical or reliable, especially given the vast number of APIs we need to analyze (more than 1,500 APIs). However, our observation is that for an undocumented API to have potential security implications, it must be able to access sensitive information and resources on the Android system (e.g., location, files, and the Internet). Therefore, if we find that the hidden API calls a native API, we can conclude that it has the potential to pose security risks. Otherwise, we can proceed to examine the implementation of each method within that hidden API, conducting the process recursively as needed. However, not all invoked APIs manipulate sensitive resources within the Android system. For example, the android.graphics API offers graphics tools that allow developers to draw directly onto the screen. It is evident that invoking these APIs would not result in any security consequences. Therefore, we consider APIs that access resources protected by permissions (such as location, the Internet, and the file system) to have security risks. Consequently, we opted to utilize a lightweight dynamic analysis approach to identify such APIs. Specifically, we hook all Android APIs that access sensitive resources, which are typically protected by Android permissions, and invoke the unchecked undocumented APIs one by one. By monitoring whether the sensitive resource access APIs are invoked during this process, we can determine whether the undocumented APIs are implemented based on them. Furthermore, we are able to infer whether these APIs pose any security risks. While this approach may not uncover all such APIs, since the execution of a hidden API may depend on the parameters and may not trigger the underlying security-sensitive APIs, it can at least provide a lower bound. **Results.** We categorize the hidden APIs by analyzing the Android APIs that access the resources and grouping them accordingly. As shown in Table 8, we have identified 39 APIs (7.77%) in WeChat, 40 APIs (6.75%) in WeCom, 8 APIs (7.08%) in Baidu, 32 APIs (26.67%) in TikTok and 38 APIs (12.88%) in QQ that invoke Android APIs that are protected by permissions. It should be noted that WeChat and WeCom have the most APIs that can access sensitive resources, while Baidu has the least number of such APIs. This is likely due to the fact that these super apps require more Android permissions.
To be more specific, WeChat requires 92 permissions, which is larger than \begin{table} \begin{tabular}{c c c c} \hline \hline **Category** & \# U & \# App & \% \\ \hline Business & 8,116 & 14,887 & 54.82 \\ E-learning & 335 & 2,088 & 16.04 \\ Education & 2,738 & 40,410 & 6.78 \\ Entertainment & 1,286 & 5,258 & 24.46 \\ Finance & 262 & 1,408 & 18.61 \\ Food & 1,107 & 6,345 & 17.45 \\ Games & 1,777 & 4,745 & 37.45 \\ Government & 929 & 7,808 & 11.90 \\ Health & 795 & 6,422 & 12.38 \\ Job & 177 & 4,399 & 4.02 \\ Lifestyle & 11,846 & 35,371 & 33.49 \\ Photo & 136 & 1,981 & 6.87 \\ Shopping & 44,629 & 46,202 & 96.04 \\ Social & 217 & 5,094 & 3.81 \\ Sports & 312 & 3,378 & 9.24 \\ Tool & 3,423 & 72,301 & 4.73 \\ Traffic & 580 & 6,932 & 8.92 \\ Tauselling & 309 & 2,160 & 14.31 \\ \hline Total & 78,974 & 267,359 & 29.54 \\ \hline \hline \end{tabular} \end{table} Table 6. The 3rd party WeChat miniapps that have used the undocumented APIs. \begin{table} \begin{tabular}{l c c c c} \hline \hline **APIName** & **Category** & \# App & \%:**App** & \% **Check** \\ \hline ws.request/FacetofacePayment & Frequent 40,091 & 14.98 & ✓ \\ w.create/WXData & Mike & 21,834 & 8.16 & ✗ \\ w.get/Bxo/orientation & UI & 18,699 & 8.91 & ✗ \\ w.create/Content & Context & 17,421 & 6.51 & ✓ \\ w.open/roid/Wxoiser & Mike & 17,140 & 6.41 & ✓ \\ w.p.p.p./roid/Webview & Week/View & 15,315 & 5.73 & ✓ \\ w.x.n.ar/ugt/Bach/Native & Navigate & 13,077 & 5.01 & ✓ \\ w.c. that of Baidu (82). These accessed sensitive resources include camera, location, audio, and Internet. It is important to note that hidden APIs that access sensitive resources do not necessarily mean that they can access them without requiring permission. Specifically, in addition to the resources that are safeguarded by Android permissions, we are also including SharedPreferences in our checklist. This is because miniapps may utilize this Android API to store files in the space belonging to the super apps, which could potentially compromise the files of both the super apps and other apps. Next, our objective is to understand the Android APIs utilized by the undocumented APIs. For this purpose, we count the number of Android APIs invoked by each hidden API of the super app, and classify them based on the names of the corresponding Android API Packages. We exclude the API packages that only be invoked once. It can be observed from Figure 8 that the API most commonly used is SharedPreferences. This is reasonable, as many of the APIs involve file operations. The available APIs consist of those dedicated to saving screenshots onto disks, which can be utilized to launch A3. Besides file access APIs, numerous hidden APIs make use of Internet access APIs for different purposes, including payment processing, network resource access, and more. The currently available APIs comprise those responsible for website access, which can be leveraged to trigger A1, APIs created for APK downloading and installation, which can be utilized to launch A2, and APIs for querying contact information, which can be employed to initiate A5. Please note that there are also APIs that access NFC, Camera, and Telephony Manager (which can be used to launch A4). However, since they have only been invoked once, we have excluded them from the figure. ### Attack Case Studies We present a few case studies to demonstrate how we can exploit those hidden unchecked (i.e., unprotected) APIs. 
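To make the hooking step of the methodology above concrete, a minimal Frida-style sketch is shown below; the process name in the comment and the single hooked method are illustrative (a real run would cover the full list of permission-protected Android APIs), and the logging is only meant to correlate sensitive-resource accesses with the undocumented API currently being probed.

```js
// Run with, e.g.:  frida -U -n com.tencent.mm -l hook_sensitive.js
Java.perform(function () {
  var Telephony = Java.use('android.telephony.TelephonyManager');
  // Log whenever the permission-protected phone-number API is reached.
  Telephony.getLine1Number.overload().implementation = function () {
    console.log('[sensitive] TelephonyManager.getLine1Number reached');
    return this.getLine1Number();   // forward the call to the original implementation
  };
});
```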
For proof of concept, we present five case studies covering from arbitrary webpage access to information theft, as shown in Table 9. **(A1) Arbitrary Web Page Access.** We made a malicious miniapp that can open any webpage using the hidden APIprivate_openurl. Super apps usually have an allowlist of approved domains to prevent users from accessing untrusted sources (i.e., miniapps usually utilize the official APIw.request to access websites, and any network requests made through this API will be thoroughly vetted), but our malware can bypass these restrictions and navigate to any webpage without being vetted. This vulnerability allows our miniapp to open phishing websites and steal sensitive information, which is more powerful than previous phishing attacks (Zhu et al., 2020). We were successful in this attack on several super apps but could not test it on TikTok because it does not have the necessary APIs. This vulnerability is a significant security risk for super apps because they have a unique threat model that differs from web browsers. Super apps only allow access to specific domains, unlike web browsers that can access any website. This vulnerability has been confirmed as a high-severity vulnerability by Tencent. **(A2) Malware Download and Installation.** We developed a malicious miniapp that can download and install malware using APIs installDownloadTask or addDownloadTaskStraight. Regular miniaps cannot download or install APK files on a mobile device because they have limited capabilities and can only download certain file types from specific servers. However, by using these APIs, a miniapp can download and install harmful APKs, which can cause significant damage to the user's mobile security and privacy. This attack works on both WeChat and WeCom. Finally, although APKs cannot be installed without the user consent, miniApps is Figure 8. Android APIs used by the hidden APIs from different companies. \begin{table} \begin{tabular}{l l|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Resource**} & \multicolumn{2}{c|}{**WeChat**} & \multicolumn{2}{c|}{**WebCom**} & \multicolumn{2}{c|}{**Baidu**} & \multicolumn{2}{c|}{**TikTok**} & \multicolumn{2}{c}{**OG**} \\ \cline{2-9} & \# UUs & \% & \# UUs & \% & \# UUs & \% & \# UUs & \% & \# UUs & \% \\ \hline Bluetooth & 3 & 0.59 & 0.51 & - & - & - & - & - \\ Camera & 1 & 0.20 & 1 & 0.17 & - & - & - & - & 1 & 0.34 \\ Location & - & - & - & - & - & - & - & - & 1 & 0.34 \\ Media & 5 & 0.96 & 5 & 0.84 & - & - & 11 & 8.12 & 11 & 3.71 \\ NIC & 3 & 0.59 & 3 & 0.51 & - & - & - & - & - & - \\ Network & 16 & 3.19 & 16 & 2.70 & 7 & 6.19 & 20 & 16.00 & 24 & 8.34 \\ Package & 0.59 & 4 & 0.67 & 1 & 0.88 & - & - & - & 1 & 0.34 \\ Storage & 25 & 4.98 & 26 & 4.38 & 3.65 & - & - & - & - & - \\ Telephony & - & - & - & - & - & 1 & 0.83 & - & 2.371 \\ \hline Total & 39 & 2.771 & 40 & 6.925 & 8 & 7.08 & 37 & 24.00 & 38 & 12.88 \\ \hline \hline \end{tabular} \end{table} Table 8. The sensitive resources that undocumented unchecked APIs accessed. UUs means undocumented sensitive APIs. Please note that a single hidden API may have access to multiple types of resources. Therefore, the total number of hidden APIs may not be equal to the sum of all the APIs that have been identified for each individual resource type. running inside the Super Apps, and as long as Super App has the installing permission (which most users will grant because they trust Super Apps), the malicious miniApp can install arbitrary APKs. 
**(A3) Screenshot-based Information Theft.** We made a malicious miniapp that uses the captureScreen to secretly take screenshots and store them without the user's permission. This could be used by attackers to steal sensitive information like passwords and credit card numbers from the user's screen. The consequences of this kind of attack are serious. For example, the attacker could use them to steal the victim's identity and open fake accounts or make illegal purchases. They could also use the screenshots to commit financial fraud by stealing the victim's credit card. **(A4) Phone Number Theft.** The malicious miniapps may use getLocalPhoneNumber to illicitly obtain the user's phone numbers. The hidden API is implemented by getLine1Number, which is a built-in feature of the Android SDK intended to provide the phone number associated with the SIM card currently inserted in the device. Nevertheless, access to phone number information from the SIM card may be blocked or restricted by some carriers or manufacturers, thereby rendering this attack unsuccessful in certain cases. **(A5) Contact Information Theft.** A miniapp can potentially access sensitive information, such as friend list (including the usernames and WeChat ID) using searchContacts. Our experiments were conducted primarily in 2021, during which we found that this hidden API was still functional based on our raw results. Upon reporting the issue to WeChat, we were informed that another group had already reported the problem to them (CVE-2021-40180 (Xu et al., 2021)), and that the exploit no longer works on the new version of WeChat. ## 8. Discussion **Limitations and Future Work.** Although effective, APIScope can still be improved in various ways. It is possible for the tool to have false positives and negatives, although none have been encountered through dynamic validation and manual verification. Also, while currently tested on Android, additional work is needed to support other platforms. However, our findings are representative across different platforms, as miniapp codebases are similar. Note that APIScope is limited to super-apps that use the V8 engine and is not suitable for those that do not (e.g., Alipay). In our study, we discovered some hidden APIs that may be vulnerable, such as the installDownloadTask and addDownloadTaskStraight APIs, which are susceptible to SQL injection attacks. Attackers can compromise super app file download tasks by replacing the download URL of the WeChat update package with a malicious one. We also noticed that there are two APIs called dumpHeapSnapshot and HeapProfiler that also have vulnerabilities. These APIs are designed to save data from the V8 engine to a file, but our miniapp misuses them to write to any file it wants. While Android tries to prevent this, important files like chat histories are still at risk. This could lead to serious problems because our miniapp could overwrite important files of other miniapps and their host apps, which breaks the security measures put in place by super apps. Our experiment proved that we could overwrite a file called EnMicroMsg. db, which stores chat history on WeChat. Attackers might want to make these miniapps because chat history can be used as evidence in court. We plan to develop a tool that can identify hidden API vulnerabilities (e.g., SQL injection and buffer overflow). **Ethics and Responsible Disclosure.** Being an attack work by nature, we must carefully address the ethical concerns. 
To this end, we have followed the community practice when exploiting the vulnerabilities and demonstrated our attacks. First, for proof of concept, we developed quite a number of malicious miniapps and launched attacks against our own accounts and devices. We have never uploaded our malicious miniapps onto the markets to harm other users. Second, we have disclosed the vulnerabilities and our attacks against WeChat to Tencent in September 2021, and the other four super apps in November 2021. They have all acknowledged and confirmed our findings, and so far among them Tencent (the biggest super app vendor with 1.2 billion monthly users) has confirmed with 4 vulnerabilities, ranked 1 low, 2 medium, and 1 high, and awarded us with bug bounty and fixed them. TikTok has been patched too, but not Baidu at this time of writing. ## 9. Related Work **Super Apps Security.** More and more super apps have started to support the miniapp paradigm. Correspondingly, its security has received increasing attention. For instance, Lu et al. (Lu et al., 2021) identified multiple flaws in WeChat, and demonstrated how an attacker would be able to launch phishing attacks against mobile users and collect sensitive data from the host apps. Zhang et al. (Zhang et al., 2021) developed a crawler, and understood the super apps by measuring the program practices of the provided miniapps, including how often the miniapp code will be obfuscated. Most recently, Zhang et al. (Zhang et al., 2021) studied the identity confusion in WebView-based super apps, and identified that multiple super apps contain this vulnerability. A new attack named cross-miniapp request forgery (CMRF) (Zhang et al., 2021) was also recently discovered, which exploits the missing checks of miniapp IDs for various attacks. Differently from those works, our study uncovers the undocumented APIs provided by the super apps and demonstrates how they can be exploited. In a broader scope, there is a large body of research studying the security of other super apps including web browsers and their lightweight apps, such as Google Instant apps (Gupta et al., 2021). In particular, Aomzo et al. (Gupta et al., 2021), and Tang et al. (Tang et al., 2021) point out that Google Instant Apps can be abused to mount password-stealing attacks. **Undocumented API Detection and Exploitation.** APIScope is the first system to detect and exploit undocumented APIs in mobile super apps like WeChat. Previous work has focused on detecting undocumented APIs in other platforms, such as Android and iOS, or on identifying missing security checks (e.g., (Gupta et al., 2021; Lu et al., 2021; Lu et al., 2021; Wang et al., 2021; Wang et al., 2021; Wang et al., 2021; Wang et al., 2021)). For example, PScout analyzed undocumented APIs in Android (Gupta et al., 2021), and Li et al. showed that there are 17 undocumented Android APIs that are widely accessed by 3rd-party apps (Zeinab and Yu, 2021). Zeinab and Yuans studied access control vulnerabilities caused by residual APIs (Zeinab and Yu, 2021). In addition, there are ways to invoke undocumented APIs in iOS (Li et al., 2021; Wang et al., 2021) and detect their abuses (Li et al., 2021). Yang et al. (Yang et al., 2021) proposed BridgeScope to identify sensitive JavaScript bridge APIs in hybrid apps. Undocumented APIs have also been found in the Java language and exploited by attackers (Li et al., 2021; Wang et al., 2021). APIScope builds on this previous work to specifically focus on mobile super-apps. 
Finding hidden APIs in super apps using traditional techniques is difficult due to the combination of web views, host native apps, and miniapp execution environments, along with code scattering and obfuscation. Our new approach monitors parameter propagation to detect API usage, using robust signatures based on superclass names and public methods. We have also created a method for automatic test case generation and API classification. ## 10. Conclusion In this paper, we have revealed that super apps often contain undocumented and unchecked APIs for their 1st-party miniapps, which can grant elevated privileges such as APK downloading, arbitrary web view accessing, and sensitive information querying. Unfortunately, these undocumented APIs can be exploited by malicious 3rd-party miniapps, as they lack security checks. To address this issue, we have designed and implemented APIScope, a tool that can statically identify these undocumented APIs and dynamically verify their exploitability. Through our testing on five popular super apps such as WeChat and TikTok, we have found that all of them contain these types of APIs. Our findings suggest that super app vendors must thoroughly examine and take caution with their privileged APIs to prevent them from becoming potential exploit points.
2303.03867
Completeness for categories of generalized automata
We present a slick proof of completeness and cocompleteness for categories of $F$-automata, where the span of maps $E\leftarrow E\otimes I \to O$ that usually defines a deterministic automaton of input $I$ and output $O$ in a monoidal category $(\mathcal K,\otimes)$ is replaced by a span $E\leftarrow F E \to O$ for a generic endofunctor $F : \mathcal K\to \mathcal K$ of a generic category $\mathcal K$: these automata exist in their `Mealy' and `Moore' versions and form categories $F\text{-}\mathsf{Mly}$ and $F\text{-}\mathsf{Mre}$; such categories can be presented as strict 2-pullbacks in $\mathsf{Cat}$ and whenever $F$ is a left adjoint, both $F\text{-}\mathsf{Mly}$ and $F\text{-}\mathsf{Mre}$ admit all limits and colimits that $\mathcal K$ admits. We mechanize some of our main results using the proof assistant Agda and the library `agda-categories`.
Guido Boccali, Andrea Laretto, Fosco Loregian, Stefano Luneia
2023-03-07T13:09:26Z
http://arxiv.org/abs/2303.03867v1
# Completeness for categories of generalized automata ###### Abstract We present a slick proof of completeness and cocompleteness for categories of \(F\)_-automata_, where the span of maps \(E\gets E\otimes I\to O\) that usually defines a deterministic automaton of input \(I\) and output \(O\) in a monoidal category \((\mathcal{K},\otimes)\) is replaced by a span \(E\gets FE\to O\) for a generic endofunctor \(F:\mathcal{K}\to\mathcal{K}\) of a generic category \(\mathcal{K}\): these automata exist in their 'Mealy' and 'Moore' version and form categories \(F\)-M\(\mathsf{My}\) and \(F\)-\(\mathsf{Mre}\); such categories can be presented as strict 2-pullbacks in \(\mathsf{Cat}\) and whenever \(F\) is a left adjoint, both \(F\)-\(\mathsf{My}\) and \(F\)-\(\mathsf{Mre}\) admit all limits and colimits that \(\mathcal{K}\) admits. We mechanize our main results using the proof assistant Agda and the library agda-categories. 10.4230/LIPIcs.CVIT.2016.23 F. Loregian was supported by the ESF funded Estonian IT Academy research measure (project 2014-2020.4.05.19-0001). ###### Acknowledgements. _A Rene, parce qu'il faut ruser pour te lire_. ## 1 Introduction One of the most direct representations of _deterministic automata_ in the categorical settings consists (cf. [1, 4, 5]) of a span of morphisms \(E\gets E\times I\to O\), where the left leg provides a notion of _next states_ of the automaton given a current state \(E\) and an input \(I\), and the right leg provides an output \(O\) given the same data. According to whether the output morphism depends on both the current state and an input or just on the state, one can then talk about classes of _Mealy_ and _Moore automata_, respectively. This perspective of 'automata in a category' naturally captures the idea that morphisms of a category can be interpreted as a general abstraction of processes/sequential operations. The above notion of deterministic automata carries over to any monoidal category, on which the various classical notions of automata, e.g., minimization, bisimulation, powerset construction, can be equivalently reconstructed and studied as in the monograph [5]. In [1, 6], automata are generalized to the case in which, instead of taking spans from the monoidal product of states and inputs \(E\otimes I\), one considers spans \(E\gets FE\to O\) for a generic endofunctor \(F:\mathcal{K}\to\mathcal{K}\), providing an abstraction for the ambient structure that allows the automata to advance to the 'next' state and give an output. A general theorem asserting that the category of Mealy and Moore automata \(\mathsf{Mly}_{\mathcal{K}}(I,O)\), \(\mathsf{Mre}_{\mathcal{K}}(I,O)\) in a monoidal category \((\mathcal{K},\otimes)\) are complete and cocomplete whenever \(\mathcal{K}\) is itself complete and cocomplete can be obtained with little effort, cf. [5, Ch. 11], but the proof given therein is a bit ad-hoc, and provides no intuition for why finite products and terminal objects tend to be so complicated. With just a little bit more category-theoretic technology, some general considerations can be made about the shape of limits in such settings: colimits and connected limits can be computed as they are computed in \(\mathcal{K}\) (as a consequence of the fact that a certain functor _creates_ them, cf. [10]), whereas products (and in particular the empty product, the terminal object) have dramatically different shapes than those provided in \(\mathcal{K}\). 
The profound reason why this is the case comes from the fact that such a terminal object (which we refer to as \(O_{\infty}\)) coincides with the terminal coalgebra of a specific endofunctor, which, in the case of Moore and Mealy automata respectively, is given by \(A\mapsto O\times RA\) and \(A\mapsto RO\times RA\). The complicated shape of the terminal object \(O_{\infty}\) in \(\mathsf{Mly}_{\mathcal{K}}(I,O)\) is then explained by Adamek's theorem, which presents the terminal object \(O_{\infty}\) as an inverse limit in \(\mathcal{K}\). In this paper, we show that under the same assumption of completeness of the underlying category \(\mathcal{K}\), the completeness of \(F\)-automata can be obtained by requiring that the endofunctor \(F\) admits a right adjoint \(R\). The proof we provide follows a slick argument proving the existence of (co)limits by fitting each \(\mathsf{Mly}_{\mathcal{K}}(I,O)\) and \(\mathsf{Mre}_{\mathcal{K}}(I,O)\) into a strict 2-pullback in \(\mathsf{Cat}\), and deriving the result from stability properties of limit-creating functors. **Outline of the paper.** The present short note develops as follows: * first (Section 2) we introduce the language we will employ and the structures we will study:1 categories of automata valued in a monoidal category \((\mathcal{K},\otimes)\) (in two flavours: 'Mealy' machines, where one considers spans \(E\gets E\otimes I\to O\), and 'Moore', where instead one considers pairs \(E\gets E\otimes I\), \(E\to O\)) and of \(F\)-automata, where \(F\) is an endofunctor of \(\mathcal{K}\) (possibly with no monoidal structure). 'Mealy' automata are known as 'deterministic automata' in today's parlance, but since we need to distinguish between the two kinds of diagram from time to time, we stick to the older terminology. Footnote 1: An almost identical introductory short section appears in [2], of which the present note is a parallel submission –although related, the two manuscripts are essentially independent, and the purpose of this repetition is the desire for self-containment. * Then (Theorem 3.6), to establish the presence of co/limits of shape \(\mathcal{J}\) in categories of \(F\)-automata, under the two assumptions that \(F:\mathcal{K}\to\mathcal{K}\) is a left adjoint in an adjunction \(F\dashv R\) and that \(\mathcal{K}\) admits co/limits of shape \(\mathcal{J}\), we show that \(F\)-\(\mathsf{Mly}\) and \(F\)-\(\mathsf{Mre}\) admit them as well.
The results we get are not particularly surprising; we have not, however, been able to trace a reference addressing the co/completeness properties of \(F\text{-}\mathsf{Mly},F\text{-}\mathsf{Mre}\), nor an analogue for the 'behaviour as an adjunction' theorems expounded in [11, 12]; in the case \(F=\_\otimes I\) the co/completeness results follow from unwieldy ad-hoc arguments (cf. [5, Ch. 11]), whereas in Theorem 3.6 we provide a clean, synthetic way to derive both results from general principles, starting by describing \(F\text{-}\mathsf{Mly}\) and \(F\text{-}\mathsf{Mre}\) as suitable pullbacks in \(\mathsf{Cat}\), in Proposition 3.5. We provide a mechanisation of our main results using the proof assistant Agda and the library agda-categories: we will add a small Agda logo () next to the beginning of a definition or statement whenever it is accompanied by Agda code, pointing directly to the formalisation files. The full development is available at [https://github.com/iwilare/categorical-automata](https://github.com/iwilare/categorical-automata). ## 2 Automata and \(F\)-automata The only purpose of this short section is to fix notation; classical comprehensive references for this material are [1, 5]; in particular, [1, Ch. III] is entirely devoted to the study of what here are called \(F\)-Moore automata, possibly equipped with an 'initialization' morphism. ### Mealy and Moore automata For the entire subsection, we fix a monoidal category \((\mathcal{K},\otimes,1)\). [Mealy machine] () A Mealy machine _in \(\mathcal{K}\) of input object \(I\) and output object \(O\) consists of a triple \((E,d,s)\) where \(E\) is an object of \(\mathcal{K}\) and \(d,s\) are morphisms in a span_ \[E\xleftarrow{\;d\;}E\otimes I\xrightarrow{\;s\;}O \tag{2.1}\] [The category of Mealy machines] Mealy machines of fixed input and output \(I,O\) form a category, if we define a _morphism of Mealy machines_ \(f:(E,d,s)\to(T,d^{\prime},s^{\prime})\) as a morphism \(f:E\to T\) in \(\mathcal{K}\) such that \[f\circ d=d^{\prime}\circ(f\otimes I)\qquad\text{and}\qquad s=s^{\prime}\circ(f\otimes I). \tag{2.2}\] Clearly, composition and identities are performed in \(\mathcal{K}\). The category of Mealy machines of input and output \(I,O\) is denoted as \(\mathsf{Mly}_{\mathcal{K}}(I,O)\). [Moore machine] () A Moore machine _in \(\mathcal{K}\) of input object \(I\) and output object \(O\) is a diagram_ \[E\xleftarrow{\;d\;}E\otimes I,\qquad E\xrightarrow{\;s\;}O. \tag{2.3}\] Remark 2.4 (The category of Moore machines).: Moore machines of fixed input and output \(I,O\) form a category, if we define a _morphism of Moore machines_ \(f:(E,d,s)\to(T,d^{\prime},s^{\prime})\) as a morphism \(f:E\to T\) in \(\mathcal{K}\) such that \[f\circ d=d^{\prime}\circ(f\otimes I)\qquad\text{and}\qquad s=s^{\prime}\circ f. \tag{2.4}\] ### \(F\)-Mealy and \(F\)-Moore automata The notion of \(F\)_-machine_ arises by replacing the tensor \(E\otimes I\) in (2.1) with the action \(FE\) of a generic endofunctor \(F:\mathcal{K}\to\mathcal{K}\) on an object \(E\in\mathcal{K}\), in such a way that a Mealy/Moore machine is just a \((\_\otimes I)\)-Mealy/Moore machine; cf. [6, ff. 2.1.3\({}^{\circ}\)], or Chapter III of the monograph [1]. This natural idea acts as an abstraction for the structure that allows the machine to advance to the 'next' state and give an output, and it leads to the following two definitions (where we do _not_ require \(\mathcal{K}\) to be monoidal). [F-Mealy machine] () Let \(O\in\mathcal{K}\) be a fixed object. 
The objects of the category \(F\mbox{-}\mathsf{Mly}_{/O}\) (or simply \(F\mbox{-}\mathsf{Mly}\) when the object \(O\) is implicitly clear) of \(F\)-Mealy machines of output \(O\) are the triples \((E,d,s)\) where \(E\in\mathcal{K}\) is an object and \(s,d\) are morphisms in \(\mathcal{K}\) that fit in the span \[E\xleftarrow{\;d\;}FE\xrightarrow{\;s\;}O \tag{2.5}\] A morphism of \(F\)-Mealy machines \(f:(E,d,s)\to(T,d^{\prime},s^{\prime})\) consists of a morphism \(f:E\to T\) in \(\mathcal{K}\) such that \[f\circ d=d^{\prime}\circ Ff\qquad\text{and}\qquad s=s^{\prime}\circ Ff.\] [F-Moore machine] The category \(F\mbox{-}\mathsf{Mre}_{/O}\) (or simply \(F\mbox{-}\mathsf{Mre}\)) of \(F\)-Moore machines of output \(O\) is defined analogously: its objects are the triples \((E,d,s)\) where \(d:FE\to E\) and \(s:E\to O\), and a morphism \(f:(E,d,s)\to(T,d^{\prime},s^{\prime})\) is a morphism \(f:E\to T\) in \(\mathcal{K}\) such that \(f\circ d=d^{\prime}\circ Ff\) and \(s=s^{\prime}\circ f\). ## 3 Completeness and behaviour in \(F\)-Mly and \(F\)-Mre The first result that we want to generalise to \(F\)-machines is the well-known fact that, considering for example Mealy machines, if \((\mathcal{K},\otimes)\) has countable coproducts preserved by each \(I\otimes-\), then the span (2.1) can be 'extended' to a span \[E\xleftarrow{\;d^{+}\;}E\otimes I^{+}\xrightarrow{\;s^{+}\;}O \tag{3.1}\] where \(d^{+},s^{+}\) can be defined inductively from components \(d_{n},s_{n}:E\otimes I^{\otimes n}\to E,O\). Under the same assumptions, each Moore machine (2.3) can be 'extended' to a span \[E\xleftarrow{\;d^{*}\;}E\otimes I^{*}\xrightarrow{\;s^{*}\;}O \tag{3.2}\] where \(d^{*},s^{*}\) can be defined inductively from components \(d_{n},s_{n}:E\otimes I^{\otimes n}\to E,O\).2 Footnote 2: Assuming countable coproducts in \(\mathcal{K}\), the free _monoid_ \(I^{*}\) on \(I\) is the object \(\sum_{n\geq 0}I^{n}\); the free _semigroup_ \(I^{+}\) on \(I\) is the object \(\sum_{n\geq 1}I^{n}\); clearly, if \(1\) is the monoidal unit of \(\otimes\), \(I^{*}\cong 1+I^{+}\), and the two objects satisfy ‘recurrence equations’ \(I^{+}\cong I\otimes I^{*}\) and \(I^{*}\cong 1+I\otimes I^{*}\). In the case of Mealy machines, the object \(I^{+}\) corresponds to the _free semigroup_ on the input object \(I\), whereas for Moore machines one needs to consider the _free monoid_ \(I^{*}\): this mirrors the intuition that in the latter case an output can be provided without any previous input. Note that the extension of a Moore machine gives rise to a span of morphisms from the same object \(E\otimes I^{*}\), i.e., a Mealy machine that accepts the empty string as input. 
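To make the 'extension to words' concrete, here is a minimal Python sketch of the situation in \(\mathcal{K}=\mathsf{Set}\) with \(\otimes=\times\) (an illustration on our part, independent of the paper's Agda formalisation): a Mealy machine is a pair of functions \(d,s\), and the extended maps \(d^{+},s^{+}\) on the free semigroup \(I^{+}\) are obtained by iterating them over a non-empty input word.

```python
# Illustrative sketch only: Mealy machines in Set, with d : E x I -> E and s : E x I -> O.
# The names (Mealy, extend, parity) are ours, not taken from the paper's Agda development.
from dataclasses import dataclass
from typing import Any, Callable, Sequence, Tuple

@dataclass(frozen=True)
class Mealy:
    d: Callable[[Any, Any], Any]   # next-state map  d : E x I -> E
    s: Callable[[Any, Any], Any]   # output map      s : E x I -> O

def extend(m: Mealy, e: Any, word: Sequence[Any]) -> Tuple[Any, Any]:
    """Inductive extension (d^+, s^+) over a non-empty word in I^+.

    Returns (d^+(e, word), s^+(e, word)): the state reached and the last output produced.
    """
    assert word, "the free semigroup I^+ only contains non-empty words"
    out = None
    for i in word:
        out = m.s(e, i)            # s^+(e, w.i) = s(d^+(e, w), i)
        e = m.d(e, i)              # d^+(e, w.i) = d(d^+(e, w), i)
    return e, out

# Tiny example: a parity machine over bits whose output is the running parity.
parity = Mealy(d=lambda e, i: (e + i) % 2, s=lambda e, i: (e + i) % 2)
assert extend(parity, 0, [1, 1, 1]) == (1, 1)
```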
A similar construction can be carried over in the category of \(F\)-Mealy machines, using the \(F\)-algebra map \(d:FE\to E\) to generate iterates \(E\stackrel{{ d_{n}}}{{\longleftarrow}}F^{n}E\stackrel{{ s_{n}}}{{\longrightarrow}}O\): From now on, let \(F\) be an endofunctor of a category \(\mathcal{K}\) that has a right adjoint \(R\). Examples of such arise naturally from the situation where a triple of adjoints \(L\dashrightarrow G\dashrightarrow R\) is given, since we obtain adjunctions \(LG\dashrightarrow RG\) and \(GL\dashrightarrow GR\): * every homomorphism of rings \(f:A\to B\) induces a triple of adjoint functor between the categories of \(A\) and \(B\)-modules; * similarly, every homomorphism of monoids \(f:M\to N\) induces a 'base change' functor \(f^{*}:N\text{-}\mathsf{Set}\to M\text{-}\mathsf{Set}\); * every essential geometric morphism between topoi \(\mathcal{E}\leftrightarrows\mathcal{F}\), i.e. every triple of adjoints \(f_{!}\dashrightarrow f^{*}\dashrightarrow f_{*}\); * every topological functor \(V:\mathcal{E}\rightarrow\mathcal{B}\)[3, Prop. 7.3.7] with its fully faithful left and right adjoints \(L\dashrightarrow V\dashrightarrow R\) (this gives rise to a comodality \(LV\), left adjoint to a modality \(RV\)). (Dynamics of an \(F\)-machine).(\(\mathcal{G}\)) For any given \(F\)-Mealy machine (3.3) we define the family of morphisms \(s_{n}:F^{n}E\to O\) inductively, as the composites (3.4) Under our assumption that \(F\) has a right adjoint \(R\), this is equivalent to the datum of their mates \(\bar{s}_{n}:E\to R^{n}O\) for \(n\geq 1\) under the adjunction \(F^{n}\xrightarrow[\overline{\eta_{n}}]{}R^{n}\) obtained by composition, iterating the structure in \(F\xrightarrow[\overline{\eta}]{}R\). Such a \(s_{n}\) is called the \(n\)th _skip map_. Observe that the datum of the family of all \(n\)th skip maps (\(s_{n}\mid n\in\mathbb{N}_{\geq 1}\)) is obviously equivalent to a single map of type \(\bar{s}_{\infty}:E\to\prod_{n\geq 1}R^{n}O\). Reasoning in a similar fashion, one can define extensions \(s:E\to O\), \(s\circ d:FE\to E\to O\), \(s\circ d\circ Fd:FFE\to O\), etc. for an \(F\)-Moore machine. This is the first step towards the following statement, which will be substantiated and expanded in Theorem 3.6 below: The category \(F\)-Mre of Definition 2.6 has a terminal object \(\mathfrak{o}=(O_{\infty},d_{\infty},s_{\infty})\) with carrier \(O_{\infty}=\prod_{n\geq 0}R^{n}O\); similarly, the category \(F\)-Mly has a terminal object with carrier \(O_{\infty}=\prod_{n\geq 1}R^{n}O\). (Note the shift in the index of the product, motivated by the fact that the skip maps for a Moore machine are indexed on \(\mathbb{N}_{\geq 0}\), and on \(\mathbb{N}_{\geq 1}\) for Mealy.) The'modern' way to determine the presence of a terminal object in categories of automata relies on the elegant coalgebraic methods in [7]; the interest in such completeness theorems can be motivated essentially in two ways: * the terminal object \(O_{\infty}\) in a category of machines tends to be 'big and complex', as a consequence of the fact that it is often a terminal coalgebra for a suitably defined endofunctor of \(\mathcal{K}\), so Adamek's theorem presents it as inverse limit of an op-chain. 
* Coalgebra theory allows us to define a _bisimulation_ relation between states of different \(F\)-algebras (or, what is equivalent in our blanket assumptions, \(R\)-coalgebras), which in the case of standard Mealy/Moore machines (i.e., when \(F=\_\otimes I\)) recovers the notion of bisimulation expounded in [7, Ch. 3]. The following universal characterisation of both categories as pullbacks in \(\mathsf{Cat}\) allows us to reduce the whole problem of completeness to the computation of a terminal object, and thus prove Theorem 3.6. **Proposition 3.5**.: 1. the category \(F\text{-}\mathsf{Mly}\) of \(F\)-Mealy machines given in Definition 2.5 fits in a pullback square \[\begin{array}{ccc}F\text{-}\mathsf{Mly}&\longrightarrow&\mathsf{Alg}(F)\\ \downarrow&&\downarrow\\ (F_{/O})&\longrightarrow&\mathcal{K}\end{array} \tag{3.5}\] where the right vertical functor is the forgetful functor from \(F\)-algebras, the lower horizontal functor sends an object \(FE\to O\) of the comma category \((F_{/O})\) to its carrier \(E\), and the left vertical functor is the \(U^{\prime}\) appearing in the proof below; 2. the category \(F\text{-}\mathsf{Mre}\) of \(F\)-Moore machines given in Definition 2.6 fits in an analogous pullback square \[\begin{array}{ccc}F\text{-}\mathsf{Mre}&\longrightarrow&\mathsf{Alg}(F)\\ \downarrow&&\downarrow\\ \mathcal{K}_{/O}&\longrightarrow&\mathcal{K}\end{array} \tag{3.6}\] with the comma category \((F_{/O})\) replaced by the slice category \(\mathcal{K}_{/O}\). **Theorem 3.6** (Limits and colimits of \(\boldsymbol{F}\)-machines).: * Let \(\mathcal{K}\) be a category admitting colimits of 
shape \(\mathcal{J}\); then, \(F\)-\(\mathsf{Mre}\) and \(F\)-\(\mathsf{Mly}\) have colimits of shape \(\mathcal{J}\), and they are computed as in \(\mathcal{K}\); * Equalizers (and more generally, all connected limits) are computed in \(F\)-\(\mathsf{Mre}\) and \(F\)-\(\mathsf{Mly}\) as they are computed in \(\mathcal{K}\); if \(\mathcal{K}\) has countable products and pullbacks, \(F\)-\(\mathsf{Mre}\) and \(F\)-\(\mathsf{Mly}\) also have products of any finite cardinality (in particular, a terminal object). Proof of Theorem 3.6.: It is worth unraveling the content of [10, V.6, Ex. 3], from which the claim gets enormously simplified: the theorem asserts that in any strict pullback square of categories \[\begin{array}{ccc}\mathcal{A}\times_{\mathcal{C}}\mathcal{B}&\xrightarrow{\;V^{\prime}\;}&\mathcal{A}\\ {\scriptstyle U^{\prime}}\downarrow&&\downarrow{\scriptstyle U}\\ \mathcal{B}&\xrightarrow{\;V\;}&\mathcal{C}\end{array} \tag{3.7}\] if \(U\) creates, and \(V\) preserves, limits of a given shape \(\mathcal{J}\), then \(U^{\prime}\) creates limits of shape \(\mathcal{J}\). Thus, thanks to Proposition 3.5, all connected limits (in particular, equalizers) are created in the categories of \(F\)-Mealy and \(F\)-Moore machines by the functors \(U^{\prime}:F\text{-}\mathsf{Mly}\to(F_{/O})\) and are thus computed as in \((F_{/O})\), i.e. as in \(\mathcal{K}\); this result is discussed at length in [5, Ch. 4] in the case of \((\_\otimes I)\)-machines, i.e. classical Mealy machines, to prove the following: * assuming \(\mathcal{K}\) is cocomplete, all colimits are computed in \(F\)-Mly as they are computed in the base \(\mathcal{K}\); * assuming \(\mathcal{K}\) has connected limits, they are computed in \(F\)-Mly as they are computed in the base \(\mathcal{K}\); Discrete limits have to be treated with additional care: for classical Moore machines (cf. Definition 2.3) the terminal object is the terminal coalgebra of the functor \(A\mapsto A^{I}\times O\) (cf. [7, 2.3.5]): a swift application of Adamek's theorem yields the object \([I^{*},O]\); for classical Mealy machines (cf. Definition 2.1) the terminal object is the terminal coalgebra for \(A\mapsto[I,O]\times[I,A]\); similarly, Adamek's theorem yields \([I^{+},O]\). Adamek's theorem then yields the terminal object of \(F\)-Mre as the terminal coalgebra for the functor \(A\mapsto O\times RA\), which is the \(O_{\infty,0}\) of Claim 4, and the terminal object of \(F\)-Mly, the terminal coalgebra for \(A\mapsto RO\times RA\), as \(O_{\infty,1}\). All discrete limits can be computed when pullbacks and a terminal object have been found, but we prefer to offer a more direct argument to build binary products. Recall from Construction 3.2 the definition of the dynamics map associated to an \(F\)-machine \(\mathfrak{c}=(E,d,s)\). Now, our claim is two-fold: * the object \(O_{\infty}:=\prod_{n\geq 1}R^{n}O\) in \(\mathcal{K}\) carries a canonical structure of an \(F\)-machine \(\mathfrak{o}=(O_{\infty},d_{\infty},s_{\infty})\) such that \(\mathfrak{o}\) is terminal in \(F\)-Mly; * given objects \((E,d_{E},s_{E}),(T,d_{T},s_{T})\) of \(F\)-Mly, the pullback (3.8) is the carrier of an \(F\)-machine structure that exhibits \(\mathfrak{p}=(P_{\infty},d_{P},s_{P})\) as the product of \(\mathfrak{c}=(E,d_{E},s_{E})\) and \(\mathfrak{f}=(T,d_{T},s_{T})\) in \(F\)-Mly. In this way, the category \(F\)-\(\mathsf{Mly}\) comes equipped with all finite products; it is easy to prove a similar statement when an infinite number of objects \((\mathfrak{e}_{i}\mid i\in I)\) is given, by using wide pullbacks whenever they exist in the base category. 
Observe that the object \(P_{\infty}\) can be equivalently characterized as the single wide pullback obtained from the pullback \(P_{n}\) of \(\bar{s}_{E,n}\) and \(\bar{s}_{T,n}\) (or rather, an intersection, since each \(P_{n}\to E\times T\) obtained from the same pullback is a monomorphism): (3.9) Showing the universal property of \(P_{\infty}\) will be more convenient at different times in one or the other definition. In order to show our first claim in 1, we have to provide the \(F\)-machine structure on \(O_{\infty}\), exhibiting a span (3.10) on one side, \(s_{\infty}\) is the adjoint map of the projection \(\pi_{1}:O_{\infty}\to RO\) on the first factor; the other leg \(d_{\infty}\) is the adjoint map of the projection deleting the first factor, thanks to the identification \(RO_{\infty}\cong\prod_{n\geq 2}R^{n}O\); explicitly then, we are considering the following diagram: (3.11) To prove the first claim, let's consider a generic object \((E,d,s)\) of \(F\)-\(\mathsf{Mly}\), i.e. a span (3.12) and let's build a commutative diagram (3.13) for a unique morphism \(u:E\to O_{\infty}=\prod_{n\geq 1}R^{n}O\) that we take exactly equal to \(\bar{s}_{\infty}\). The argument that \(u\) makes diagram (3.13) commutative, and that it is unique with this property, is now a completely straightforward diagram chasing. Now let's turn to the proof that the tip of the pullback in (3.8) exhibits the product of \((E,d_{E},s_{E}),(T,d_{T},s_{T})\) in \(F\)-\(\mathsf{Mly}\); first, we build the structure morphisms \(s_{P},d_{P}\) as follows: * \(d_{P}\) is the dotted map obtained thanks to the universal property of \(P_{\infty}\) from the commutative diagram (3.14) * \(s_{P}:FP_{\infty}\to O\) is obtained as the adjoint map of the diagonal map \(P_{\infty}\to O_{\infty}\) in (3.8) composed with the projection \(\pi_{1}:O_{\infty}\to RO\). Let's now assess the universal property of the object \[P_{\infty}\xleftarrow{d_{P}}FP_{\infty}\xleftarrow{s_{P}}O \tag{3.15}\] We are given an object \(\mathfrak{z}=(Z,d_{Z},s_{Z})\) of \(F\)-Mly and a diagram (3.16) commutative in all its parts. To show that there exists a unique arrow \([u,v]:Z\to P_{\infty}\) (3.17) we can argue as follows, using the joint injectivity of the projection maps \(\pi_{n}:O_{\infty}\to R^{n}O\): first, we show that each square (3.18) is commutative, and in particular that its diagonal is equal to the \(n\)th skip map of \(Z\); this can be done by induction, showing that the composition of both edges of the square with the canonical projection \(O_{\infty}\to R^{n}O\) equals \(\bar{s}_{n,Z}\) for all \(n\geq 1\). From this, we deduce that there exist maps (3.19) (cf. (3.9) for the definition of \(P_{n}\)) for every \(n\geq 1\), But now, the very way in which the \(z_{n}\)s are defined yields that each such map coincides with \(\langle u,v\rangle:Z\to E\times T\), thus \(Z\) must factor through \(P_{\infty}\). Now we have to exhibit the commutativity of diagrams (3.20) and this follows from a straightforward diagram chasing. This concludes the proof. 
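As a sanity check of the two claims just proved, the following sketch instantiates them in \(\mathsf{Set}\) for \(F=\_\times I\), where \(RO=O^{I}\) and \(O_{\infty}\cong[I^{+},O]\): the unique map \(u_{E}\) sends a state to its behaviour on non-empty words, and the carrier \(P_{\infty}\) of the binary product consists of pairs of states with equal behaviour. Behaviours are infinite objects, so the code truncates words at a chosen length; all names are ours and purely illustrative.

```python
# Hedged illustration of the terminal object and of binary products for Set-based Mealy machines.
from itertools import product as tuples

def run(d, s, e, word):
    """Last output of the machine (d, s) on a non-empty word, started in state e."""
    out = None
    for i in word:
        out, e = s(e, i), d(e, i)
    return out

def behaviour(d, s, e, alphabet, max_len):
    """Truncated behaviour u_E(e), an element of O_infty ~ [I^+, O] up to length max_len."""
    return {w: run(d, s, e, w)
            for n in range(1, max_len + 1)
            for w in tuples(alphabet, repeat=n)}

def product_carrier(m1, states1, m2, states2, alphabet, max_len=4):
    """Sketch of P_infty: pairs of states whose (truncated) behaviours coincide."""
    (d1, s1), (d2, s2) = m1, m2
    return [(e1, e2) for e1 in states1 for e2 in states2
            if behaviour(d1, s1, e1, alphabet, max_len)
            == behaviour(d2, s2, e2, alphabet, max_len)]

def product_machine(m1, m2):
    """Both components run in lockstep on P_infty; there the two output maps agree."""
    (d1, s1), (d2, s2) = m1, m2
    return (lambda p, i: (d1(p[0], i), d2(p[1], i)),   # d_P
            lambda p, i: s1(p[0], i))                  # s_P
```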
**Remark 3.7**.: _Phrased out explicitly, the statement that \(\mathfrak{o}=(O_{\infty},d_{\infty},s_{\infty})\) is a terminal object amounts to the fact that given any other \(F\)-Mealy machine \(\mathfrak{e}=(E,d,s)\), there is a unique \(u_{E}:E\to O_{\infty}\) with the property that_ \[u_{E}\circ d=d_{\infty}\circ Fu_{E}\qquad\text{and}\qquad s=s_{\infty}\circ Fu_{E}; \tag{3.21}\] _a similar statement holds for \(F\)-Moore automata._ ### Adjoints to behaviour functors In [11, 12] the author concentrates on building an adjunction between a category of machines and a category collecting the _behaviours_ of said machines. Call an endofunctor \(F:\mathcal{K}\to\mathcal{K}\) an _input process_ if the forgetful functor \(U:\mathsf{Alg}(F)\to\mathcal{K}\) has a left adjoint \(G\); in simple terms, an input process allows one to define free \(F\)-algebras.3 Footnote 3: Obviously, this is in stark contrast with the requirement that \(F\) has an adjoint, and the two requirements are independent: if \(F\) is a monad, it is always an input process, regardless of \(F\) admitting an adjoint on either side. In [11, 12] the author concentrates on proving the existence of an adjunction \[L:\mathsf{Beh}(F)\rightleftarrows\mathsf{Mach}(F) \tag{3.22}\] where \(\mathsf{Mach}(F)\) is the category obtained from the pullback in (3.23), \(\Delta\) is the diagonal functor, and \(\mathsf{Beh}(F)\) is a certain comma category on the free \(F\)-algebra functor \(G\). Phrased in this way, the statement is conceptual enough to carry over to \(F\)-Mealy and \(F\)-Moore machines (and by extension, to all settings where a category of automata can be presented through a strict 2-pullback in \(\mathsf{Cat}\) of well-behaved functors – a situation that, given (3.5), (3.6) and (3.23), arises quite frequently). **Theorem 3.8** ([7]).: _There exists a functor \(B:F\mbox{-}\mathsf{Mre}\to\mathsf{Alg}(F)_{/(O_{\infty},d_{\infty})}\), where the codomain is the slice category of \(F\)-algebras and the \(F\)-algebra \((O_{\infty},d_{\infty})\) is determined in Claim 4. The functor \(B\) has a left adjoint \(L\)._ Proof.: Recall that the functor \(B:F\mbox{-}\mathsf{Mre}_{O}\to\mathsf{Alg}(F)_{/O_{\infty}}\) is defined on objects and morphisms as in (3.24). A typical object of \(\mathsf{Alg}(F)_{/O_{\infty}}\) is a tuple \(((A,a),u)\) where \(a:FA\to A\) is an \(F\)-algebra with its structure map, and \(u:A\to O_{\infty}\) is an \(F\)-algebra homomorphism, i.e. a morphism \(u\) such that \(d_{\infty}\circ Fu=u\circ a\). A putative left adjoint for \(B\) realises a natural bijection \[F\text{-}\mathsf{Mre}_{O}\big{(}L((A,a),u),(E,d,s)\big{)}\cong\mathsf{Alg}(F)_{/O_{\infty}}\big{(}((A,a),u),B(E,d,s)\big{)} \tag{3.25}\] between the following two kinds of commutative diagrams: (3.26) There is a clear way to establish this correspondence. The functor \(B\) is defined as follows: * on objects \(\mathfrak{e}=(E,d,s)\) in \(F\text{-}\mathsf{Mre}\), as the correspondence sending \(\mathfrak{e}\) to the unique map \(u_{E}:E\to O_{\infty}\), which is an \(F\)-algebra homomorphism by the construction in (3.13); * on morphisms \(f:(E,d,s)\to(F,d^{\prime},s^{\prime})\) between \(F\)-Moore machines, \(B\) acts as the identity, ultimately as a consequence of the fact that the terminality of \(O_{\infty}\) yields at once that \(u_{F}\circ f=u_{E}\). 
The adjunction in Theorem 3.8 is actually part of a longer chain of adjoints obtained as follows: recall that every adjunction \(G:\mathcal{K}\rightleftarrows\mathcal{H}:U\) induces a 'local' adjunction \(\tilde{G}:\mathcal{K}_{/UA}\leftrightarrows\mathcal{H}_{/A}:\tilde{U}\) where \(\tilde{U}(FA,f:FA\to A)=Uf\). Then, if \(F\) is an input process, we get the chain of adjunctions displayed in (3.27).
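To illustrate the 'input process' condition with the simplest possible case (our example, for \(\mathcal{K}=\mathsf{Set}\) and \(F=\_\times I\)): the free \(F\)-algebra on a set \(A\) can be taken to be \(A\times I^{*}\), with structure map appending a letter to the stored word, and its universal property extends any function \(A\to E\) uniquely to an algebra map into any \(F\)-algebra \((E,d)\).

```python
# Sketch of the free (_ x I)-algebra on A in Set: carrier A x I*, structure map = append.
def free_structure(pair, letter):
    """F(A x I*) -> A x I*: append the new letter to the recorded word."""
    a, word = pair
    return (a, word + (letter,))

def extend_to_algebra_map(h, d):
    """Universal property: given h : A -> E and an algebra d : E x I -> E, return the
    unique algebra map h_star : A x I* -> E with h_star(a, ()) = h(a)."""
    def h_star(a, word=()):
        e = h(a)
        for i in word:
            e = d(e, i)
        return e
    return h_star

# Example with E = integers, d(e, i) = e + i, and A = {0} included via h = const 0:
h_star = extend_to_algebra_map(lambda a: 0, lambda e, i: e + i)
assert h_star(0, (1, 2, 3)) == 6
```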
2304.08995
The Simons Observatory: Beam characterization for the Small Aperture Telescopes
We use time-domain simulations of Jupiter observations to test and develop a beam reconstruction pipeline for the Simons Observatory Small Aperture Telescopes. The method relies on a map maker that estimates and subtracts correlated atmospheric noise and a beam fitting code designed to compensate for the bias caused by the map maker. We test our reconstruction performance for four different frequency bands against various algorithmic parameters, atmospheric conditions and input beams. We additionally show the reconstruction quality as function of the number of available observations and investigate how different calibration strategies affect the beam uncertainty. For all of the cases considered, we find good agreement between the fitted results and the input beam model within a ~1.5% error for a multipole range l = 30 - 700 and an ~0.5% error for a multipole range l = 50 - 200. We conclude by using a harmonic-domain component separation algorithm to verify that the beam reconstruction errors and biases observed in our analysis do not significantly bias the Simons Observatory r-measurement.
Nadia Dachlythra, Adriaan J. Duivenvoorden, Jon E. Gudmundsson, Matthew Hasselfield, Gabriele Coppi, Alexandre E. Adler, David Alonso, Susanna Azzoni, Grace E. Chesmore, Giulio Fabbian, Ken Ganga, Remington G. Gerras, Andrew H. Jaffe, Bradley R. Johnson, Brian Keating, Reijo Keskitalo, Theodore S. Kisner, Nicoletta Krachmalnicoff, Marius Lungu, Frederick Matsuda, Sigurd Naess, Lyman Page, Roberto Puddu, Giuseppe Puglisi, Sara M. Simon, Grant Teply, Tran Tsan, Edward J. Wollack, Kevin Wolz, Zhilei Xu
2023-04-18T13:58:11Z
http://arxiv.org/abs/2304.08995v2
# The Simons Observatory: Beam characterization for the Small Aperture Telescopes ###### Abstract We use time-domain simulations of Jupiter observations to test and develop a beam reconstruction pipeline for the Simons Observatory Small Aperture Telescopes. The method relies on a map maker that estimates and subtracts correlated atmospheric noise and a beam fitting code designed to compensate for the bias caused by the map maker. We test our reconstruction performance for four different frequency bands against various algorithmic parameters, atmospheric conditions and input beams. We additionally show the reconstruction quality as function of the number of available observations and investigate how different calibration strategies affect the beam uncertainty. For all of the cases considered, we find good agreement between the fitted results and the input beam model within a \(\sim\) 1.5% error for a multipole range \(\ell\) = 30 - 700.
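For orientation only (this is not the Simons Observatory pipeline, and the beam widths below are placeholder numbers), the kind of figure of merit quoted in the abstract can be illustrated by comparing two Gaussian beam window functions \(B_{\ell}=\exp[-\ell(\ell+1)\sigma^{2}/2]\) and averaging the fractional difference over a multipole range.

```python
import numpy as np

def gaussian_beam_window(ell, fwhm_arcmin):
    """Gaussian beam transfer function B_ell for a beam of the given FWHM (arcmin)."""
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * ell * (ell + 1.0) * sigma**2)

def mean_fractional_error(b_fit, b_true, ell, ell_min, ell_max):
    """Mean of |B_fit / B_true - 1| over the multipole range [ell_min, ell_max]."""
    sel = (ell >= ell_min) & (ell <= ell_max)
    return np.mean(np.abs(b_fit[sel] / b_true[sel] - 1.0))

ell = np.arange(2, 1000)
b_true = gaussian_beam_window(ell, fwhm_arcmin=27.0)   # hypothetical input beam
b_fit = gaussian_beam_window(ell, fwhm_arcmin=27.2)    # hypothetical reconstructed beam
print(mean_fractional_error(b_fit, b_true, ell, ell_min=30, ell_max=700))
```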
2306.00568
Metasurface-based hybrid optical cavities for chiral sensing
Quantum metasurfaces, i.e., two-dimensional subwavelength arrays of quantum emitters, can be employed as mirrors towards the design of hybrid cavities, where the optical response is given by the interplay of a cavity-confined field and the surface modes supported by the arrays. We show that, under external magnetic field control, stacked layers of quantum metasurfaces can serve as helicity-preserving cavities. These structures exhibit ultranarrow resonances and can enhance the intensity of the incoming field by orders of magnitude, while simultaneously preserving the handedness of the field circulating inside the resonator, as opposed to conventional cavities. The rapid phase shift in the cavity transmission around the resonance can be exploited for the sensitive detection of chiral scatterers passing through the cavity. We discuss possible applications of these resonators as sensors for the discrimination of chiral molecules.
Nico S. Bassler, Andrea Aiello, Kai P. Schmidt, Claudiu Genes, Michael Reitz
2023-06-01T11:30:17Z
http://arxiv.org/abs/2306.00568v1
# Metasurface-based hybrid optical cavities for chiral sensing ###### Abstract Quantum metasurfaces, i.e., two-dimensional subwavelength arrays of quantum emitters, can be employed as mirrors towards the design of hybrid cavities, where the optical response is given by the interplay of a cavity-confined field and the surface modes supported by the arrays. We show that, under external magnetic field control, stacked layers of quantum metasurfaces can serve as helicity-preserving cavities. These structures exhibit ultranarrow resonances and can enhance the intensity of the incoming field by orders of magnitude, while simultaneously preserving the handedness of the field circulating inside the resonator, as opposed to conventional cavities. The rapid phase shift in the cavity transmission around the resonance can be exploited for the sensitive detection of chiral scatterers passing through the cavity. We discuss possible applications of these resonators as sensors for the discrimination of chiral molecules. pacs: 42.50.Nn, 42.50.Pq, 42.25.Ja Conventional isotropic (e.g., metallic) mirrors reverse the handedness (or helicity) of circularly polarized light by turning right-circularly polarized (RCP) light into left-circularly polarized (LCP) light and vice versa [1; 2]. This makes it impossible to realize helicity-preserving (HP) cavities or even chiral cavities (i.e., cavities only supporting light modes of a certain handedness), with conventional mirrors [3; 4]. There is however a great current scientific and technological interest in the design of HP mirrors and resonators [5; 6; 7; 8], in particular for the enhancement of so-called dichroic effects. Dichroism refers to the (typically weak) differential absorption of circularly polarized light by chiral scatterers such as molecular enantiomers [9]. Enhancing dichroic effects with optical resonators can result in better sensitivities for the discrimination of molecular enantiomers [10; 11; 12; 13], a desired task for biochemical applications. In the strong light-matter coupling regime, chiral cavities have furthermore been proposed to create novel light-dressed states of matter by breaking the time-reversal symmetry in materials, leading to the emerging field of chiral polaritonics [14; 15]. In this work we show that HP cavities can be implemented with quantum metasurfaces employed as mirrors. These structures have emerged as platforms for achieving strong and highly directional light-matter interactions and can most prominently be realized with cold atoms trapped in optical lattices [16]. They can exhibit close to perfect reflection of incoming light [16; 17; 18; 19; 20; 21] and have numerous other applications e.g., as platforms for topological quantum optics [22; 23; 24], nonlinear quantum optics [25; 26; 27; 28; 29; 30] or quantum information processing [31; 32; 33; 34; 35; 36]. The main ingredient of our approach is to manipulate the polarization of the incoming light field via the orientation of the effective two-level systems that make up the metasurfaces, which can for instance be tuned via an external magnetic field. More generally, this work falls within the scope of _hybrid cavities_, i.e., the design of optical resonators going beyond the simple textbook picture of a single electromagnetic mode confined between two non-reactive mirrors. 
Instead, strongly dispersive optical elements such as photonic crystals or plasmonic metasurfaces are used as reflectors [37; 38; 39; 40; 41] with the aim to surpass the performance of standard cavities, implying a highly non-Markovian behavior of the cavity as characterized by non-Lorentzian, typically Fano-type lineshapes [42]. **HP mirror** - Let us present an implementation procedure for a HP mirror using a stacked system of quantum metasurfaces. We start by introducing the formalism for a single metasurface [43]. To this end, we consider a 2D quasi-infinite quantum emitter array where the emitters are situated in the \(xy\) plane at positions \(\mathbf{r}_{j}\). For simplicity, one may imagine a square lattice, however most of the results derived in the following are equally valid for other Bravais lattices and can furthermore also be extended to non-Bravais lattices [23; 43]. The layer is comprised of \(\mathcal{N}\) emitters with internal electronic structure described by a \(J=0\to J=1\) transition at transition frequency \(\omega_{0}\). In the following, we will work in the Cartesian polarization basis. The transition dipole operator for each emitter can be written as \(\mathbf{d}=\sum_{\nu}\mathbf{d}_{\nu}\sigma_{\nu}+\text{h.c.}\) with \(\mathbf{d}_{\nu}=\left\langle g\right|\mathbf{d}\left|\nu\right\rangle\) and \(\sigma_{\nu}=\left|g\right\rangle\left\langle\nu\right|\) is the corresponding lowering operator for each electronic transition (\(\nu=x,y,z\)). In addition, we consider a laser drive entering from the left in the form of a plane wave with positive-frequency amplitude \(\mathbf{E}_{\text{in}}^{(+)}=(E_{\text{in},x},E_{\text{in},y},0)^{\top}\) and laser frequency \(\omega_{l}=2\pi c/\lambda=ck_{l}\) where \(\lambda\) and \(k_{l}\) are the laser wavelength and wavenumber, respectively. In a frame rotating at the laser frequency, the Hamiltonian describing the dynamics of the emitter array is given by the sum of the free evolution and the dipole-dipole interaction (\(\hbar=1\)) \[\mathcal{H}_{0}+\mathcal{H}_{\text{d-d}}=-\Delta\sum_{j,\nu}\sigma^{\dagger}_{j, \nu}\sigma_{j,\nu}+\sum_{j,j^{\prime},\nu,\nu^{\prime}}\Omega^{\nu\nu^{\prime \prime}}_{jj^{\prime}}\sigma^{\dagger}_{j,\nu}\sigma_{j^{\prime},\nu^{\prime}}, \tag{1}\] with the laser detuning \(\Delta=\omega_{l}-\omega_{0}\) and \(\sigma_{j,\nu}\) is the lowering operator for the \(\nu\)-transition within a particular emitter \(j\). Assuming normally incident illumination, the laser drive adds as \(\mathcal{H}_{l}=\sum_{j,\nu}(\eta_{\nu}\sigma^{\dagger}_{j,\nu}+\text{h.c.})\) with Rabi frequencies \(\eta_{\nu}=d_{\nu}E^{(+)}_{\text{in},\nu}\) and \(\eta_{z}=0\). In addition to the coherent processes, the collective loss of excitations due to spontaneous emission is described by the Lindblad term \[\mathcal{L}[\rho] =\!\!\sum_{j,j^{\prime},\nu,\nu^{\prime}}\Gamma^{\nu\nu^{\prime}}_ {jj^{\prime}}\left[\sigma_{j,\nu}\rho\sigma^{\dagger}_{j^{\prime},\nu^{\prime }}\!-\!\frac{1}{2}\left\{\sigma^{\dagger}_{j,\nu}\sigma_{j^{\prime},\nu^{ \prime}},\rho\right\}\right], \tag{2}\] where the last term denotes an anticommutator and the diagonal elements describe the independent spontaneous emission of the emitters \(\Gamma^{\nu\nu^{\prime}}_{jj}=\Gamma_{0}\delta_{\nu\nu^{\prime}}\) with \(\Gamma_{0}=\omega_{0}^{3}d^{2}/(3\pi\epsilon_{0}c^{3})\) (we assume the dipole moments to be identical in the following \(d_{\nu}\equiv d\)). 
The rates \(\Omega^{\nu\nu^{\prime}}_{jj^{\prime}}\), \(\Gamma^{\nu\nu^{\prime}}_{jj^{\prime}}\) describe coherent/incoherent scattering of photons between emitters \(j\) and \(j^{\prime}\) and between transitions \(\nu\) and \(\nu^{\prime}\) and can be derived as real and imaginary parts of the photonic Green's tensor (see App. A) as \[\Omega^{\nu\nu^{\prime}}_{jj^{\prime}}-\mathrm{i}\frac{\Gamma^{\nu\nu^{\prime}}_{jj^{\prime}}}{2}=-\mu_{0}\omega_{0}^{2}\,\mathbf{d}_{\nu}^{*}\cdot\mathbf{G}(\mathbf{r}_{jj^{\prime}})\cdot\mathbf{d}_{\nu^{\prime}}, \tag{3}\] expressed in terms of the vacuum permeability \(\mu_{0}\) and depending on the interparticle separation \(\mathbf{r}_{jj^{\prime}}=\mathbf{r}_{j^{\prime}}-\mathbf{r}_{j}\). The Green's tensor is defined such that the real part of the self-interaction at \(j=j^{\prime}\) vanishes. From the steady-state solution of the quantum master equation \(\dot{\rho}=\mathrm{i}[\rho,\mathcal{H}_{0}+\mathcal{H}_{\text{d-d}}+\mathcal{H}_{l}]+\mathcal{L}[\rho]\), the dipole amplitudes and thereby the transmitted and reflected fields can be computed (see App. A). The transmission matrix of the metasurface connecting the polarization components of the input field to the outgoing field expresses as [43] \[\boldsymbol{\mathcal{T}}_{m}=\mathds{1}+\mathrm{i}\frac{\widetilde{\Gamma}(0)}{2d^{2}}\boldsymbol{\alpha}_{\text{red}}, \tag{4}\] where \(\widetilde{\Gamma}(0)\) is the effective decay rate at zero quasi-momentum and \(\boldsymbol{\alpha}_{\text{red}}\) is the 2D polarizability tensor of the metasurface, relating the induced dipole moment to the incoming electric field. However, in the limit of large external magnetic fields \(\mu|\mathbf{B}|\gg\widetilde{\Gamma}(0)\) (magnetic moment \(\mu\)), all dipole transitions orthogonal to the magnetic field direction become very off-resonant and one may focus on the polarization component in the direction of the magnetic field, thereby reducing the description to an effective two-level model [43]. Figure 1: _Helicity-preserving metasurface cavity._ (a) A HP quantum metasurface cavity of length \(\ell\approx n\lambda/2\) can be constructed with composite mirrors of orthogonal dipole orientation (e.g., along \(x\) and \(y\)), with an appropriate relative phase separation of \(\phi_{m}=k_{l}\ell_{m}=\pi/2+2\pi n_{m}\). (b), (c) Absolute value squared of RS vectors \(\mathbf{G}_{\pm}(\mathbf{R})\) for a cavity length \(\ell=12.505\lambda\), \(\ell_{m}=5\lambda/4\), for square lattices with lattice spacing \(a=0.8\lambda\) illuminated by RCP light with input polarization vector \(\mathbf{E}^{(+)}_{\text{in}}=(1,\mathrm{i},0)^{\top}/\sqrt{2}\) close to the cavity resonance. The plots are normalized to the input intensity \(I_{\text{in}}=|\mathbf{E}^{(+)}_{\text{in}}|^{2}\). The metasurfaces are defined with a finite curvature radius, leading to Gaussian confinement of the mode (beam waist \(w_{0}=8\lambda\)). (d) Cavity transmission \(|t_{c}|^{2}\) as a function of the laser detuning \(\Delta\) for different cavity lengths \(\ell\). (e) Chiral sensing with HP metasurface cavities: Ideal chiral scatterers passing through a cavity with the same handedness yield a phase shift in the cavity transmission output which can for instance be measured by homodyne detection (LO: local oscillator) and leads to a phase detection sequence as schematically illustrated in the diagram. 
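For readers who want to evaluate Eq. (3) explicitly, the short sketch below computes the pair rates \(\Omega_{jj^{\prime}}\), \(\Gamma_{jj^{\prime}}\) for two \(x\)-oriented dipoles from the standard free-space dyadic Green's function, with the overall prefactor fixed by the normalisation \(\Gamma_{jj}=\Gamma_{0}\); this is our own numerical illustration (summing such terms over the lattice would give \(\widetilde{\Omega}(0)\) and \(\widetilde{\Gamma}(0)\)), not the authors' code.

```python
import numpy as np

GAMMA0 = 1.0                      # single-emitter linewidth, used as the unit of rates
LAMBDA = 1.0                      # transition wavelength, used as the unit of length
K = 2.0 * np.pi / LAMBDA

def green_xx(r_vec):
    """xx component of the free-space dyadic Green's tensor at separation r_vec."""
    r = np.linalg.norm(r_vec)
    kr = K * r
    rx = r_vec[0] / r
    pref = np.exp(1j * kr) / (4.0 * np.pi * r)
    return pref * ((1.0 + 1j / kr - 1.0 / kr**2)
                   + (-1.0 - 3j / kr + 3.0 / kr**2) * rx**2)

def pair_rates(r_vec):
    """Coherent shift Omega_12 and collective decay Gamma_12 for two x-oriented dipoles,
    in units of Gamma0, following the structure of Eq. (3)."""
    g = -(3.0 * np.pi * GAMMA0 / K) * green_xx(r_vec)   # = Omega_12 - i*Gamma_12/2
    return g.real, -2.0 * g.imag

# Two emitters one lattice constant a = 0.8*lambda apart, along y:
omega12, gamma12 = pair_rates(np.array([0.0, 0.8 * LAMBDA, 0.0]))
print(omega12, gamma12)
```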
The transmission amplitude of the metasurface for a single component \(E_{\mathrm{in},\nu}\) is then simply given by [17; 18] \[t_{m}=1+\frac{\mathrm{i}\widetilde{\Gamma}(0)/2}{\widetilde{\Omega}(0)-\Delta- \mathrm{i}\widetilde{\Gamma}(0)/2}, \tag{5}\] where \(\widetilde{\Omega}(0)=\sum_{j}\Omega_{0j}^{\nu\nu}\), \(\widetilde{\Gamma}(0)=\sum_{j}\Gamma_{0j}^{\nu\nu}\) describe the dipole-induced collective frequency shift and decay rate arising from the \(\nu\) transition dipoles (for an arbitrary index 0 on the array). The complex transmission and reflection amplitudes are connected as \(t_{m}=1+r_{m}\) while \(|t_{m}|^{2}+|r_{m}|^{2}=1\). Most notably, if the laser frequency matches the collective metasurface resonance \(\omega_{l}=\omega_{\nu}+\widetilde{\Omega}(0)\), perfect reflection of incoming light is obtained as \(|r_{m}|^{2}=1\). In the following, for the sake of clarity, we proceed with the simplified two-level description. Finally, to obtain a HP mirror, we consider now two copies of quantum metasurfaces separated by a distance \(\ell_{m}\), one with dipoles pointing in \(x\)-direction and one with dipoles pointing in \(y\)-direction with a path length difference of \(k_{l}\ell_{m}=\phi_{m}=2\pi n_{m}+\pi/2\) (\(n_{m}\in\mathbb{N}_{0}\)) between the two polarizations. The combination of these two mirrors is a helicity-perserving mirror. The path length difference rotates the \(y\)-polarization components by \(\pi\), thereby reversing the mirror operation which does not conserve helicity for an ordinary mirror. A full transfer matrix calculation showing this can be found in App. E. **HP cavity** - A HP optical cavity can now be simply implemented by two HP metasurface mirrors separated by a distance \(\ell\) (see Fig. 1(a)). The two layers making up the mirror consist of dipoles with perpendicular dipole orientations, leading to vanishing interactions between the two cavities in the far field. For simplicity, we thus continue the discussion for a single cavity while keeping in mind that the actual setup consists of two noninteracting copies. A full discussion for both polarization components can be found in App. E. Solving the coupled-dipole equations and neglecting the contributions from all evanescent terms, a simple expression for the total transmitted field can be obtained as \(E^{(+)}(z>\ell)=t_{c}E_{\mathrm{in}}^{(+)}\mathrm{e}^{\mathrm{i}k_{l}z}\) with the cavity transmission coefficient (assuming \(k_{0}\approx k_{l}\), for derivation see App. C) \[t_{c}=\frac{\left(\Delta-\widetilde{\Omega}(0)\right)^{2}}{\left(\Delta- \widetilde{\Omega}(0)+\mathrm{i}\frac{\widetilde{\Gamma}(0)}{2}\right)^{2}+ \frac{\widetilde{\Gamma}(0)^{2}}{4}\mathrm{e}^{\mathrm{i}k_{l}\ell}}. \tag{6}\] We remark that instead of solving the coupled-dipole equations for the two arrays, the same result can be obtained from classical transfer matrix theory [30; 38; 44] where the transfer matrix of a single metasurface can be expressed in terms of the mirror polarizability \(\zeta_{m}=-\mathrm{i}r_{m}/t_{m}=\widetilde{\Gamma}(0)/[2(\widetilde{\Omega}(0 )-\Delta)]\) as \[\mathbf{T}_{m}=\begin{pmatrix}1+\mathrm{i}\zeta_{m}&\mathrm{i}\zeta_{m}\\ -\mathrm{i}\zeta_{m}&1-\mathrm{i}\zeta_{m}\end{pmatrix}. \tag{7}\] The total transfer matrix is then simply obtained as \(\mathbf{T}=\mathbf{T}_{m}\mathbf{T}_{f}\mathbf{T}_{m}\) with the free space propagation matrix \(\mathbf{T}_{f}=\mathrm{diag}(\mathrm{e}^{\mathrm{i}k_{l}\ell},\mathrm{e}^{- \mathrm{i}k_{l}\ell})\). 
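A minimal numerical sketch of the transfer-matrix calculation just described (our illustration, with \(\widetilde{\Gamma}(0)\) set to one and \(\widetilde{\Omega}(0)\approx 0\), roughly appropriate for \(a=0.8\lambda\)): building \(\mathbf{T}=\mathbf{T}_{m}\mathbf{T}_{f}\mathbf{T}_{m}\) and scanning the detuning reproduces a narrow transmission resonance whose position agrees with the resonance condition derived below.

```python
import numpy as np

GAMMA = 1.0          # collective linewidth Gamma~(0), used as the unit of rates
OMEGA = 0.0          # collective shift Omega~(0), approximately zero at a = 0.8*lambda
K = 2.0 * np.pi      # laser wavenumber k_l in units of 1/lambda

def transfer_matrix(delta, length):
    """Mirror-gap-mirror transfer matrix T = T_m T_f T_m, cf. Eq. (7)."""
    zeta = GAMMA / (2.0 * (OMEGA - delta))              # mirror polarizability zeta_m
    Tm = np.array([[1.0 + 1j * zeta, 1j * zeta],
                   [-1j * zeta, 1.0 - 1j * zeta]])
    Tf = np.diag([np.exp(1j * K * length), np.exp(-1j * K * length)])
    return Tm @ Tf @ Tm

def cavity_transmission(delta, length):
    """Intensity transmission |t_c|^2; since det T = 1, the amplitude is t_c = 1/T[1, 1]."""
    return abs(1.0 / transfer_matrix(delta, length)[1, 1]) ** 2

length = 12.505                                          # cavity length in units of lambda
deltas = np.linspace(-0.2, 0.2, 20001) * GAMMA
spectrum = np.array([cavity_transmission(d, length) for d in deltas])
# Numerically found resonance vs. -(Gamma/2) tan(k_l * length); the peak transmission is ~1.
print(deltas[np.argmax(spectrum)], -0.5 * GAMMA * np.tan(K * length), spectrum.max())
```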
The condition that the transmission ought to equal unity at the cavity resonance \(|t_{c}|^{2}=1\), yields the following expression for the cavity resonance \[\Delta-\widetilde{\Omega}(0)=-\frac{\widetilde{\Gamma}(0)}{2}\tan(k_{l}\ell). \tag{8}\] To demonstrate that the resulting cavity consisting of two HP mirrors indeed conserves the helicity, we compute the Riemann-Silberstein (RS) vectors [45] \[\mathbf{G}_{\pm}(\mathbf{R})=\frac{1}{\sqrt{2}}\left(\mathbf{E}(\mathbf{R}) \pm\mathrm{i}\mathcal{Z}\mathbf{H}(\mathbf{R})\right), \tag{9}\] which describe the combined electromagnetic field of chiral polarization and \(\mathcal{Z}=(\epsilon_{0}c)^{-1}\) is the vacuum impedance. The absolute value of these quantities is plotted in Figs. 1(b), (c), for RCP light entering the cavity, confirming that the cavity preserves the helicity while also showing a strong field enhancement. The difference in absolute value between the RS vectors can be seen as a measure for the chirality density inside the cavity. The cavity itself is however not chiral as _any_ elliptical input polarization is supported. The magnetic field \(\mathbf{H}(\mathbf{R})\) is determined via Maxwell's equations from the excitations of the electric dipoles on the metasurface as detailed in App. A. The transmission profile around the cavity resonance is illustrated in Fig. 1(d) for different cavity lengths for a lattice spacing of \(a=0.8\lambda\) where the collective dipole shift is close to zero, i.e., \(\widetilde{\Omega}(0)\approx 0\). If the cavity length \(\ell\) exactly matches \(n\lambda/2\) (\(n\in\mathbb{N}\)), the cavity resonance coincides with the resonance of the individual arrays and no transmission is obtained as all the light is reflected. If \(\ell\) becomes slightly larger, a narrow transmission window opens up as the mirrors and the cavity now possess different resonance frequencies. Further increasing the cavity length leads to a strongly asymmetric Fano-type profile with a larger linewidth and the cavity resonance drifting towards infinity for \(\ell\rightarrow(n+\frac{1}{2})\lambda/2\). Once the next multiple of \(\lambda/2\) is approached, the cavity linewidth becomes narrow again and the cavity resonance shifts towards \(\omega_{\nu}+\widetilde{\Omega}(0)\) as \(\tan(k_{l}\ell)\to 0\). The distance between the zero and the maximum of the transmission can be used as a measure for the cavity linewidth \(\kappa=\widetilde{\Gamma}(0)|\tan(k_{l}\ell)|\). We present a coupled-modes theory for the input-output description of cavities made from quantum metasurface mirrors in App. G. **Chiral sensing** - We now consider the scenario depicted in Fig. 1(e) where chiral scatterers with radiative linewidth \(\gamma_{s}\) are sent through the cavity. We assume the resonance of the scatterer \(\omega_{s}\) to be far-detuned from the cavity resonance \(|\Delta_{s}|\gg|\Delta_{c}|,\gamma_{s}\) with \(\Delta_{s/c}=\omega_{l}-\omega_{s/c}\). In this case, the effect of a scatterer with the same helicity as the cavity is to increase the path length of light passing through the cavity and thereby effectively shift the cavity length by a small amount \(\delta\ell_{s}=-\arctan(\gamma_{s}/2\Delta_{s})/k_{l}\) (for derivation see App. F), such that the total cavity length is now given by \(\ell+\delta\ell_{s}\). On the other hand, a scatterer with the opposite helicity as the cavity mode does not cause a shift (assuming ideal chiral scatterers, in reality both helicities will lead to differential shifts). 
Due to the quick phase switch around the cavity resonance for cavity lengths close to \(n\lambda/2\), a small perturbation of the cavity length can lead to a considerable phase shift of the cavity transmission (see Fig. 2(a)). This is the central idea of the sensing scheme discussed in the following. One can then proceed to compute the relative phase change in the cavity transmission between lengths \(\ell\) and \(\ell+\delta\ell_{s}\) on the cavity resonance (assuming \(\ell\approx n\lambda/2\)) \[\varphi=\arg\frac{t_{c}(\ell+\delta\ell_{s})}{t_{c}(\ell)}\Big{|}_{\text{res },\ell\approx\text{n}\frac{\lambda}{2}}\approx\arctan\biggl{(}\frac{1}{\tan(k _{l}\delta\ell_{s})}\biggr{)}, \tag{10}\] which reaches a value of \(\pi[\theta(k_{l}\delta\ell_{s})-1/2]\) as \(\delta\ell_{s}\to 0\), implying a phase jump from \(-\pi/2\) to \(\pi/2\) around the cavity resonance for lengths close to \(n\lambda/2\) (\(\theta(x)\) is the Heaviside function). If the cavity length departs from \(n\lambda/2\), the cavity linewidth increases and the phase switch gets diminished, as illustrated in Figs. 1(d) and 2(a). An experimental setup to measure this phase is homodyne detection as illustrated in Fig. 1(e) where the phase between a local oscillator (for instance obtained from beam splitting the input field) is compared to the phase of the output field. Suppose we consider a signal beam with which we drive the cavity \(\alpha(t)=\sqrt{F}\exp(-\mathrm{i}\omega_{0}t+\mathrm{i}\theta)\) and the local oscillator field \(\alpha_{L}(t)=\sqrt{F_{\text{LO}}}\exp(-\mathrm{i}\omega_{0}t+\mathrm{i} \theta_{\text{LO}})\) with intensities (number of photons per unit of time) \(F\) and \(F_{\text{LO}}\). Then considering homodyne detection for a Fabry-Perot cavity leads to an uncertainty in the measured phase for an integration time \(T\) of the measurement and a quantum efficiency \(\eta_{Q}\) which is encoded in the intensity difference \(m_{-}\) with variance \((\Delta m_{-})_{\text{res}}^{2}=\eta_{Q}TF_{\text{LO}}\) and expectation value (assuming the phase variation to happen on a timescale much slower than the optical frequency) \[\langle m_{-}(t,T)\rangle=2\eta_{Q}\sqrt{FF_{\text{LO}}}|t_{c}|\int_{t}^{t+T} \mathrm{d}t^{\prime}\sin\left(\varphi(t^{\prime})\right). \tag{11}\] Here, we have approximated that \(F_{\text{LO}}\gg F\) and have taken \(\theta+\theta_{\text{LO}}=2\pi n\) which can be obtained by phase matching the local oscillator and the signal beam. Since the cavity is a linear element, the resulting phase uncertainty (see sketch in Fig. 2(b)) is independent of any cavity properties and only depends on the properties of the state of the incoming beam, which is assumed to be classical for this calculation. This uncertainty could however be improved upon by choosing a phase-squeezed input field instead of a coherent one. As an alternative to describing the passage of particles through the cavity with transfer matrix theory, the dipole theory can be extended to include the presence of an additional chiral scatterer which can be represented by Figure 2: _Chiral sensing._ (a) Cavity phase \(\varphi_{c}=\arg(t_{c})\) versus small perturbations of the cavity length \(\delta\ell_{s}\) for different cavity lengths \(\ell\) (square lattice, \(a=0.8\lambda\)). For each \(\ell\), the resonance condition described by Eq. (8) is fulfilled. (b) Illustration of phase uncertainty in phase space for a coherent state. 
From a geometric point of view, it becomes clear that the phase uncertainty becomes smaller when the amplitude of the field is larger. (c) Homodyne detection signal \(\langle m_{-}(t)\rangle\) for (ideal) right- and left-handed chiral scatterers (RHS/LHS) passing through an RCP cavity with corresponding shot noise shown in light blue and red, respectively. The dashed black line indicates the entry of a particle into the cavity and the dotted red line indicates its exit. In addition, a rotational average was performed as detailed in App. H. We have used \(\Delta_{s}=10\Gamma_{0}\), \(\gamma_{s}=\Gamma_{0}\) with an average of one photon within the time interval \(\Gamma_{0}^{-1}\), i.e., \(F=\Gamma_{0}\), and an integration time of \(T\Gamma_{0}=2000\). The linewidth for the metasurfaces was assumed to be the same as for the infinite square lattice at \(a=0.8\lambda\). The detuning of the incoming laser was chosen to be \(\Delta=\Gamma_{0}/100\). coupled electric and magnetic dipoles (see App. H). These equations of motion are simulated in Fig. 2(c) for right- and left-handed scatterers (RHS/LHS) entering an RCP cavity, showing a clear distinction in the resulting signal. Here, the classical shot noise of a coherent input field is used to estimate the phase error for a homodyne detection. One can also see several aspects of the cavity physics in this plot. First, the relaxation to the steady state occurs very slowly owing to the very small decay rate of the hybrid cavity. We can also observe that with sufficient detector integration the shot noise can be overcome in order to detect a single scatterer. Let us finally briefly discuss the applicability of the chiral sensing scheme to the discrimination of molecular enantiomers. Chiral molecules are in general not perfect chiral scatterers. This is manifested in the fact that their circular dichroism CD = \((A_{+}-A_{-})/(A_{+}+A_{-})\), i.e., the difference in absorbance between RCP and LCP light, is not unity but some small finite value. This can have several reasons, but the most physical one is that the magnetic dipole linewidth of an optical transition is usually much weaker than the electric dipole linewidth. This implies that the assumption of coupling to only a single polarization component is unrealistic. Instead, both enantiomers will lead to a small differential shift in optical path length. Aside from the absolute magnitude of this change in path length however, the presented strategy retains generality and the applicabilty is a question of detailed system parameters. More so, we claim that we have mapped the problem of chirality sensing of enantiomers onto a controllable cavity optomechanical setup. **Conclusions and Outlook** - We have shown that HP mirrors and cavities can be created from stacked quantum metasurfaces with orthogonal dipole orientation. We remark that our proposal could be analogously implemented with arrays of classical dipoles such as plasmonic lattices in which case no external magnetic field control would be needed as the polarization can be controlled by the geometry of the individual plasmonic elements [46]. We then proposed to use these narrow-linewidth HP cavity modes for the optical sensing of chiral scatterers by discussing how the phase of the output field is modified by an off-resonant scatterer passing through the cavity. We furthermore discussed the phase uncertainty for homodyne detection which can be minimized by tuning the input intensity and the integration time of the detector. 
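The numbers quoted for Fig. 2(c) can be reproduced at the back-of-the-envelope level with the expressions above; the sketch below (ours, with a local-oscillator flux chosen arbitrarily so that \(F_{\text{LO}}\gg F\)) evaluates the length shift \(\delta\ell_{s}\), the resulting transmission phase of Eq. (10) and the homodyne signal-to-shot-noise ratio implied by Eq. (11) for a phase that is constant over the integration time.

```python
import numpy as np

K = 2.0 * np.pi                  # k_l in units of 1/lambda
GAMMA0 = 1.0                     # single-emitter linewidth, unit of rates

def length_shift(gamma_s, delta_s):
    """Effective cavity-length change caused by an ideal co-handed scatterer."""
    return -np.arctan(gamma_s / (2.0 * delta_s)) / K

def transmission_phase(dl):
    """Phase of t_c on resonance for l ~ n*lambda/2, Eq. (10)."""
    return np.arctan(1.0 / np.tan(K * dl))

def homodyne(phi, F, F_LO, T, eta_Q=1.0, t_c_abs=1.0):
    """Mean homodyne signal and shot-noise standard deviation for a constant phase phi."""
    signal = 2.0 * eta_Q * np.sqrt(F * F_LO) * t_c_abs * T * np.sin(phi)
    noise = np.sqrt(eta_Q * T * F_LO)
    return signal, noise

dl = length_shift(gamma_s=GAMMA0, delta_s=10.0 * GAMMA0)     # parameters of Fig. 2(c)
phi = transmission_phase(dl)
sig, sigma = homodyne(phi, F=GAMMA0, F_LO=100.0 * GAMMA0, T=2000.0 / GAMMA0)
print(dl, phi, sig / sigma)      # length shift, phase close to -pi/2, signal-to-shot-noise
```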
We also briefly discussed the applicability to the discrimination of molecular enantiomers. Future work will extend our formalism to additionally describe possible detrimental effects such as motion, vacancies, and nonlinearities of the quantum emitter array. We remark that, in addition to the use proposed in this work, layered metasurfaces can enable a host of other applications. For instance, stacking many of these layers leads to Bragg-mirror physics, which could be used to tailor the frequency windows of optical elements based on quantum metasurfaces. Also, tilting the metasurfaces with respect to each other gives rise to moiré superlattices, which are known to exhibit exotic optoelectronic phenomena in solid-state platforms [47]. Preliminary calculations show, however, that for normally incident light the twisting angle between the layers has no effect. Even more general polarization structures of the cavity mode, such as Faraday cavities [14], might also be implementable. **Acknowledgments** - We acknowledge fruitful discussions with L. Mauro and J. Fregoni which led to the initial idea for this project. This work was supported by the Max Planck Society and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 429529648 - TRR 306 QuCoLiMa ("Quantum Cooperativity of Light and Matter").
2302.02850
Landau theory for ferro-paramagnetic phase transition in finitely-strained viscoelastic magnets
The thermodynamic model of visco-elastic deformable magnetic materials at finite strains is formulated in a fully Eulerian way in rates. The Landau theory applies for ferro-to-para-magnetic phase transition, the gradient theory (leading exchange energy) for magnetization with general mechanically dependent coefficient, hysteresis in magnetization evolution by Landau-Lifshitz-Gilbert equation involving objective corotational time derivative of magnetization, and demagnetizing field are considered in the model. The Kelvin-Voigt viscoelastic rheology with a higher-order viscosity (exploiting the concept of multipolar materials) is used, allowing for physically relevant frame-indifferent stored energies and for local invertibility of deformation. The model complies with energy conservation and Clausius-Duhem entropy inequality. Existence and a certain regularity of weak solutions is proved by a Faedo-Galerkin semi-discretization and a suitable regularization.
Tomáš Roubíček
2023-02-06T15:18:43Z
http://arxiv.org/abs/2302.02850v1
# Landau theory for ferro-paramagnetic phase transition in finitely-strained viscoelastic magnets

###### Abstract

The thermodynamic model of visco-elastic deformable magnetic materials at finite strains is formulated in a fully Eulerian way in rates. The Landau theory applies for ferro-to-paramagnetic phase transition, the gradient theory (leading exchange energy) for magnetization with general mechanically dependent coefficient, hysteresis in magnetization evolution by Landau-Lifshitz-Gilbert equation involving objective corotational time derivative of magnetization, and demagnetizing field are considered in the model. The Kelvin-Voigt viscoelastic rheology with a higher-order viscosity (exploiting the concept of multipolar materials) is used, allowing for physically relevant frame-indifferent stored energies and for local invertibility of deformation. The model complies with energy conservation and Clausius-Duhem entropy inequality. Existence and a certain regularity of weak solutions is proved by a Faedo-Galerkin semi-discretization and a suitable regularization.

_Keywords_: Elastodynamics, ferromagnetic, phase transition, micromagnetics, magnetostriction, Kelvin-Voigt viscoelasticity, thermal coupling, large strains, multipolar continua, semi-Galerkin discretization, weak solutions. _AMS Subject Classification:_ 35Q74, 35Q79, 65M60, 74A30, 74F15, 74N30, 80A20.

## 1 Introduction - deforming magnetic continua

Magnetic materials which are not completely rigid represent an interesting, important, and difficult multi-physical concatenation of mere (thermo)continuum mechanics and mere micromagnetism. Beside homogeneous visco-elastic magnets, this may concern elastically rather soft materials filled with magnetic particles, e.g. rocks (which can be considered soft on long time scales) and polymers (i.e. so-called magneto-rheological elastomers or ferrogels); the latter would, however, need to involve creep, which is not considered in this paper in order not to make the model too complicated. We will focus on general finite (also called large) strain mechanics in the Eulerian formulation. This magneto-mechanical subject has been addressed in [14, Ch.6], in an anisothermal setting but without explicitly articulated equations in [10], and also, in a thermodynamic context, in [35, Ch.6]. Even in the purely mechanical isothermal cases, and a-fortiori in anisothermal situations, visco-elastodynamics at finite strains has been articulated in [2, 3] as a difficult open problem as far as existence of weak solutions is concerned. There is a certain agreement that, for analytical reasons, a sufficiently strong dissipation mechanism is to be involved to make the dynamical problem parabolic, although some hyperbolic models exist, as mentioned below. The simplest variant is the _Kelvin-Voigt viscoelastic rheology_. The mentioned _Eulerian approach_ is standardly believed to be well fitted to fluids. It is particularly suitable in situations when there is no natural reference configuration or where a reference configuration becomes less and less relevant during long-time evolution, which may, however, apply also to solids. A formulation of the equations in the current deforming configuration needs velocity/strain rather than displacement to be involved in the momentum equation. The advantage is an easier possibility to involve interaction with outer spatial fields (here magnetic and gravity) and avoiding the pull-back and push-forward manipulation.
On the other hand, there is a necessity to involve convective derivative and transport equations and also evolving the shape of the body is troublesome. In isothermal situations, such model was formulated and analyzed as incompressible in [30, 31] and as compressible in [24, 43]. The mentioned higher gradients that would allow for reasonable analysis can now be involved rather in the dissipative than conservative part, so that their influence manifests only in fast evolutions. In the isothermal situations it was used in quasistatic case in [47] and in dynamical case in [50] when considering the stored energy in the actual configuration, which then gives an energy pressure in the stress tensor. In anisothermal situations, such free-energy pressure would be directly added into stress tensor in an non-integrable way and likely would cause technical difficulties. The main attributes of the devised model are: * Concept of _hyperelastic materials_ (whose conservative-stress response comes from a free energy) combined with the _Kelvin-Voigt viscoelastic rheology_ and also evolution of magnetization is driven by this free energy. * Inertial effects in fully compressible context (in particular with varying mass density) are considered. * The rate formulation in terms of velocity and deformation gradient is used while the deformation itself does not explicitly occur. * Magnetic phenomena covered by the model includes: ferro-to-para magnetic phase transition, hysteresis due to the pinning effects, exchange energy depending on deformation gradient (and in particular on compression/expansion), and demagnetizing field. * Mechanical consistency in the sense that _frame indifference_ of the free energy (which is in particular _nonconvex_ in terms of deformation gradient and in magnetization) and its _singularity_ under infinite compression in relation with _local non-interpenetration_ as well as objective corotational time derivative for magnetization transport. Thermodynamic consistency of the thermally coupled system in the sense that the _total energy is conserved_ in a closed system, the _Clausius-Duhem entropy inequality_ holds, and temperature stays non-negative. The nonconservative part of the stress in the Kelvin-Voigt model containing a higher-order component reflecting the concept of nonsimple _multipolar media_ is exploited. The model allows for rigorous mathematical analysis as far as existence and certain regularity of energy-conserving weak solutions concerns. On the other hand, some simplifications are adopted: Relatively slow evolution is implicitly assumed, which allows for reducing the full Maxwell electromagnetodynamics to magneto-statics. Electric conductivity (and in particular eddy currents) is not considered. As far as the non-negativity of temperature, below we will be able to prove only that at least some solutions enjoy this attribute, although there is an intuitive belief that all possible solutions will make it and a hope that more advanced analytical techniques would rigorously prove it. The main notation used in this paper is summarized in the following table: In comparison with [47, 50], the novelty of this paper is to apply the Eulerian approach to solids in anisothermal situations, using the free energy in a reference configuration, which does not see the energy-pressure in the stress tensor and which is also more fitted with usually available experimental data. 
The analysis combines \(L^{1}\)-theory for the heat equation adapted to the convective time derivatives and the techniques from compressible fluid dynamics adapted for solids. For completeness, let us still mention a competitive, Lagrangian thermodynamic formulation (including also diffusion) [51], formulating the equations in a certain fixed "reference" configuration. This approach allows easily for deformation of the shape of the body and an easier treatment of inertial forces, but a frame-indifferent viscosity and the interaction with spatial gravity and magnetic forces are much more complicated. The plan is as follows: the formulation of the model in the actual Eulerian configuration and its energetics and thermodynamics is presented in Section 2, recalling first the micromagnetism and Landau transition in rigid magnets in Sect. 2.1 and the finite-strain kinematics of deformable continua in Sect. 2.2 before formulating the model in Section 2.3 and showing its energetics in Section 2.4. Then, in Section 3, the rigorous analysis by a suitable regularization and a (semi) Faedo-Galerkin approximation is performed, combined with the theory of transport by regular velocity fields.

## 2 The thermodynamic model and its energetics

It is important to distinguish carefully the referential and the actual time-evolving coordinates. Our aim is to formulate the model eventually in actual configurations, i.e. the Eulerian formulation, reflecting also the reality in many (or even most) situations (and a certain general agreement) that a reference configuration is only an artificial construction and, even if relevant in some situations, becomes successively more and more irrelevant during evolution at truly finite strains. Typical materials involve magnetic gels or elastomers or magnetic rocks, which are viscoelastic on geological timescales. On the other hand, some experimental material data are related to some reference configuration - typically this concerns mass density and stored or free energies per mass (in J/kg) or per referential volume (in J/m\({}^{3}\)=Pa) as considered here. We will briefly present the fundamental concepts and formulas, which can mostly be found in the monographs, e.g. [23, Part XI] or [32, Sect. 7.2].

### Micromagnetism and ferro-paramagnetic transition

Let us briefly recall the micromagnetic model in rigid magnets and Landau's phase-transition theory [27], cf. also [29, Sec.39] or the monographs [15, 6]. The basic ingredient governing the static (and later also the evolution) model is the free energy \(\psi=\psi(\boldsymbol{m},\theta)\) depending on magnetization \(\boldsymbol{m}\) and temperature \(\theta\). In micromagnetism, the free energy \(\psi\) is augmented by the exchange energy, with \(\kappa\) a coefficient determining an internal length-scale responsible for the typical fine domain structure in ferromagnets. The magnetization itself induces a magnetic field, called a self-induced _demagnetizing field_ \(\mathbf{h}_{\rm dem}\). For many (or maybe most) applications, the full Maxwell electro-magnetic system is simplified to _magnetostatics_, considering slow evolution and neglecting in particular eddy currents, and even confining to electrically nonconductive media. The Maxwell system then reduces to the Ampere law \(\mathrm{curl}\,\mathbf{h}_{\rm dem}=\mathbf{0}\) and the Gauss law \(\mathrm{div}\,\mathbf{b}=0\) for the magnetic induction, which is given by \(\mathbf{b}=\mu_{0}\mathbf{h}_{\rm dem}+\mu_{0}\mathbf{m}\), where \(\mu_{0}\) is the physical constant (vacuum permeability).
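Before passing to the scalar potential introduced next, these two laws can already be illustrated numerically: in Fourier space the curl-free condition makes \(\hat{\mathbf{h}}_{\rm dem}\) parallel to the wave vector \(\mathbf{k}\), and \(\mathrm{div}\,\mathbf{b}=0\) then fixes \(\hat{\mathbf{h}}_{\rm dem}=-\mathbf{k}\,(\mathbf{k}\cdot\hat{\mathbf{m}})/|\mathbf{k}|^{2}\). The following sketch is only a rough approximation (the whole space is replaced by a large periodic box, and the two-dimensional square body and grid are illustrative):

```python
import numpy as np

# Demagnetizing field from curl h_dem = 0 and div(h_dem + m) = 0, solved by FFT
# on a large periodic box; the square "body", grid and box size are illustrative.
N, L = 256, 10.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
chi = ((np.abs(X) < 1.0) & (np.abs(Y) < 1.0)).astype(float)   # body Omega
mx, my = 0.0 * chi, 1.0 * chi        # uniform magnetization along y inside Omega

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                       # zero mode carries no demagnetizing field

mk = KX * np.fft.fft2(mx) + KY * np.fft.fft2(my)      # k . m_hat
hx = np.real(np.fft.ifft2(-KX * mk / K2))             # h_hat = -k (k.m_hat)/|k|^2
hy = np.real(np.fft.ifft2(-KY * mk / K2))
print("h_dem at the centre of Omega:", hx[N // 2, N // 2], hy[N // 2, N // 2])
# Expected roughly (0, -1/2): the field opposes m inside the magnetized square.
```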
The Ampere law ensures existence of a scalar-valued potential \(u\) such that \(\mathbf{h}_{\rm dem}=-\nabla u\). These equations are considered on the whole Universe \(\mathbb{R}^{d}\) while, of course, the magnetization \(\mathbf{m}\) is only in the body \(\Omega\) while outside it is considered zero, which is articulated by introducing the characteristic function \(\chi_{\Omega}\) defined as \(\chi_{\Omega}(\mathbf{x})=1\) if \(\mathbf{x}\in\Omega\) and \(\chi_{\Omega}(\mathbf{x})=0\) if \(\mathbf{x}\in\mathbb{R}^{d}\backslash\Omega\). By substitution, we obtain the equation \[\mathrm{div}(\nabla u-\chi_{\Omega}\mathbf{m})=0\quad\text{ in }\ \mathbb{R}^{d} \tag{2.1}\] to be considered in the sense of distributions. Under an external magnetic field \(\mathbf{h}_{\rm ext}\), the overall effective magnetic field \(\mathbf{h}\) is \[\mathbf{h}=\mathbf{h}_{\rm ext}-\mathbf{h}_{\rm dem}=\mathbf{h}_{\rm ext}+\nabla u\,. \tag{2.2}\] Although not directly relevant in this paper, let us anyhow remind that, for a fixed temperature \(\theta\), the standard ferro-magnetostatic theory is based on the free energy \(\mathbf{\psi}(\mathbf{m},\theta,\nabla\mathbf{m})=\widetilde{\mathbf{\psi}}(\mathbf{m},\theta)+ \frac{\kappa}{2}|\nabla\mathbf{m}|^{2}\) leading to the overall energy \[(\mathbf{m},u)\mapsto\int_{\Omega}\ \underbrace{\widetilde{\mathbf{\psi}}(\mathbf{m}, \theta)}_{\text{free}\atop\text{energy}}\ +\ \underbrace{\frac{\kappa}{2}|\nabla\mathbf{m}|^{2}}_{\text{exchange}\atop\text{ energy}}\ -\ \underbrace{\mu_{0}\,(\mathbf{h}_{\rm ext}+\nabla u)\cdot\mathbf{m}}_{\text{energy of }\mathbf{m}\text{ in the}\atop\text{magnetic field }\mathbf{h}}\ \mathrm{d}\mathbf{x}-\int_{\mathbb{R}^{d}}\underbrace{\frac{\mu_{0}}{2}|\nabla u| ^{2}\ \mathrm{d}\mathbf{x}}_{\text{energy of demag}\atop\text{netizing field}}\,. \tag{2.3}\] Notably, this functional is concave with respect to \(u\) and has a saddle-point character. The static configurations \((\mathbf{m},u)\) are standardly considered as minimizing with respect to \(\mathbf{m}\) and maximizing with respect to \(u\), i.e. a critical point or (2.3). The 1st-order optimality conditions then gives the system \[\underbrace{\widetilde{\mathbf{\psi}}^{\prime}_{\mathbf{m}}(\mathbf{m},\theta)-\mathrm{ div}(\kappa\nabla\mathbf{m})}_{=\mathbf{t}\text{ magnetic `driving force''}}=\mu_{0}\mathbf{h}\quad\text{ and }\quad(\ref{eq:1})\,. \tag{2.4}\] The mentioned saddle-point character can be eliminated by executing maximization with respect to \(u\), i.e. in fact the partial Legendre transform. This gives, when testing (2.1) by \(\mu_{0}u\), which gives \(\int_{\mathbb{R}^{d}}\mu_{0}|\nabla u|^{2}\,\mathrm{d}\mathbf{x}=\int_{\Omega} \mu_{0}\mathbf{m}\cdot\nabla u\,\mathrm{d}\mathbf{x}=-\int_{\Omega}\mu_{0}\mathbf{m} \cdot\mathbf{h}_{\rm dem}\,\mathrm{d}\mathbf{x}\). Substituting it into (2.3), the functional depending on \(\mathbf{m}\) which should be minimized by static configurations is: \[\mathbf{m}\mapsto\int_{\Omega}\ \underbrace{\widetilde{\mathbf{\psi}}(\mathbf{m},\theta)}_{ \text{free}\atop\text{energy}}\ +\ \underbrace{\frac{\kappa}{2}|\nabla\mathbf{m}|^{2}}_{\text{exchange}\atop\text{ energy}}\ -\ \underbrace{\mu_{0}\,\mathbf{h}_{\rm ext}\cdot\mathbf{m}}_{\text{Zeeman}\atop\text{ energy}}\ \mathrm{d}\mathbf{x}+\int_{\mathbb{R}^{d}}\underbrace{\frac{\mu_{0}}{2}|\nabla u_{ \mathbf{m}}|^{2}}_{\text{energy of demag}\atop\text{netizing field}}\,\mathrm{d}\mathbf{x}\,. 
\tag{2.5}\] In the rest of this paper, we will couple it with mechanical effects and a full thermodynamics, so that the minimization of energy will no longer be relevant. In case of time-varying \(\boldsymbol{h}_{\rm ext}\), a dynamics of \(\boldsymbol{m}\) governed by the _Landau-Lifschitz-Gilbert equation_\(\gamma^{-1}\frac{\partial}{\partial t}\boldsymbol{m}=\boldsymbol{m}\times \boldsymbol{h}_{\rm eff}\) with \(\boldsymbol{h}_{\rm eff}=\boldsymbol{h}-\boldsymbol{t}-\boldsymbol{t}_{\rm dis}\) an effective field composed from a conservative part \(\boldsymbol{t}\) arising from a free energy (2.3), cf. (2.4) or also (2.18c) below, while \(\boldsymbol{t}_{\rm dis}\) is a magnetic field counting a dissipative-processes phenomenology, and \(\boldsymbol{h}\) is from (2.2). Equivalently [7], one can write it in the Gilbert form \(\gamma^{-1}\boldsymbol{m}\times\frac{\partial}{\partial t}\boldsymbol{m}= \boldsymbol{h}_{\rm eff}\). The basic choice of \(\boldsymbol{t}_{\rm dis}\) is the magnetic "viscosity" \(\tau\frac{\partial}{\partial t}\boldsymbol{m}\) with \(\tau\) a phenomenological magnetic damping coefficient. To cover the (temperature dependent) _hysteresis_ effects due to so-called pinning mechanism, we augment it by the dry-friction term \(h_{\rm C}(\theta){\rm Dir}(\frac{\partial}{\partial t}\boldsymbol{m})\) where "\({\rm Dir}\)" denotes the set-valued monotone "direction" mapping \[{\rm Dir}(\boldsymbol{r})=\begin{cases}\{r\in\mathbb{R}^{d};\ |r|\leq 1\}& \text{if }\boldsymbol{r}=\boldsymbol{0}\,,\\ \boldsymbol{r}/|\boldsymbol{r}|&\text{if }\boldsymbol{r}\neq\boldsymbol{0}\,, \end{cases} \tag{2.6}\] cf. [52] and Remark 2.3 below. Note that \(\boldsymbol{r}\cdot{\rm Dir}(\boldsymbol{r})=|\boldsymbol{r}|\). Here, having in mind an isotropic situation, \(|\cdot|\) denotes the Euclidean norm, but in principle some other anisotropic norms on \(\mathbb{R}^{d}\) can be considered, too. Altogether, we consider the specific Gilbert equation as \[\tau\frac{\partial\boldsymbol{m}}{\partial t}+h_{\rm C}(\theta){\rm Dir}\Big{(} \frac{\partial\boldsymbol{m}}{\partial t}\Big{)}-\frac{\boldsymbol{m}}{\gamma (\theta)}\times\frac{\partial\boldsymbol{m}}{\partial t}=\mu_{0}\boldsymbol{h }-\boldsymbol{t}\,. \tag{2.7}\] The coercive force \(h_{\rm C}=h_{\rm C}(\theta)\) determines the width of hysteresis loops within slowly time-varying oscillatory external field \(\boldsymbol{h}_{\rm ext}\). The gyromagnetic term should disappear under high temperatures, i.e. \(1/\gamma(\cdot)\) going to \(0\) for temperatures around or above Curie temperature, as articulated in [34]. Let us note that (2.7) balances the terms in the physical units A/m, as standard. **Example 2.1** (Ferro-to-para-magnetic transition).: A simplest example of free energy in rigid isotropic magnetic materials is \[\widetilde{\Psi}(\boldsymbol{m},\theta)=a_{0}(\theta-\theta_{\rm C})| \boldsymbol{m}|^{2}+b_{0}|\boldsymbol{m}|^{4}+c_{0}\theta(1-{\rm ln}\theta)\,. \tag{2.8}\] In static magnetically soft ferromagnetism, the magnetization minimizes the energy. Here the minimum of \(\widetilde{\Psi}(\,\bullet,\theta)\) is attained on the orbit \(|\boldsymbol{m}|=m_{\rm s}(\theta)\) with \(m_{\rm s}(\theta)=\sqrt{a_{0}(\theta_{\rm C}-\theta)/(2b_{0})}\) if \(0\leq\theta\leq\theta_{\rm C}\) and at \(\boldsymbol{m}=0\) if \(\theta\geq\theta_{\rm C}\), cf. the solid line in Figure 1. 
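A short numerical check of this picture - with illustrative constants \(a_{0}\), \(b_{0}\), \(\theta_{\rm C}\) rather than material parameters, and dropping the \(\boldsymbol{m}\)-independent thermal term - recovers the zero-field branch \(m_{\rm s}(\theta)\) just described, as well as the slight increase of \(|\boldsymbol{m}|\) under an applied field discussed next:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Landau ansatz (2.8) with illustrative constants (not material parameters);
# the m-independent term c0*theta*(1 - ln theta) is dropped from the minimization.
a0, b0, theta_C = 1.0, 1.0, 1.0

def m_equilibrium(theta, h_ext=0.0):
    """Magnitude of m minimizing a0*(theta-theta_C)*m^2 + b0*m^4 - h_ext*m."""
    f = lambda m: a0 * (theta - theta_C) * m**2 + b0 * m**4 - h_ext * m
    return minimize_scalar(f, bounds=(0.0, 10.0), method="bounded").x

for theta in (0.25, 0.5, 0.75, 1.0, 1.25):
    ms_closed = np.sqrt(max(a0 * (theta_C - theta), 0.0) / (2.0 * b0))
    print(theta, ms_closed, m_equilibrium(theta), m_equilibrium(theta, h_ext=0.05))
# The zero-field column reproduces m_s(theta) = sqrt(a0*(theta_C - theta)/(2*b0))
# below theta_C and 0 above it; a small h_ext shifts |m| slightly upwards.
```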
Under an applied magnetic field \(\boldsymbol{h}_{\rm ext}\), the minimum of \(\boldsymbol{m}\mapsto\widetilde{\Psi}(\boldsymbol{m},\theta)-\boldsymbol{h}_{ \rm ext}\cdot\boldsymbol{m}\) is at some magnetization whose magnitude is slightly bigger than \(m_{\rm s}(\theta)\), cf. the dashed line in Figure 1. This ansatz can be used for a ferro-para-magnetic transition for a mechanically rigid magnets as formulated (and analyzed) in [42]. This may be quite equally interpreted as ferri-antiferro-magnetic transition, too, cf. [16]. ### Finite-strain kinematics and mass and momentum transport In finite-strain continuum mechanics, the basic geometrical concept is the time-evolving deformation \({\bf y}:\Omega\to\mathbb{R}^{d}\) as a mapping from a reference configuration of the body \(\Omega\subset\mathbb{R}^{d}\) into a physical space \(\mathbb{R}^{d}\). The "Lagrangian" space variable in the reference configuration will be denoted as \({\bf X}\in\Omega\) while in the "Eulerian" physical-space variable by \(\mathbf{x}\in\mathbb{R}^{d}\). The basic kinematic and geometrical objects are the Lagrangian velocity \({\bf v}=\frac{\partial}{\partial t}{\bf y}\) and the Lagrangian deformation gradient \({\bf F}=\nabla_{\bf X}{\bf y}\). We will be interested in deformations \(\mathbf{x}={\bf y}(t,{\bf X})\) evolving in time, which are sometimes called "motions". Further, assuming for a moment that \({\bf y}(t,\cdot)\) is invertible, we define the so-called _return_ (sometimes called also a _reference_) _mapping_\(\mathbf{\xi}:\mathbf{x}\mapsto{\bf y}^{-1}(t,{\bf X})\). The important quantities are the Eulerian velocity \(\mathbf{v}(t,\mathbf{x})={\bf v}(t,\mathbf{\xi}(t, \mathbf{x}))\) and the Eulerian deformation \(\mathbf{F}(t,\mathbf{x})={\bf F}(t,\mathbf{\xi}(t, \mathbf{x}))\). Here and thorough the whole article, having the Eulerian velocity at disposal, we use the dot-notation \((\cdot)^{\mbox{\tiny*}}=\frac{\partial}{\partial t}+\mathbf{v}\!\cdot \!\nabla_{\mathbf{x}}\) for the _convective time derivative_ applied to scalars or, component-wise, to vectors or tensors. Then the velocity gradient \(\nabla\mathbf{v}=\nabla_{\bf X}\mathbf{v}\nabla_{\bf x}{ \bf X}=\dot{\mathbf{F}}\mathbf{F}^{-1}\), where we used the chain-rule calculus and \(\mathbf{F}^{-1}=(\nabla_{\bf X}\mathbf{x})^{-1}=\nabla_{\bf x }{\bf X}\). This gives the _transport equation-and-evolution for the deformation gradient_ as \[\dot{\mathbf{F}}=(\nabla\mathbf{v})\mathbf{F}\,. \tag{2.9}\] From this, we also obtain the evolution-and-transport equation for Jacobian \(\det\mathbf{F}\) as \(\frac{\cdot}{\det\mathbf{F}}=(\det\mathbf{F}){\rm div}\, \mathbf{v}\) and its inverse as \[\frac{\cdot}{\left(\frac{1}{\det\mathbf{F}}\right)}\!=-\frac{{\rm div }\,\mathbf{v}}{\det\mathbf{F}}\,. \tag{2.10}\] The return mapping \(\xi\) satisfies the transport equation \[\dot{\mathbf{\xi}}={\bf 0}\,; \tag{2.11}\] Figure 1: Typical dependence of saturation magnetization \(m_{\mbox{\tiny S}}\) on absolute temperature under zero applied field \(h\) (solid line) and under some applied field (dashed line), cf. e.g. [6]. note that, since we confined on a spatially homogeneous material, actually \(\mathbf{\xi}\) does not explicitly occur in the formulation of the problem. As \(\mathbf{F}\) depends on \(\mathbf{x}\), (2.9)-(2.11) are equalities which hold for a.a. \(\mathbf{x}\). The same holds for (2.12)-(2.15) below. 
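For a spatially constant velocity gradient and spatially homogeneous \(\boldsymbol{F}\), the transport rules (2.9)-(2.10) reduce to ordinary differential equations in time; the following sketch integrates (2.9) for an arbitrary illustrative \(\nabla\boldsymbol{v}\) and checks the resulting Jacobian against the stated transport rule for \(\det\boldsymbol{F}\):

```python
import numpy as np

# Homogeneous-deformation check of (2.9)-(2.10): for a constant L = grad v,
# F(t) solves dF/dt = L F and det F(t) = exp(t tr L) det F(0).
# L below is an arbitrary illustrative matrix; F(0) is the identity.
L = np.array([[0.10, 0.30, 0.00],
              [0.00, -0.20, 0.10],
              [0.20, 0.00, 0.05]])
F = np.eye(3)
dt, nsteps = 1.0e-4, 20000           # integrate up to t = 2

for _ in range(nsteps):
    F = F + dt * (L @ F)             # explicit Euler for the ODE (2.9)

t = dt * nsteps
print("det F(t)            :", np.linalg.det(F))
print("exp(t tr L) det F(0):", np.exp(t * np.trace(L)))   # transport of det F
```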
Here we will benefit from the boundary condition \(\mathbf{v}\!\cdot\!\mathbf{n}=0\) below, which causes that the shape of the actual domain \(\Omega\) does not evolve in time, i.e. \(\Omega=\Omega\). The same convention concerns temperature \(\theta\) and thus also \(\mathbf{T}\), \(\eta\), and \(\nu_{1}\) in (2.18d) and (2.20) below, which will make the problem indeed fully Eulerian. Cf. the continuum-mechanics textbooks as e.g. [23, 32]. The mass density (in kg/m\({}^{3}\)) is an extensive variable, and its transport (expressing that the conservation of mass) writes as the _continuity equation_\(\frac{\partial}{\partial t}\varrho+\mathrm{div}(\varrho\mathbf{v})=0\), or, equivalently, the _mass evolution-and-transport equation_ \[\dot{\varrho}=-\varrho\,\mathrm{div}\,\mathbf{v}\,. \tag{2.12}\] Alternatively to (2.12), we will also use an evolution-and-transport equation for the "mass sparsity" as the inverse mass density \(1/\varrho\): \[\dot{\overline{1/\varrho}}=(1/\varrho)\,\mathrm{div}\,\mathbf{v}\,. \tag{2.13}\] The flow rule for the magnetization (2.7) is now to be considered in deforming medium, and then the partial time derivative in (2.7) should be replaced by an objective time derivative. Here we use the Zaremba-Jaumann (corotational) time derivative \(\dot{\mathbf{m}}\), defined as \[\dot{\mathbf{m}}=\dot{\mathbf{m}}-\mathrm{skw}(\nabla\mathbf{v})\mathbf{m}=\frac{\partial\bm {m}}{\partial t}+(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}-\mathrm{skw}(\nabla\mathbf{v})\mathbf{ m}\,, \tag{2.14}\] where \(\dot{\mathbf{m}}=\frac{\partial}{\partial t}\mathbf{m}+(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\) denotes the convective derivative of \(\mathbf{m}\). Moreover, in deforming continuum, we can (and should) consider a more general \(\gamma=\gamma(\mathbf{F},\mathbf{m},\theta)\) and \(h_{{}_{\mathrm{C}}}=h_{{}_{\mathrm{C}}}(\mathbf{F},\theta)\). Thus (2.7) turns into \[\tau\dot{\mathbf{m}}+h_{{}_{\mathrm{C}}}(\mathbf{F},\theta)\mathrm{Dir}(\dot{\mathbf{m}}) -\frac{\mathbf{m}\!\times\!\dot{\mathbf{m}}}{\gamma(\mathbf{F},\mathbf{m},\theta)}=\mu_{0}\bm {h}-\mathbf{t}\,. \tag{2.15}\] The convective derivative itself is not objective and would not be suitable in our context, except perhaps some laminar-like deformation as implicitly used in an incompressible isothermal variant in [5, 25, 53, 60] or in a nanoparticle transport in fluids [22]; for usage of \(\dot{\mathbf{m}}\) in (2.15) see Remark 2.4 below. ### Magneto-viscoelasticity and its thermodynamics The main ingredients of the model are the (volumetric) _free energy_\(\psi\) and the _dissipative stress_. The Helmholtz free energy \(\psi=\psi(\mathbf{F},\mathbf{m},\nabla\mathbf{m},\theta)\) is considered per the _referential volume_, while the free energy per actual deformed volume is \(\psi(\mathbf{F},\mathbf{m},\nabla\mathbf{m},\theta)/\mathrm{det}\,\mathbf{F}\). Considering the free energy per unit reference volume is more standard in continuum physics [23, 32] than the free energy per actual evolving volume and well corresponds to experimentally available data. Here also the anisotropy (which is typical in ferromagnets on microscopical scale) in the stored energy needs rather large strains with referential stored energy. This last benefit is related to the fact that the referential free energy does not give an energy pressure contribution to the Cauchy stress (cf. the last term in (2.33) below or [47, Rem. 
2]) and allows for an easier estimation strategy decoupling the magneto-mechanical part and the thermal part of the coupled system. We will select out the temperature-independent stored energy \(\varphi\) and consider the split: \[\psi(\boldsymbol{F},\boldsymbol{m},\nabla\boldsymbol{m},\theta)=\varphi(\boldsymbol{F},\boldsymbol{m})+\zeta(\boldsymbol{F},\boldsymbol{m},\theta)+\frac{\kappa(\boldsymbol{F})}{2}|\nabla\boldsymbol{m}|^{2}\quad\text{with}\quad\zeta(\boldsymbol{F},\boldsymbol{m},0)=0\,. \tag{2.16}\] The free energy considered per actual (not referential) volume, extended by the Zeeman energy arising from an applied external actual (not referential) magnetic field \(\boldsymbol{h}_{\text{ext}}\), i.e. the Gibbs-type _actual free energy_, is thus \[\psi_{\text{G}}(t;\boldsymbol{F},\boldsymbol{m},\nabla\boldsymbol{m},\theta)=\underbrace{\frac{\varphi(\boldsymbol{F},\boldsymbol{m})}{\det\boldsymbol{F}}}_{\text{actual stored}\atop\text{energy}}+\underbrace{\frac{\zeta(\boldsymbol{F},\boldsymbol{m},\theta)}{\det\boldsymbol{F}}}_{\text{actual thermal}\atop\text{part}}+\underbrace{\frac{\kappa(\boldsymbol{F})}{2\det\boldsymbol{F}}|\nabla\boldsymbol{m}|^{2}}_{\text{actual exchange}\atop\text{energy}}-\underbrace{\mu_{0}\,\boldsymbol{h}_{\text{ext}}\!\cdot\!\boldsymbol{m}}_{\text{Zeeman}\atop\text{energy}}\,. \tag{2.17}\]
The _momentum equilibrium_ equation then balances the divergence of the total Cauchy stress with the inertial and gravity force: \[\varrho\dot{\mathbf{v}}-\mbox{div}\big{(}\mathbf{T}+\mathbf{D}+\mathbf{T}_{\mbox{\scriptsize mag}}-\mbox{div}(\mathscr{H }+\mathscr{S})\big{)}=\varrho\mathbf{g}+\mathbf{f}_{\mbox{ \scriptsize mag}} \tag{2.21}\] with \(T\) from (2.18d) and \(D\) and \(\mathscr{H}\) from (2.20). Moreover, \(\mathbf{T}_{\mbox{\scriptsize mag}}\) and \(\mathbf{f}_{\mbox{\scriptsize mag}}\) are the magnetic stress and the magnetic force which balance the energetics, cf. \(\mathbf{T}_{\mbox{\scriptsize mag}}:=\mathbf{K}+\mathbf{S}\) and \(\mu_{0}(\nabla\mathbf{h})^{\top}\mathbf{m}-\mu_{0}\nabla( \mathbf{h}\cdot\mathbf{m})=:\mathbf{f}_{\mbox{ \scriptsize mag}}\) while \(\mathscr{S}:=\kappa(\mathbf{F})\mbox{Skw}(\nabla\mathbf{m} \otimes\mathbf{m})/\det\mathbf{F}\) will be a "magnetic exchange hyperstress" in (2.30b). The driving magnetic force (2.18c) enters the Landau-Lifschitz-Gilbert equation (2.15) in the previous section. The third ingredient, i.e. (2.18d), is subjected to the _entropy equation_: \[\frac{\partial\eta}{\partial t}+\mbox{div}\big{(}\mathbf{v}\,\eta \big{)}=\frac{\xi-\mbox{div}\,\mathbf{j}}{\theta}\ \ \ \ \ \mbox{with}\ \ \mathbf{j}=-\mathbf{\cal{K}}(\mathbf{F},\theta)\nabla\theta \tag{2.22}\] and with \(\xi=\xi(\mathbf{F},\theta;\mathbf{e}(\mathbf{v}), \nabla^{2}\mathbf{v},\mbox{\boldmath$\overset{\mbox{\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\mbox{\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny\tiny \tiny normal velocity \(\mathbf{v}\!\cdot\!\mathbf{n}\) vanishes across the boundary \(\varGamma\) of \(\varOmega\), we obtain the _Clausius-Duhem inequality_: \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\varOmega}\eta\,\mathrm{d}\mathbf{x}=\int_{ 
\varOmega}\underbrace{\frac{\xi}{\theta}+\mathcal{X}\frac{|\nabla\theta|^{2}}{ \theta^{2}}}_{\text{entropy production rate}}\,\mathrm{d}\mathbf{x}+\int_{\varGamma} \underbrace{\Big{(}\mathcal{X}\frac{\nabla\theta}{\theta}-\eta\mathbf{v}\Big{)}}_{ \text{entropy flux}}\cdot\mathbf{n}\,\mathrm{d}S\geq\int_{\varGamma}\mathcal{X}\frac{ \nabla\theta\!\cdot\!\mathbf{n}}{\theta}\,\mathrm{d}S\,. \tag{2.23}\] If the system is thermally isolated in the sense that the normal heat flux \(\mathbf{j}\!\cdot\!\mathbf{n}\) vanishes across the boundary \(\varGamma\), we recover the _2nd law of thermodynamics_, i.e. the total entropy in isolated systems is nondecreasing in time. Substituting \(\eta\) from (2.18d) into (2.22) written in the form \(\theta\dot{\mathbf{\eta}}=\xi-\operatorname{div}\mathbf{j}-\theta\eta\mathrm{div}\, \mathbf{v}\), we obtain \[c(\mathbf{F},\mathbf{m},\theta)\dot{\mathbf{\theta}} =\xi\big{(}\mathbf{F},\theta;\mathbf{e}(\mathbf{v}),\nabla^{2}\mathbf{v},\dot{\bm {m}}\big{)}+\theta\Big{(}\frac{\zeta^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}{ \det\mathbf{F}}\Big{)}^{\prime}_{\mathbf{F}}\!\cdot\!\dot{\mathbf{F}}\] \[\qquad\qquad\qquad\qquad\qquad+\theta\Big{(}\frac{\zeta^{\prime }_{\theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\Big{)}^{\prime}_{\mathbf{m}}\!\cdot\! \dot{\mathbf{m}}+\theta\,\frac{\zeta^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}{ \det\mathbf{F}}\mathrm{div}\,\mathbf{v}-\operatorname{div}\mathbf{j}\] \[=\xi\big{(}\mathbf{F},\theta;\mathbf{e}(\mathbf{v}),\nabla^{2}\mathbf{v},\dot{\bm {m}}\big{)}+\theta\frac{\zeta^{\prime\prime}_{\mathbf{F}\theta}(\mathbf{F},\mathbf{m}, \theta)}{\det\mathbf{F}}\!\cdot\!\dot{\mathbf{F}}+\theta\frac{\zeta^{\prime\prime}_{ \mathbf{m}\theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\!\cdot\!\dot{\mathbf{m}}- \operatorname{div}\mathbf{j}\] \[\qquad\qquad\qquad\qquad\qquad\text{with the heat capacity }\;c(\mathbf{F},\mathbf{m},\theta)=-\theta\,\frac{\zeta^{\prime\prime}_{\theta \theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\,, \tag{2.24}\] which can be understood as the _heat equation_ for the temperature \(\theta\) as an intensive variable. The referential _internal energy_ is given by the _Gibbs relation_\(\uppsi+\theta\eta\). In our Eulerian formulation, we will need rather the actual internal energy, which, in view of (2.18d), equals here to \[\underbrace{\frac{\uppsi-\theta\uppsi^{\prime}_{\theta}}{\det\mathbf{F}}}_{ \begin{subarray}{c}\text{actual}\\ \text{internal energy}\end{subarray}}=\underbrace{\frac{\upvarphi(\mathbf{F},\mathbf{m})}{ \det\mathbf{F}}+\frac{\upkappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}}_{ \begin{subarray}{c}\text{actual stored and}\\ \text{exchange energy}\end{subarray}}+\underbrace{\frac{\zeta(\mathbf{F},\mathbf{m}, \theta)\!-\!\theta\zeta^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}} \cdot}_{\begin{subarray}{c}\text{the internal energy}\end{subarray}}\,. \tag{2.25}\] In terms of \(w\), the heat equation (2.24) can be written in the so-called _enthalpy formulation_: \[\frac{\partial w}{\partial t}+\operatorname{div}(\mathbf{v}w)=\xi \big{(}\mathbf{F},\theta;\mathbf{e}(\mathbf{v}),\nabla^{2}\mathbf{v},\dot{\mathbf{m}}\big{)}+ \frac{\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\!\cdot\!\dot{ \mathbf{F}}+\frac{\zeta^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\! 
\cdot\!\dot{\mathbf{m}}-\operatorname{div}\mathbf{j}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\text{with}\quad w=\frac{\zeta(\mathbf{F},\mathbf{m},\theta)-\theta\zeta^{ \prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\,. \tag{2.26}\] Note that \(w\) is an extensive variable so that the left-hand side of (2.26) is not just a convective derivative \(\dot{w}\). For the passage from (2.24) to (2.26), we use the algebra \(F^{-1}=\operatorname{Cof}F^{\top}/\det F\) and the calculus \(\det^{\prime}(F)=\operatorname{Cof}F\) and (2.9) so that \((1/\det\mathbf{F})^{\prime}\!\cdot\!\dot{\mathbf{F}}=-(\operatorname{Cof}\mathbf{F}/\det\mathbf{F }^{2})\!\cdot\!(\nabla\mathbf{v})\mathbf{F}=-(\mathbf{F}^{-\top}\!/\det\mathbf{F})\mathbf{F}^{\top} \!\!\cdot\!\nabla\mathbf{v}=(\operatorname{div}\mathbf{v})/\det\mathbf{F}\), and thus we can calculate \[\frac{\partial w}{\partial t} +\operatorname{div}(\mathbf{v}w)=\dot{w}+w\operatorname{div}\mathbf{v}\] \[=\Big{(}\frac{\overline{\zeta(\mathbf{F},\mathbf{m},\theta)-\theta\zeta^{ \prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}}{\det\mathbf{F}}\Big{)}+\frac{\zeta(\mathbf{F}, \mathbf{m},\theta)-\theta\,\zeta^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F }}\operatorname{div}\mathbf{v}\] \[=\left(\Big{(}\frac{\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)}{ \det\mathbf{F}}\Big{)}^{\prime}_{\mathbf{F}}-\theta\Big{(}\frac{\zeta^{\prime}_{\mathbf{G} }(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\Big{)}^{\prime}_{\mathbf{F}}\right)\!\!\!:\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! proposed in [58, Formula (74)]. More convective derivative for magnetization has been used in [5, 25, 53] to model rather (incompressible isothermal) fluids containing magnetic particles. 
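The matrix calculus used above in passing from (2.24) to (2.26) can be spot-checked numerically; the sketch below verifies \(\mathrm{Cof}\,F=(\det F)F^{-\top}\), the derivative rule \(\det^{\prime}(F)=\mathrm{Cof}\,F\), and the directional derivative of \(1/\det\boldsymbol{F}\) along \(\dot{\boldsymbol{F}}=(\nabla\boldsymbol{v})\boldsymbol{F}\) as in (2.10), for arbitrary illustrative matrices:

```python
import numpy as np

# Spot-check of the algebra used between (2.24) and (2.26) for arbitrary
# illustrative matrices F (close to the identity, so det F > 0) and Lv = grad v.
rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
Lv = 0.3 * rng.standard_normal((3, 3))
dF = Lv @ F                                     # \dot F = (grad v) F, cf. (2.9)

cof = np.linalg.det(F) * np.linalg.inv(F).T     # Cof F = (det F) F^{-T}
eps = 1.0e-6
d_det = (np.linalg.det(F + eps * dF) - np.linalg.det(F)) / eps
print(d_det, np.tensordot(cof, dF))             # det'(F).dF = Cof F : dF

d_inv = (1.0 / np.linalg.det(F + eps * dF) - 1.0 / np.linalg.det(F)) / eps
print(d_inv, -np.trace(Lv) / np.linalg.det(F))  # = -(div v)/det F, cf. (2.10)
```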
**Example 2.5** (Neo-Hookean elastic magnets).: Modifying slightly the "rigid" model (2.8) and expanding it by standard neo-Hookean elastic ansatz, one obtains an example for elastic magnetic material amenable for ferro-to-paramagnetic transition and for complying with the (3.5b-e) below: \[\begin{split}\boldsymbol{\psi}(\boldsymbol{F},\boldsymbol{m}, \nabla\boldsymbol{m},\theta)&=\frac{1}{2}G\Big{(}\frac{\text{tr} (\boldsymbol{F}\boldsymbol{F}^{\top})}{(\det\boldsymbol{F})^{2/d}}-d\Big{)}+v (\det\boldsymbol{F})+\frac{\boldsymbol{\kappa}(\boldsymbol{F})}{2}|\nabla \boldsymbol{m}|^{2}\\ &+a_{0}(\det\boldsymbol{F})\Big{(}\frac{\theta}{1{+}\epsilon_{1 }\theta}{-}\frac{\theta_{\text{\tiny{c}}}}{1{+}\epsilon_{1}\theta_{\text{ \tiny{c}}}}\Big{)}\frac{|\boldsymbol{m}|^{2}}{1{+}\epsilon_{2}|\boldsymbol{m} |^{2}}+b_{0}(\det\boldsymbol{F})|\boldsymbol{m}|^{4}+c_{0}\theta(1{-}{\ln} \theta)\end{split}\] with some (referential) heat capacity \(c_{0}>0\), some \(\epsilon_{1},\epsilon_{2}>0\), some shear modulus \(G>0\), and the non-negative volumetric energy \(v\in C^{1}(\mathbb{R}^{+})\) and \(a_{0}(J)\geq\delta J\) and \(b_{0}(J)\geq\delta J\) for some \(\delta>0\) so that \(\boldsymbol{\varphi}\) fulfills (up to an irrelevant constant) the coercivity (3.5b) with \(s=4\). Note that \(\boldsymbol{\psi}(\boldsymbol{F},\boldsymbol{m},\nabla\boldsymbol{m},\cdot)\) is concave for \(c_{0},c_{1}>0\) and \(\epsilon_{1},\epsilon_{2}\geq 0\). Then, the split (2.16) uses \[\zeta(\boldsymbol{F},\boldsymbol{m},\theta)=\frac{a_{0}(\det\boldsymbol{F}) \theta|\boldsymbol{m}|^{2}}{(1{+}\epsilon_{1}\theta)(1{+}\epsilon_{2}| \boldsymbol{m}|^{2})}+c_{0}\theta(1{-}{\ln}\theta) \tag{2.27}\] so that \[\omega(\boldsymbol{F},\boldsymbol{m},\theta)=\frac{\zeta(\boldsymbol{F}, \boldsymbol{m},\theta)-\theta\zeta^{\prime}_{\theta}(\boldsymbol{F}, \boldsymbol{m},\theta)}{\det\boldsymbol{F}}=\frac{c_{0}\theta}{\det \boldsymbol{F}}+\frac{\epsilon_{1}a_{0}(\det\boldsymbol{F})\theta^{2}| \boldsymbol{m}|^{2}}{(1{+}\epsilon_{1}\theta)^{2}(1{+}\epsilon_{2}| \boldsymbol{m}|^{2})\det\boldsymbol{F}} \tag{2.28}\] and the (actual) heat capacity \(c(\boldsymbol{F},\boldsymbol{m},\theta)=-\theta\zeta^{\prime\prime}_{\theta \theta}(\boldsymbol{F},\boldsymbol{m},\theta)/\det\boldsymbol{F}\) is \[c(\boldsymbol{F},\boldsymbol{m},\theta)=\omega^{\prime}_{\theta}(\boldsymbol {F},\boldsymbol{m},\theta)=\frac{c_{0}}{\det\boldsymbol{F}}+\frac{2\epsilon_{ 1}a_{0}(\det\boldsymbol{F})\theta|\boldsymbol{m}|^{2}}{(1{+}\epsilon_{1} \theta)^{3}(1{+}\epsilon_{2}|\boldsymbol{m}|^{2})\det\boldsymbol{F}}\,. \tag{2.29}\] Note that "thermo-coupling" stress \(\zeta^{\prime}_{\boldsymbol{F}}(\boldsymbol{F},\boldsymbol{m},\theta) \boldsymbol{F}^{\top}\!/\!\det\boldsymbol{F}=a^{\prime}_{0}(\det\boldsymbol{F} )\theta|\boldsymbol{m}|^{2}\mathbb{I}/((1{+}\epsilon_{1}\theta)(1{+}\epsilon_ {2}|\boldsymbol{m}|^{2}))\) is bounded provided \(a^{\prime}_{0}\) is bounded on \(\text{GL}^{+}(d)\), so it surely complies with (3.5c) below. Also \(|\zeta^{\prime}_{\boldsymbol{m}}(\boldsymbol{F},\cdot,\cdot)/\!\det \boldsymbol{F}|\) is bounded for \(\boldsymbol{F}\) ranging over compact sets in \(\text{GL}^{+}(d)\), so it surely complies with (3.5c); here we use \(\epsilon_{1}>0\) and \(\epsilon_{2}>0\). Also this ansatz satisfies (3.5d). 
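As a minimal consistency check of this example, the following sketch evaluates (2.27)-(2.29) for the illustrative choice \(a_{0}(J)=J\) and placeholder constants (not material data), and verifies numerically that \(\omega=(\zeta-\theta\zeta^{\prime}_{\theta})/\det\boldsymbol{F}\) and \(c=\omega^{\prime}_{\theta}\):

```python
import numpy as np

# Spot-check of (2.27)-(2.29); a0(J) = J and the constants below are
# illustrative placeholders, not material data. J = det F, m2 = |m|^2.
c0, eps1, eps2 = 1.0, 0.5, 0.5
a0 = lambda J: J

def zeta(J, m2, theta):                       # thermal part (2.27)
    return a0(J) * theta * m2 / ((1 + eps1 * theta) * (1 + eps2 * m2)) \
        + c0 * theta * (1.0 - np.log(theta))

def omega(J, m2, theta):                      # internal-energy part (2.28)
    return c0 * theta / J \
        + eps1 * a0(J) * theta**2 * m2 / ((1 + eps1 * theta)**2 * (1 + eps2 * m2) * J)

def heat_capacity(J, m2, theta):              # heat capacity (2.29)
    return c0 / J \
        + 2 * eps1 * a0(J) * theta * m2 / ((1 + eps1 * theta)**3 * (1 + eps2 * m2) * J)

J, m2, theta, h = 1.3, 0.7, 2.0, 1.0e-6
zeta_th = (zeta(J, m2, theta + h) - zeta(J, m2, theta - h)) / (2 * h)
print(omega(J, m2, theta), (zeta(J, m2, theta) - theta * zeta_th) / J)   # agree
omega_th = (omega(J, m2, theta + h) - omega(J, m2, theta - h)) / (2 * h)
print(heat_capacity(J, m2, theta), omega_th)                             # agree
```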
Moreover, \(|\omega^{\prime}_{\boldsymbol{F}}(\boldsymbol{F},\boldsymbol{m},\cdot)|\) has at most linear growth while and \(|\omega^{\prime}_{\boldsymbol{m}}(\boldsymbol{F},\boldsymbol{m},\cdot)|\) is even bounded as well as \(|\omega^{\prime\prime}_{\boldsymbol{F}\theta}|\) and \(|\omega^{\prime\prime}_{\boldsymbol{m}\theta}|\) for \(\boldsymbol{F}\) ranging over compact sets in \(\text{GL}^{+}(d)\), so that (3.5d) is satisfied, too. ### The thermo-magneto-mechanical system and its energetics Let us summarize the thermodynamically coupled system composed of six partial differential equations for \(\varrho\), \(\boldsymbol{v}\), \(\boldsymbol{F}\), \(\boldsymbol{m}\), \(u\), and \(\theta\). More specifically, it is composed from the mass continuity equation for \(\varrho\), the momentum equation written in terms of velocity \(\boldsymbol{v}\), the evolution-and-transport of the deformation-gradient tensor \(\boldsymbol{F}\), a flow rule (as an inclusion) for the magnetization \(\boldsymbol{m}\), the Poisson equation for the demagnetizing-field potential \(u\), and the heat-transfer equation for temperature \(\theta\). Altogether, merging (2.1), (2.9), (2.12), (2.15), (2.21), and (2.26) with (2.18), we obtain a system of six equations for \((\varrho,\mathbf{v},\mathbf{F},\mathbf{m},u,w)\), respectively also for \(\theta\): \[\frac{\partial\varrho}{\partial t}=-\,\mbox{div}(\varrho\mathbf{v})\,,\] (2.30a) \[\frac{\partial}{\partial t}(\varrho\mathbf{v})=\mbox{ div}\Big{(}\mathbf{T}+\mathbf{K}+\mathbf{S}+\mathbf{D}-\mbox{div}(\mathscr{H}+\mathscr{S})-\varrho\mathbf{v$ }\otimes\mbox{\boldmath$v}\Big{)}+\mu_{0}(\nabla\mathbf{h})^{ \top}\mathbf{m}-\mu_{0}\nabla(\mathbf{h}\cdot\mathbf{m})+\varrho\mathbf{g}\] \[\mbox{with}\ \ \mathbf{T}=\Big{(}\frac{\varphi^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m})+\zeta^{\prime}_{ \mathbf{F}}(\mathbf{F},\mathbf{m},\theta)}{\det \mathbf{F}}+\frac{|\nabla\mathbf{m}|^{2}\kappa^{\prime}( \mathbf{F})}{2\det\mathbf{F}}\,\Big{)}\mathbf{F}^{ \top}\,,\quad\mathbf{h}=\mathbf{h}_{\rm ext}+\nabla u\,,\] \[\mathbf{K}=\frac{\kappa(\mathbf{F})}{\det\mathbf{F}}\nabla\mathbf{m}\otimes\nabla\mathbf{m}\,, \ \ \mathbf{D}=\nu_{1}|\mathbf{e}(\mathbf{v})|^{p-2}\mathbf{e}(\mathbf{v})\,,\quad\ \mathscr{H}=\nu_{2}|\nabla^{2}\mathbf{v}|^{p-2}\nabla^{2}\mathbf{v}\,,\] \[\mathbf{S}=\mbox{skw}\Big{(}\Big{(}\mu_{0}\mathbf{h}-\frac{\Psi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}\Big{)}\otimes\mathbf{m}\Big{)}\,,\ \ \mbox{and}\ \ \mathscr{S}=\frac{\kappa(\mathbf{F})}{\det\mathbf{F}}\mbox{Skw}\big{(}\mathbf{m}\otimes\nabla\mathbf{m}\big{)},\] (2.30b) \[\frac{\partial\mathbf{F}}{\partial t}=(\nabla\mathbf{v})\mathbf{F}-(\mathbf{v}\cdot\nabla)\mathbf{F}\,,\] (2.30c) \[\tau\mathbf{\dot{m}}+h_{\mbox{\tiny c}}(\mathbf{F},\theta)\mbox{Dir}(\mathbf{\dot{m}})-\frac{\mathbf{m}\times\mathbf{\dot{m}}^{\mbox{\tiny g}}_{\mathbf{m}}}{\gamma(\mathbf{F},\mathbf{m},\theta)}\ni\mu_{0 }\mathbf{h}-\frac{\Psi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}+\mbox{ div}\Big{(}\frac{\kappa(\mathbf{F})}{\det\mathbf{F}}\nabla\mathbf{m}\Big{)}\,,\] (2.30d) \[\Delta u=\mbox{div}(\chi_{\Omega}\mathbf{m})\quad\mbox{ on }\mathbb{R}^{d}\,,\] (2.30e) \[\frac{\partial w}{\partial t}=\xi(\mathbf{F},\theta;\mbox {\boldmath$e$}(\mathbf{v}),\nabla^{2}\mathbf{v},\mathbf{\dot{m}})+\frac{\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}}{\det \mathbf{F}}\mbox{:}\mathbf{e}(\mathbf{v})+\frac{ \zeta^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m}, 
\theta)}{\det\mathbf{F}}\cdot\dot{\mathbf{m}}-\mbox{div} \big{(}\mathbf{j}+w\mathbf{v}\big{)}\] \[\mbox{with}\ \ \xi(\mathbf{F},\theta;\mathbf{e},\mathbf{G},\mathbf{r})=\nu_{1}|\mathbf{e}|^{p}+\nu_{2}| \mathbf{G}|^{p}+\mu_{0}\tau|\mathbf{r}|^{2}+\mu_{0}h_{\mbox{ \tiny c}}(\mathbf{F},\theta)|\mathbf{r}|\] \[\mbox{and}\ \ \mathbf{j}=-\mbox{\footnotesize$\mbox{\footnotesize$ \mbox{\footnotesize$\mbox{\footnotesize$\mbox{\footnotesize$\mbox{\footnotesize$\mbox{ \footnotesize$\mbox{\footnotesize$\mbox{\footnotesize$\mbox{\footnotesize$\mbox{\footnotesize$ \mbox{\footnotesize$\mbox{\footnotesize$\mbox{\mbox{\mbox{\mbox{\mbox \[\nabla^{2}\mathbf{v}\!:\!\!(\mathbf{n}\!\otimes\!\mathbf{n})=\mathbf{0}\,,\quad(\mathbf{n}\!\cdot\!\nabla) \mathbf{m}=\mathbf{0}\,,\quad u(\infty)=0\,,\quad\mbox{ and}\quad\mathbf{j}\!\cdot\!\mathbf{n}=h(\theta)+\frac{\nu_{ \flat}}{2}|\mathbf{v}|^{p} \tag{2.32b}\] with \(\nu_{\flat}>0\) a boundary viscosity coefficient and with \([\,\cdot\,]_{\mbox{\tiny T}}\) a tangential part of a vector and with \(\mbox{div}_{\mbox{\tiny S}}=\mbox{tr}(\nabla_{\mbox{\tiny S}})\) denoting the \((d{-}1)\)-dimensional surface divergence with \(\mbox{tr}(\cdot)\) being the trace of a \((d{-}1){\times}(d{-}1)\)-matrix and \(\nabla_{\mbox{\tiny S}}v=\nabla v-\frac{\partial v}{\partial\mbox{\boldmath $n$}}\) being the surface gradient of \(v\). Naturally, \(\mathbf{k}\!\cdot\!\mathbf{n}=0\) is to be assumed if we want to recover the boundary conditions (2.32a) in the classical form, otherwise the weak form does not directly need it. The first condition (i.e. normal velocity zero) expresses nonpenetrability of the boundary was used already for (2.23) and is most frequently adopted in literature for Eulerian formulation. This simplifying assumption fixes the shape of \(\varOmega\) in its referential configuration allows also for considering fixed boundary even for such time-evolving Eulerian description. The latter condition in (2.32a) involving a boundary viscosity comes from the Navier boundary condition largely used in fluid dynamics and is here connected with the technique used below, which is based on the total energy balance as the departing point and which, unfortunately, does not allow to cope with \(\nu_{\flat}=0\) and simultaneously \(\mathbf{k}\neq 0\). This boundary viscosity naturally may contribute to the heat production on the boundary as well as to the outflow of the heat energy to the outer space. For notational simplicity, we consider that it is just equally distributed, one part remaining on the boundary of \(\varOmega\) and the other part leaving outside, which is related with the coefficient \(1/2\) in the last condition in (2.32b). The condition \(u(\infty)=0\) in (2.32b) expresses shortly that \(\lim_{|\mathbf{x}|\to 0}u(\mathbf{x})=0\). The magnetization flow rule (2.30d) with the corotational derivative \(\stackrel{{\circ}}{{\mathbf{m}}}\) in see also [7, 33] where it is articulated that the magnetization is "frozen" in the deforming medium if \(\stackrel{{\circ}}{{\mathbf{m}}}=0\) which then means that the magnetization is transported and rotates at the same local rate as the deforming medium; this is the situation below the blocking temperature \(\theta_{\flat}\) and when the total driving magnetic field has small magnitude. For the capillarity-like stress \(K\) and and the skew-symmetric stress \(S\) see also [7, 12]. 
The magneto-mechanical energy balance of the model can be seen when testing the momentum equation (2.30b) by \(v\) while using the continuity equation (2.30a) tested by \(|\mathbf{v}|^{2}/2\) and the evolution-and-transport equation (2.30c) for \(F\) tested by the stress \([\mathbf{\varphi}(\mathbf{F},\mathbf{m})/\mbox{ det}\,\mathbf{F}]^{\prime}_{\mbox{\scriptsize$F$}}\), the magnetic Landau-Lifshitz-Gilberg equation (2.30d) by \(\stackrel{{\circ}}{{\mathbf{m}}}\), and the (rest from the) Maxwell system (2.30e) by \(\mu_{0}\frac{\partial u}{\partial t}\). Let us first make the test of (2.30b) by \(v\). Using again the algebra \(F^{-1}=\mbox{Cof}\,F^{\top}\!/\mbox{det}\,\,F\) and the calculus \(\mbox{det}^{\prime}(F)=\mbox{Cof}\,F\), we can write the part of the Cauchy stress arising from the stored energy as \[\frac{\mathbf{\varphi}^{\prime}_{\mbox{\scriptsize$F$}} (\mathbf{F},\mathbf{m})}{\mbox{det}\,\mathbf{F}} \mathbf{F}^{\top}=\frac{\mathbf{\varphi}^{\prime}_{\mbox{ \scriptsize$F$}}(\mathbf{F},\mathbf{m})-\mathbf{ \varphi}(\mathbf{F},\mathbf{m})\mathbf{F}^{-\top} }{\mbox{det}\,\mathbf{F}}\mathbf{F}^{\top}+\frac{\mathbf{\varphi}(\mathbf{F},\mathbf{m})}{\mbox{det}\, \mathbf{F}}\mathbb{I}\] \[\quad=\left(\frac{\mathbf{\varphi}^{\prime}_{\mbox{ \scriptsize$F$}}(\mathbf{F},\mathbf{m})}{\mbox{det}\,\mathbf{F}}-\frac{\mathbf{\varphi}(\mathbf{F},\mathbf{m})\mbox{Cof}\,\mathbf{F}}{(\mbox{det}\,\mathbf{F} )^{2}}\right)\mathbf{F}^{\top}\!+\frac{\mathbf{\varphi}( \mathbf{F},\mathbf{m})}{\mbox{det}\,\mathbf{F}} \mathbb{I}\Big{[}\frac{\mathbf{\varphi}(\mathbf{F},\mathbf{m})}{\mbox{det}\,\mathbf{F}}\Big{]}^{\prime}_{\mbox{ \scriptsize$F$}}\mathbf{F}^{\top}\!\!+\frac{\mathbf{\varphi}( \mathbf{F},\mathbf{m})}{\mbox{det}\,\mathbf{F}} \mathbb{I}\,. \tag{2.33}\] Let us recall that \(\mathbf{\varphi}(\mathbf{F},\mathbf{m})/\mbox{det}\, \mathbf{F}\) in (2.33) is the stored energy per actual (not referential) volume. Using the calculus (2.33), we obtain \[\int_{\varOmega}\mbox{div}\,\mathbf{T}\!\cdot\!\mathbf{v}\,\mbox{d}\mathbf{x}=\!\int_{\varGamma}(\mbox{\boldmath $T$}\mathbf{n})\!\cdot\!\mathbf{v}\,\mbox{d}S-\!\int_{ \varOmega}\!\!\mathbf{T}\!\cdot\!\mathbf{e}(\mathbf{v})\,\mbox{d}\mathbf{x}\quad\mbox{ with}\] \[\int_{\Omega}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
\[-\!\int_{\Omega}\!\!\nu_{1}|\mathbf{e}(\mathbf{v})|^{p}+\nu_{2}|\nabla^{2}\mathbf{v}|^{p}\, \mathrm{d}\mathbf{x}\,, \tag{2.37}\] where we used the decomposition of \(\nabla\mathbf{v}\) into its normal component \(\partial_{\mathbf{n}}\mathbf{v}\) and the tangential component, i.e. written componentwise \(\nabla\mathbf{v}_{i}=(\mathbf{n}\!\cdot\!\nabla\mathbf{v}_{i})\mathbf{n}+\nabla\!_{\!\mathrm{s} }\mathbf{v}_{i}\). Furthermore, the inertial force \(\frac{\partial}{\partial t}(\varrho\mathbf{v})+\mathrm{div}(\varrho\mathbf{v}{\otimes }\mathbf{v})\) in (2.30b) tested by \(\mathbf{v}\) gives the rate of kinetic energy \(\varrho|\mathbf{v}|^{2}/2\) integrating over \(\Omega\). Here we use the continuity equation (2.12) tested by \(|\mathbf{v}|^{2}/2\) and the Green formula with the boundary condition \(\mathbf{v}\!\cdot\!\mathbf{n}=0\): \[\int_{\Omega}\Big{(}\frac{\partial}{\partial t}(\varrho\mathbf{v})+\mathrm{div}( \varrho\mathbf{v}{\otimes}\mathbf{v})\Big{)}\!\cdot\!\mathbf{v}\,\mathrm{d}\mathbf{x}=\int_{ \Omega}\varrho\dot{\mathbf{v}}\!\cdot\!\mathbf{v}\,\mathrm{d}\mathbf{x}=\frac{\mathrm{d}} {\mathrm{d}t}\int_{\Omega}\frac{\varrho}{2}|\mathbf{v}|^{2}\,\mathrm{d}\mathbf{x}+\! \int_{\Gamma}\varrho|\mathbf{v}|^{2}\underbrace{\mathbf{v}\!\cdot\!\mathbf{n}}_{=\,0} \mathrm{d}S\,. \tag{2.38}\] The test of (2.30d) by \(\mathbf{\hat{m}}\) is quite technical. The exchange-energy term \(\mathrm{div}(\kappa(\mathbf{F})\nabla\mathbf{m})/\mathrm{det}\,\mathbf{F})\) tested by \((\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\) is to be handled by using Green's formula twice. Namely, \[\int_{\Omega}\!\!\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F})\nabla \mathbf{m}}{\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\, \mathrm{d}\mathbf{x}=\!\int_{\Gamma}\frac{\kappa(\mathbf{F})(\mathbf{n}\!\cdot\!\nabla)\bm {m}}{\mathrm{det}\,\mathbf{F}}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\,\mathrm{d}S\] \[-\!\int_{\Omega}\frac{\kappa(\mathbf{F})\nabla^{2}\mathbf{m}}{\mathrm{det }\,\mathbf{F}}\!\cdot\!(\mathbf{v}{\otimes}\nabla\mathbf{m})+\frac{\kappa(\mathbf{F})(\nabla \mathbf{m}{\otimes}\nabla\mathbf{m})}{\mathrm{det}\,\mathbf{F}}\!\cdot\!\mathbf{e}(\mathbf{v})\, \mathrm{d}\mathbf{x}\] \[=\!\int_{\Gamma}\frac{\kappa(\mathbf{F})(\mathbf{n}\!\cdot\!\nabla)\mathbf{ m}}{\mathrm{det}\,\mathbf{F}}\!\cdot\!\big{(}(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\big{)}- \frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\mathbf{v}\!\cdot\!\mathbf{n}\, \mathrm{d}S\] \[+\!\int_{\Omega}\frac{|\nabla\mathbf{m}|^{2}}{2}\nabla\Big{(}\frac{ \kappa(\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!\mathbf{v}+\frac{\kappa(\mathbf{ F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\mathrm{div}\,\mathbf{v}-\underbrace{\frac{\kappa(\mathbf{F})( \nabla\mathbf{m}{\otimes}\nabla\mathbf{m})}{\mathrm{det}\,\mathbf{F}}\!\cdot\!\mathbf{e}(\mathbf{ v})\,\mathrm{d}\mathbf{x}\,, \tag{2.39}\] where the boundary integral vanishes due to the boundary conditions \((\mathbf{n}\!\cdot\!\nabla)\mathbf{m}=\mathbf{0}\) and \(\mathbf{v}\!\cdot\!\mathbf{n}=0\). The latter equality in (2.39) follows by the calculus and the Green formula: \[\int_{\Omega}\frac{\kappa(\mathbf{F})\nabla^{2}\mathbf{m}}{\mathrm{det}\, \mathbf{F}}\!:\!(\mathbf{v}{\otimes}\nabla\mathbf{m})\,\mathrm{d}\mathbf{x}=\int_{\Gamma} \frac{\kappa(\mathbf{F})(\mathbf{n}\!\cdot\!\nabla)\mathbf{m}}{\mathrm{det}\,\mathbf{F}}\! 
\cdot\!\big{(}(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\big{)}\,\mathrm{d}S\] \[-\!\int_{\Omega}\frac{\kappa(\mathbf{F})\nabla\mathbf{m}{\otimes}\mathbf{v}}{ \mathrm{det}\,\mathbf{F}}\!:\!\nabla^{2}\mathbf{m}+|\nabla\mathbf{m}|^{2}\Big{(}\frac{\kappa (\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\!\mathrm{div}\,\mathbf{v}+\nabla\Big{(}\frac{\kappa (\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!\mathbf{v}\Big{)}\,\mathrm{d}\mathbf{x}\] \[=-\!\!\int_{\Omega}\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det \mathbf{F}}\mathrm{div}\,\mathbf{v}+\frac{|\nabla\mathbf{m}|^{2}}{2}\nabla\Big{(}\frac{ \kappa(\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!\mathbf{v}\,\mathrm{d}\mathbf{x}\,, \tag{2.40}\] where again \((\mathbf{n}\!\cdot\!\nabla)\mathbf{m}=\mathbf{0}\) was used. For the last term, we can still use the calculus \[\nabla\Big{(}\frac{\kappa(\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\Big{)}=\Big{(}\frac{ \kappa(\mathbf{F})}{\mathrm{det}\,\mathbf{F}}\Big{)}^{\prime}\!\!:\!\nabla\mathbf{F}=\Big{(} \frac{\kappa^{\prime}(\mathbf{F})}{\mathrm{det}\,\mathbf{F}}-\frac{\kappa(\mathbf{F}) \mathrm{Cof}\,\mathbf{F}}{\mathrm{det}\,\mathbf{F}^{2}}\Big{)}\!\!:\!\nabla\mathbf{F}=\frac{ \kappa^{\prime}(\mathbf{F})-\kappa(\mathbf{F})\mathbf{F}^{-\top}}{\mathrm{det}\,\mathbf{F}} \!:\!\nabla\mathbf{F}, \tag{2.41}\] where we again used the algebra \(F^{-\top}=\mathrm{Cof}F/\mathrm{det}\,F\) and the calculus \(\mathrm{det}^{\prime}=\mathrm{Cof}\). Thus (2.39) can be written as: \[\int_{\Omega}\!\!\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F})\nabla\mathbf{ m}}{\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}\,\mathrm{d}\mathbf{x}=\!\int_{\Omega} \!\bigg{(}|\nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\!-\!\kappa(\mathbf{F}) \mathbf{F}^{-\top}}{2\det\mathbf{F}}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{F}\] \[+\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\mathrm{div}\, \mathbf{v}-\mathbf{K}\!\cdot\!e(\mathbf{v})\bigg{)}\,\mathrm{d}\mathbf{x}\,, \tag{2.42}\] Similarly, this exchange-energy term \(\mathrm{div}(\kappa(\mathbf{F})\nabla\mathbf{m}/\mathrm{det}\,\mathbf{F})\) tested by \(\frac{\partial}{\partial t}\mathbf{m}\) is to be handled by using Green's formula once: \[\int_{\Omega}\!\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F})\nabla\mathbf{m}} {\mathrm{det}\,\mathbf{F}}\Big{)}\!\cdot\!\frac{\partial\mathbf{m}}{\partial t}\, \mathrm{d}\mathbf{x}=\!\int_{\Gamma}\!\frac{\kappa(\mathbf{F})(\mathbf{n}\!\cdot\!\nabla) \mathbf{m}}{\mathrm{det}\,\mathbf{F}}\!\cdot\!\frac{\partial\mathbf{m}}{\partial t}\, \mathrm{d}S-\!\int_{\Omega}\!\frac{\kappa(\mathbf{F})\nabla\mathbf{m}}{\mathrm{det}\, \mathbf{F}}\!\cdot\!\nabla\frac{\partial\mathbf{m}}{\partial t}\,\mathrm{d}\mathbf{x}\] \[\qquad\qquad\qquad=-\frac{\mathrm{d}}{\mathrm{d}t}\!\int_{ \Omega}\!\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\mathrm{d}\mathbf{x} +\int_{\Omega}\!\frac{|\nabla\mathbf{m}|^{2}}{2}\Big{(}\frac{\kappa(\mathbf{F})}{ \mathrm{det}\,\mathbf{F}}\Big{)}^{\prime}\!\!:\!\frac{\partial\mathbf{F}}{\partial t} \,\mathrm{d}\mathbf{x}\] \[\qquad\qquad\qquad=-\frac{\mathrm{d}}{\mathrm{d}t}\!\int_{ \Omega}\!\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\mathrm{d}\mathbf{x} +\int_{\Omega}\!|\nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\!-\!\kappa( \mathbf{F})\mathbf{F}^{-\!\top}}{2\det\mathbf{F}}\!\cdot\!\frac{\partial\mathbf{F}}{\partial t }\,\mathrm{d}\mathbf{x}\,, \tag{2.43}\] where we again used \((\mathbf{n}\!\cdot\!\nabla)\mathbf{m}=\mathbf{0}\) on 
\(\Gamma\) and, where we again used, as in (2.41), that \((\kappa(\mathbf{F})/\mathrm{det}\,\mathbf{F})^{\prime}=(\kappa^{\prime}(\mathbf{F})\!-\! \kappa(\mathbf{F})\mathbf{F}^{-\!\top})/\mathrm{det}\,\mathbf{F}\). To merge (2.42) and (2.43), we use (2.30c) and also the calculus \[|\nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\!-\!\kappa(\mathbf{F })\mathbf{F}^{-\!\top}}{2\det\mathbf{F}}\!\cdot\!\frac{\partial\mathbf{F}}{\partial t}+| \nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\!-\!\kappa(\mathbf{F})\mathbf{F}^{-\! \top}}{2\det\mathbf{F}}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{F}\] \[=|\nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\!-\!\kappa(\mathbf{F })\mathbf{F}^{-\!\top}}{2\det\mathbf{F}}\!\cdot\!(\nabla\mathbf{v})\mathbf{F}=|\nabla\mathbf{m}|^{ 2}\frac{\kappa^{\prime}(\mathbf{F})\mathbf{F}^{\!\top}\!-\!\kappa(\mathbf{F})\mathbb{I}}{2 \det\mathbf{F}}\!\cdot\!\mathbf{e}(\mathbf{v})\] \[=|\nabla\mathbf{m}|^{2}\frac{\kappa^{\prime}(\mathbf{F})\mathbf{F}^{\!\top}}{ 2\det\mathbf{F}}\!\cdot\!\mathbf{e}(\mathbf{v})-\frac{|\nabla\mathbf{m}|^{2}\kappa(\mathbf{F})}{2 \det\mathbf{F}}\mathrm{div}\,\mathbf{v} \tag{2.44}\] when the frame indifference of \(\kappa\) is assumed. Noticing that the last term in (2.44) cancels with the same pressure term in (2.42), we obtain \[\int_{\Omega}\!\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F})\nabla\mathbf{m}}{\mathrm{det }\,\mathbf{F}}\Big{)}\!\cdot\!\dot{\mathbf{m}}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\!\Big{(} \frac{\kappa^{\prime}(\mathbf{F})|\nabla\mathbf{m}|^{2}\mathbf{F}^{\!\top}}{2\det\mathbf{F}}\! -\mathbf{K}\Big{)}\!\cdot\!\mathbf{e}(\mathbf{v})\,\mathrm{d}x-\frac{\mathrm{d}}{\mathrm{d }t}\!\int_{\Omega}\!\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}\! \mathrm{d}\mathbf{x}\,. \tag{2.45}\] Moreover, we use the Green theorem also for the driving magnetic field \(\mathbf{h}=\mathbf{h}_{\mathrm{ext}}\!-\!\mathbf{h}_{\mathrm{dem}}\) with the demagnetizing field \(\mathbf{h}_{\mathrm{dem}}=-\nabla u\): \[\int_{\Omega}\mu_{0}\mathbf{h}\!\cdot\!\dot{\mathbf{m}}\,\mathrm{d}\mathbf{x} =\int_{\Omega}\mu_{0}\mathbf{h}_{\mathrm{ext}}\!\cdot\!\frac{\partial\mathbf{m}}{ \partial t}+\mu_{0}\mathbf{h}\!\cdot\!(\mathbf{v}\!\cdot\!\nabla)\mathbf{m}-\mu_{0}\mathbf{h}_ {\mathrm{dem}}\!\cdot\!\frac{\partial\mathbf{m}}{\partial t}\,\mathrm{d}\mathbf{x}\] \[=\int_{\Omega}\frac{\partial}{\partial t}\big{(}\mu_{0}\mathbf{h}_{ \mathrm{ext}}\!\cdot\!\mathbf{m}\big{)}-\mu_{0}\frac{\partial\mathbf{h}_{\mathrm{ext}} }{\partial t}\!\cdot\!\mathbf{m}-\mu_{0}(\nabla\mathbf{h})^{\top}\mathbf{m}\!\cdot\!\mathbf{v}+ \mu_{0}\nabla(\mathbf{h}\!\cdot\!\mathbf{m})\,\mathbf{v}-\mu_{0}\mathbf{h}_{\mathrm{dem}}\! 
\cdot\!\frac{\partial\mathbf{m}}{\partial t}\,\mathrm{d}\mathbf{x}\,.\] Altogether, this is used to handle the right-hand side of (2.30d) tested by \(\dot{\mathbf{m}}\): \[\int_{\Omega}\!\Big{(}\mu_{0}\mathbf{h}-\frac{\Psi^{\prime}_{\mathbf{m}}( \mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}+\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F}) \nabla\mathbf{m}}{\det\mathbf{F}}\Big{)}\Big{)}\!\cdot\!\dot{\mathbf{m}}\,\mathrm{d}\mathbf{x}\] \[=\!\int_{\Omega}\!\Big{(}\mu_{0}\mathbf{h}-\frac{\Psi^{\prime}_{\mathbf{m }}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}}+\mathrm{div}\Big{(}\frac{\kappa(\mathbf{F}) \nabla\mathbf{m}}{\det\mathbf{F}}\Big{)}\Big{)}\!\cdot\!\dot{\mathbf{m}}-(\mu_{0}\mathbf{h}\!- \!\mathbf{t})\!\cdot\!\mathrm{skw}(\nabla\mathbf{v})\mathbf{m}\,\mathrm{d}\mathbf{x}\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\!\mu_{0}\mathbf{h}_{ \mathrm{ext}}\!\cdot\!\mathbf{m}-\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}} \,\mathrm{d}\mathbf{x}+\!\int_{\Omega}\!\Big{(}\!\Big{(}\frac{\kappa^{\prime}(\mathbf{F}) |\nabla\mathbf{m}|^{2}\mathbf{F}^{\!\top}}{2\det\mathbf{F}}-\mathbf{K}\Big{)}\!\cdot\!\mathbf{e}( \mathbf{v})-\mu_{0}\frac{\partial\mathbf{h}_{\mathrm{ext}}}{\partial t}\!\cdot\!\mathbf{m}\] \[\quad-\Big{(}\frac{\varphi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m})}{\det \mathbf{F}}+\frac{\zeta^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)}{\det\mathbf{F}} \Big{)}\!\cdot\!\dot{\mathbf{m}}-\big{(}\underbrace{\big{(}\mu_{0}(\nabla\mathbf{h})^{ \top}\mathbf{m}-\mu_{0}\nabla(\mathbf{h}\!\cdot\!\mathbf{m})\big{)}\!\cdot\!\mathbf{v}-\mu_{0} \mathbf{h}_{\mathrm{dem}}\!}\!\cdot\!\frac{\partial\mathbf{m}}{\partial t}\] \[-\underbrace{\mathrm{skw}\big{(}(\mu_{0}\mathbf{h}{-}\!\!\psi^{\prime}_{ \mathbf{m}}(\mathbf{F},\mathbf{m},\theta)){\otimes}\mathbf{m}\big{)}}_{=\mathbf{S}\ \text{from 
(2.30b)}}{:}\nabla\mathbf{v}\Big{)}\,\mathrm{d}\mathbf{x}\,. \tag{2.46}\]
These calculations altogether lead to the following assertion.

**Proposition 2.7** (Total energy balance).: _Any smooth solution of the evolution boundary-value problem (2.30)-(2.32) satisfies the identity_
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\int_{\Omega}\underbrace{\frac{\varrho}{2}|\mathbf{v}|^{2}}_{\text{kinetic energy}}+\underbrace{\frac{\upvarphi(\mathbf{F},\mathbf{m})}{\det\mathbf{F}}}_{\text{stored energy}}+\underbrace{\frac{\upkappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{2\det\mathbf{F}}}_{\text{exchange energy}}-\underbrace{\mu_{0}\mathbf{h}_{\mathrm{ext}}\!\cdot\!\mathbf{m}}_{\text{Zeeman energy}}+\underbrace{\omega(\mathbf{F},\mathbf{m},\theta)}_{\text{heat energy}}\,\mathrm{d}\mathbf{x}+\int_{\mathbb{R}^{d}}\underbrace{\frac{\mu_{0}}{2}|\nabla u|^{2}}_{\text{energy of demagnetizing field}}\mathrm{d}\mathbf{x}\bigg{)}+\int_{\Gamma}\underbrace{\frac{\nu_{\flat}}{2}|\mathbf{v}|^{p}}_{\text{boundary dissipation}}\mathrm{d}S\\&\qquad\qquad=\int_{\Omega}\underbrace{\varrho\mathbf{g}\!\cdot\!\mathbf{v}}_{\text{power of gravity}}-\underbrace{\mu_{0}\frac{\partial\mathbf{h}_{\mathrm{ext}}}{\partial t}\!\cdot\!\mathbf{m}}_{\text{power of external field}}\,\mathrm{d}\mathbf{x}+\int_{\Gamma}\underbrace{\mathbf{k}\!\cdot\!\mathbf{v}}_{\text{power of traction}}+\underbrace{h(\theta)}_{\text{boundary heat flux}}\,\mathrm{d}S\,.\end{split} \tag{2.50}\]

Another aspect important both thermodynamically and also for the mathematical analysis is the non-negativity of temperature, related to the 3rd law of thermodynamics. This will be demonstrated later when we exploit some information about the quality of the velocity field extracted from (2.49), cf. (3.57) below.

**Remark 2.8** (Exchange hyper-stress).: In principle, to balance the energetics, the magnetic exchange driving force \(\mathrm{div}(\upkappa(\mathbf{F})\nabla\mathbf{m}/\det\mathbf{F})\) in (2.18c) may contribute either directly to the skew-symmetric magnetic stress \(\mathbf{S}\) by \(\mathrm{skw}(\mathrm{div}(\frac{\upkappa(\mathbf{F})\nabla\mathbf{m}}{\det\mathbf{F}})\otimes\mathbf{m})\), as considered in [48], or to the skew-symmetric hyperstress \(\mathscr{S}\). Physically it is rather questionable which option is more relevant. The former case would bring analytical troubles in the argumentation (3.52) below due to the lack of compactness of \(\nabla\mathbf{v}\), as \(\frac{\partial}{\partial t}\mathbf{v}\) is not estimated, in contrast to [48] where the inertial force was handled in a simplified "semi-compressible" way.
This has led us to adopt the latter option here, which also seems more physical; a similar skew-symmetric hyperstress can be found in [57].

**Remark 2.9** (Isotropic magnets).: Let us note that, when \(\uppsi(\mathbf{F},\cdot,\theta)\) is isotropic as in Example 2.5, the skew-symmetric magnetic stress \(\mathbf{S}\) simplifies to \(\mu_{0}\mathrm{skw}(\mathbf{h}\otimes\mathbf{m})\) because \(\uppsi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)=k(\mathbf{F},\mathbf{m},\theta)\mathbf{m}\) for some scalar-valued coefficient \(k=k(\mathbf{F},\mathbf{m},\theta)\), so that \(\mathrm{skw}(\uppsi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)\otimes\mathbf{m})=k(\mathbf{F},\mathbf{m},\theta)\mathrm{skw}(\mathbf{m}\otimes\mathbf{m})=\mathbf{0}\).

## 3 The analysis - weak solutions of (2.30)

We will provide a proof of existence and certain regularity of weak solutions. To this aim, the concept of multipolar viscosity is essential but, anyhow, still quite nontrivial and carefully ordered arguments will be needed. The peculiarities are that the inertial term in the Eulerian setting involves a varying mass density requiring sophisticated techniques from compressible fluid dynamics, the momentum equation is very geometrically nonlinear, and the heat equation has an \(L^{1}\)-structure with \(\mathbf{F}\)-dependent heat capacity, with the convective time derivative, and with ever-troubling adiabatic effects due to the necessarily general coupling of mechanical and thermal effects in the deforming configuration in compressible media. The usual analysis proceeds by some approximation, a-priori estimates, and limit passage, possibly in several steps. The mentioned strong nonlinearity makes time discretization problematic. On the other hand, the space discretization by a (conformal) Faedo-Galerkin method is also not straightforward because of several "nonlinear" tests leading to the basic energy balances in Section 2.4, being confronted in particular with the Lavrentiev phenomenon as occurring already in static nonlinear elasticity [1, 2, 19]. Anyhow, a careful, suitably regularized "semi-Galerkin" discretization allowing estimation of the magneto-mechanical part separately from the thermal part and a successive limit passage will work.

### Definition of weak solutions and the main results

We will use the standard notation concerning the Lebesgue and the Sobolev spaces, namely \(L^{p}(\varOmega;\mathbb{R}^{n})\) for Lebesgue measurable functions \(\varOmega\to\mathbb{R}^{n}\) whose Euclidean norm is integrable with \(p\)-power, and \(W^{k,p}(\varOmega;\mathbb{R}^{n})\) for functions from \(L^{p}(\varOmega;\mathbb{R}^{n})\) all of whose derivatives up to the order \(k\) have their Euclidean norm integrable with \(p\)-power. We also write briefly \(H^{k}=W^{k,2}\). The notation \(p^{*}\) will denote the exponent from the embedding \(W^{1,p}(\varOmega)\subset L^{p^{*}}(\varOmega)\), i.e. \(p^{*}=dp/(d{-}p)\) for \(p<d\), while \(p^{*}\geq 1\) is arbitrary for \(p=d\) and \(p^{*}=+\infty\) for \(p>d\). Moreover, for a Banach space \(X\) and for \(I=[0,T]\), we will use the notation \(L^{p}(I;X)\) for the Bochner space of Bochner measurable functions \(I\to X\) whose norm is in \(L^{p}(I)\), while \(W^{1,p}(I;X)\) stands for functions \(I\to X\) whose distributional derivative is in \(L^{p}(I;X)\). Also, \(C(\cdot)\) and \(C^{1}(\cdot)\) will denote spaces of continuous and continuously differentiable functions. Moreover, as usual, we will use \(C\) for a generic constant which may vary from estimate to estimate.
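As a purely illustrative aside (not part of the analysis), the convention for the critical exponent \(p^{*}\) recalled above can be encoded as a small helper; the function name is ours.

```python
def sobolev_conjugate(p: float, d: int) -> float:
    """Exponent p* of the embedding W^{1,p}(Omega) into L^{p*}(Omega), Omega in R^d.

    Follows the convention above: p* = dp/(d-p) for p < d, any (arbitrarily large)
    exponent for p = d, and p* = +infinity for p > d; both latter cases are
    reported here as infinity for simplicity.
    """
    if p < d:
        return d * p / (d - p)
    return float("inf")

# e.g. for d = 3 one gets 2* = 6, the exponent entering the growth condition (3.5b)
assert sobolev_conjugate(2, 3) == 6.0
```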
We will consider an initial-value problem, prescribing the initial conditions \[\varrho(0)=\varrho_{0}\,,\hskip 14.226378pt\boldsymbol{v}(0)=\boldsymbol{v}_{0 }\,,\hskip 14.226378pt\boldsymbol{F}(0)=\boldsymbol{F}_{0}\,,\hskip 14.226378pt \boldsymbol{m}(0)=\boldsymbol{m}_{0}\,,\hskip 14.226378pt\text{and} \hskip 14.226378pt\theta(0)=\theta_{0}\,; \tag{3.1}\] here and in what follows, we will use the short-hand notation as \([\varrho(t)](\boldsymbol{x})=\varrho(t,\boldsymbol{x})\). Referring to the referential mass density \(\varrho\), the initial conditions should satisfy \(\varrho_{0}=\varrho/\text{det}\,\boldsymbol{F}_{0}\). To devise a weak formulation of the initial-boundary-value problem (2.32) and (3.1) for the system (2.30), we use the by-part integration in time and the Green formula for the inertial force. The nonsmoothness of \(\text{Dir}(\cdot)\) applied on \(\boldsymbol{\dot{m}}\) leads to a variational inequality, arising by a standard definition of the convex subdifferential of the convex potential of the monotone set-valued mapping \(\boldsymbol{r}\mapsto\tau\boldsymbol{r}+h_{\text{\tiny C}}(\boldsymbol{F}, \theta)\text{Dir}(\boldsymbol{r})\), let us denote it as \(D(\boldsymbol{F},\theta;\boldsymbol{r})=\tau|\boldsymbol{r}|^{2}/2+h_{\text{ \tiny C}}(\boldsymbol{F},\theta)|\boldsymbol{r}|\). Then (2.30d) has the form \(\partial_{\,\boldsymbol{\dot{m}}}D(\boldsymbol{F},\theta;\boldsymbol{\dot{m} })\ni\mu_{0}\boldsymbol{h}_{\text{\tiny ext}}+\mu_{0}\nabla u-\boldsymbol{t}+ \boldsymbol{m}{\times}\boldsymbol{\dot{m}}/\gamma(\boldsymbol{F},\boldsymbol{ m},\theta)\) with \(\boldsymbol{t}\) from (2.18c), from which we obtain a variational inequality by taking into account the standard definition of the (partial) convex subdifferential \(\partial_{\,\boldsymbol{\dot{m}}}\). This involves \(\boldsymbol{t}{\cdot}\boldsymbol{\dot{m}}\) which contains the product of \(\text{div}(\mathsf{x}^{\prime}_{\nabla\boldsymbol{m}}(\boldsymbol{F},\nabla \boldsymbol{m})/\text{det}\,\boldsymbol{F})\) with \(\boldsymbol{\dot{m}}\). This product would cause troubles in convergence of approximate solutions, so we will better avoid it in the weak formulation by a substitution using (2.45) integrated over \(I\), i.e. \[\int_{0}^{T}\!\!\!\int_{\varOmega}\!\!\text{div}\Big{(}\frac{ \mathsf{\kappa}(\boldsymbol{F})\nabla\boldsymbol{m}}{\det\,\boldsymbol{F}} \Big{)}{\cdot}\boldsymbol{\dot{m}}\,\text{d}\boldsymbol{x}\text{d}t= \int_{\varOmega}\!\frac{\mathsf{\kappa}(\boldsymbol{F}_{0})| \nabla\boldsymbol{m}_{0}|^{2}}{2\det\boldsymbol{F}_{0}}-\frac{\mathsf{\kappa} (\boldsymbol{F}(\boldsymbol{F}))|\nabla\boldsymbol{m}(\boldsymbol{T})|^{2}}{ 2\det\boldsymbol{F}(\boldsymbol{T})}\,\text{d}\boldsymbol{x}\] \[+\int_{0}^{T}\!\!\!\int_{\varOmega}\!\Big{(}\frac{\mathsf{\kappa} ^{\prime}(\boldsymbol{F})|\nabla\boldsymbol{m}|^{2}\boldsymbol{F}^{\top}}{2 \det\boldsymbol{F}}-\boldsymbol{K}\Big{)}{\cdot}\boldsymbol{e}(\boldsymbol{v}) \,\text{d}\boldsymbol{x}\text{d}t\,. \tag{3.2}\] Also we use the orthogonality \((\mathbf{m}{\times}\!\!\mathbf{m}){\cdot}\!\!\mathbf{\widetilde{\mathbf{m}}}=0\), which eliminates this (otherwise not integrable) term and which altogether gives the variational inequality (3.4b) below. 
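To make the role of the set-valued term concrete: the resolvent of the monotone mapping \(\boldsymbol{r}\mapsto\tau\boldsymbol{r}+h_{\text{\tiny C}}(\boldsymbol{F},\theta)\text{Dir}(\boldsymbol{r})\) is the classical vector soft-thresholding (shrinkage) operation. The following minimal sketch illustrates this under the simplifying assumption of frozen scalar coefficients \(\tau\) and \(h_{\text{\tiny C}}\); it only illustrates the subdifferential calculus behind \(D(\boldsymbol{F},\theta;\cdot)\), not the approximation scheme used in the proof below, and the function name is ours.

```python
import numpy as np

def resolvent_D(f: np.ndarray, tau: float, h_c: float) -> np.ndarray:
    """Solve the inclusion  tau*r + h_c*Dir(r)  contains  f,
    i.e. r = argmin_r  tau*|r|^2/2 + h_c*|r| - f.r.

    This is vector soft-thresholding: r = 0 if |f| <= h_c,
    otherwise r = (1 - h_c/|f|) * f / tau.
    """
    norm_f = np.linalg.norm(f)
    if norm_f <= h_c:
        return np.zeros_like(f)
    return (1.0 - h_c / norm_f) * f / tau

# a posteriori check of the inclusion for a nonzero output:
f = np.array([3.0, -4.0, 0.0])            # |f| = 5
r = resolvent_D(f, tau=2.0, h_c=1.0)
assert np.allclose(2.0 * r + r / np.linalg.norm(r), f)
```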
**Definition 3.1** (Weak solutions to (2.30)).: _For \(p\in[1,\infty)\), a six-tuple \((\varrho,\mathbf{v},\mathbf{F},\mathbf{m},u,\theta)\) with \(\varrho\in H^{1}(I{\times}\Omega)\), \(\mathbf{v}\in L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\), \(\mathbf{F}\in H^{1}(I{\times}\Omega;\mathbb{R}^{d\times d})\), \(\mathbf{m}\in H^{1}(I;L^{2}(\Omega;\mathbb{R}^{d}))\cap L^{\infty}(I;H^{1}(\Omega; \mathbb{R}^{d}))\), \(u\in L^{\infty}(I;H^{1}(\mathbb{R}^{d}))\), and \(\theta\in L^{1}(I;W^{1,1}(\Omega))\) will be called a weak solution to the system (2.30) with the boundary conditions (2.32) and the initial condition (3.1) if_ \[\frac{\uppsi_{\mathbf{F}}^{\prime}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{ \top}}{\det\mathbf{F}}\in L^{1}(I{\times}\Omega;\mathbb{R}_{\mathrm{sym}}^{d\times d })\,, \tag{3.3a}\] \[\frac{\zeta_{\mathbf{F}}^{\prime}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}} {\det\mathbf{F}}\in L^{q^{\prime}}(I{\times}\Omega;\mathbb{R}_{\mathrm{sym}}^{d \times d})\,,\] (3.3b) \[\mathrm{div}\Big{(}\frac{\upkappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{\det \mathbf{F}}\Big{)}\in L^{2}(I{\times}\Omega;\mathbb{R}^{d}) \tag{3.3c}\] _with \(\det\mathbf{F}>0\) a.e. such that the integral identities_ \[\int_{0}^{T}\!\!\!\int_{\Omega}\left(\Big{(}\frac{\uppsi_{\mathbf{F}} ^{\prime}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}}{\det\mathbf{F}}+\frac{|\nabla\mathbf{m}|^ {2}\upkappa^{\prime}(\mathbf{F})\mathbf{F}^{\top}}{2\det\mathbf{F}}+\mathbf{K}+\nu_{1}|\mathbf{e}( \mathbf{v})|^{p-2}\mathbf{e}(\mathbf{v})-\varrho\mathbf{v}{\otimes}\mathbf{v}\Big{)}{:}\mathbf{e}( \widetilde{\mathbf{v}})\right.\\ +\mathbf{S}{:}\mathrm{skw}(\nabla\widetilde{\mathbf{v}})+\mu_{0}(\mathbf{h}{ \cdot}\mathbf{m})\mathrm{div}\widetilde{\mathbf{v}}-\mu_{0}(\nabla\mathbf{h})^{\top}{:}( \mathbf{m}{\otimes}\widetilde{\mathbf{v}})+\big{(}\nu_{2}|\nabla^{2}\mathbf{v}|^{p-2}\nabla ^{2}\mathbf{v}{+}\mathscr{S}\big{)}{:}\nabla^{2}\widetilde{\mathbf{v}}\\ -\varrho\mathbf{v}{\cdot}\frac{\partial\widetilde{\mathbf{v}}}{\partial t }\right)\mathrm{d}\mathbf{x}\mathrm{d}t{=}\!\int_{0}^{T}\!\!\!\int_{\Omega} \varrho\mathbf{g}{\cdot}\widetilde{\mathbf{v}}\,\mathrm{d}\mathbf{x}\mathrm{d}t{+}\!\int_{ 0}^{T}\!\!\!\int_{\Gamma}(\mathbf{k}{-}\nu_{\sharp}|\mathbf{v}|^{p-2}\mathbf{v}){\cdot} \widetilde{\mathbf{v}}\,\mathrm{d}S\mathrm{d}t{+}\!\int_{\Omega}\!\!\varrho_{0}\bm {v}_{0}{\cdot}\widetilde{\mathbf{v}}(0)\,\mathrm{d}\mathbf{x}\] (3.4a) _with \[\mathbf{h}=\mathbf{h}_{\mathrm{ext}}{+}\nabla u\], \[\mathbf{K}\], \[\mathbf{S}\], and \[\mathscr{S}\] from ( 2.30b ) holds for any \[\widetilde{\mathbf{v}}\] smooth with \[\widetilde{\mathbf{v}}{\cdot}\mathbf{n}=\mathbf{0}\] and \[\widetilde{\mathbf{v}}(T)=0\], and \[\int_{0}^{T}\!\!\!\int_{\Omega}\!\!\left(\frac{\tau}{2}| \widetilde{\mathbf{r}}|^{2}+h_{\mathrm{c}}(\mathbf{F},\theta)|\widetilde{\mathbf{r}}|- \mathrm{div}\frac{\upkappa(\mathbf{F})\nabla\mathbf{m}}{\det\mathbf{F}}{\cdot}\mathrm{skw }(\nabla\mathbf{v})\mathbf{m}-\Big{(}\frac{\upkappa^{\prime}(\mathbf{F})|\nabla\mathbf{m}|^{2} \mathbf{F}^{\top}}{2\det\mathbf{F}}-\mathbf{K}\Big{)}{:}\mathbf{e}(\mathbf{v})\right.\\ +\frac{\upkappa(\mathbf{F})\nabla\mathbf{m}}{\det\mathbf{F}}{\cdot}\nabla \widetilde{\mathbf{r}}-\Big{(}\mu_{0}\mathbf{h}-\frac{\uppsi_{\mathbf{m}}^{\prime}(\mathbf{F}, \mathbf{m},\theta)}{\det\mathbf{F}}{\Big{)}{\cdot}(\widetilde{\mathbf{r}}-\hat{\mathbf{m}}) +\frac{\mathbf{m}{\times}\hat{\mathbf{m}}}{\gamma(\mathbf{F},\mathbf{m},\theta)}{\cdot} \widetilde{\mathbf{r}}\Big{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t\\ 
\geq\int_{0}^{T}\!\!\!\int_{\Omega}\!\frac{\tau}{2}|\mathbf{\widetilde {m}}|^{2}+h_{\mathrm{c}}(\mathbf{F},\theta)|\mathbf{\widetilde{m}}|\,\mathrm{d}\mathbf{x} \mathrm{d}t{+}\!\int_{\Omega}\!\frac{\upkappa(\mathbf{F}(T))|\nabla\mathbf{m}(T)|^{2}}{2 \det\mathbf{F}(T)}-\frac{\upkappa(\mathbf{F}_{0})|\nabla\mathbf{m}_{0}|^{2}}{2\det\mathbf{F} _{0}}\mathrm{d}\mathbf{x} \tag{3.4b}\] _for any \(\widetilde{\mathbf{r}}\in L^{2}(I;H^{1}(\Omega;\mathbb{R}^{d}))\), and further_ \[\int_{\mathbb{R}^{3}}\!\!\nabla u(t){\cdot}\nabla\widetilde{u}\, \mathrm{d}\mathbf{x}=\int_{\Omega}\!\mathbf{m}(t){\cdot}\nabla\widetilde{u}\,\mathrm{ d}\mathbf{x} \tag{3.4c}\] _holds for any \(\widetilde{u}\in H^{1}(\mathbb{R}^{d})\) and for a.a. \(t\in I\), and_ \[\int_{0}^{T}\!\!\!\int_{\Omega}\!\!\left(\!\omega(\mathbf{F},\mathbf{m}, \theta)\frac{\partial\widetilde{\theta}}{\partial t}+\big{(}\omega(\mathbf{F},\mathbf{m}, \theta)\mathbf{v}{-}\up _with \(\xi(\boldsymbol{F},\theta;\cdot,\cdot)\) and \(\omega(\cdot,\cdot,\cdot)\) from (2.30f) holds for any \(\widetilde{\theta}\) smooth with \(\widetilde{\theta}(T)=0\), and the equations (2.30a) and (2.30c) hold a.e. on \(I{\times}\Omega\) with \(\boldsymbol{v}(0)=\boldsymbol{v}_{0}\) and \(\boldsymbol{F}(0)=\boldsymbol{F}_{0}\) a.e. on \(\Omega\), and also \(\boldsymbol{m}(0)=\boldsymbol{m}_{0}\) is to hold a.e. on \(\Omega\)._ Before stating the main analytical result, let us summarize the data qualification which will be fitted to the motivating Example 2.5. For some \(\delta>0\) and \(s>0\), we assume: \[\Omega\ \ \mbox{a smooth bounded domain of}\ \mathbb{R}^{d},\ \ d=2,3, \tag{3.5a}\] \[\varphi\in C^{1}(\mathrm{GL}^{+}(d){\times}\mathbb{R}^{d}),\ \forall\boldsymbol{F}\in\mathrm{GL}^{+}(d):\quad\varphi_{\boldsymbol{F}}( \boldsymbol{F},\boldsymbol{m})\geq\delta\big{(}1+|\boldsymbol{m}|^{s}\det \boldsymbol{F}\big{)},\] \[\exists\,C\in C(\mathrm{GL}^{+}(d)),\ \forall\boldsymbol{F}\in \mathrm{GL}^{+}(d),\ \boldsymbol{m}\in\mathbb{R}^{d}:\ |\varphi_{\boldsymbol{m}}^{ \prime}(\boldsymbol{F},\boldsymbol{m})|\leq C(\boldsymbol{F})(1{+}| \boldsymbol{m}|^{1+2^{*}/2}),\] (3.5b) \[\zeta\in C^{2}(\mathrm{GL}^{+}(d){\times}\mathbb{R}^{d}{\times} \mathbb{R}^{+}),\ \forall(\boldsymbol{F},\boldsymbol{m},\theta)\in\mathrm{GL}^{+}(d){\times} \mathbb{R}^{d}{\times}\mathbb{R}^{+}:\ \ \ \zeta_{\theta\theta}^{\prime }(\boldsymbol{F},\boldsymbol{m},\theta)\leq\frac{-\delta}{\det\boldsymbol{F}}\,,\] \[\Big{|}\frac{\zeta_{\boldsymbol{F}}^{\prime}(\boldsymbol{F}, \boldsymbol{m},\theta)\boldsymbol{F}^{\top}}{\det\boldsymbol{F}}\Big{|}+ \Big{|}\frac{\zeta_{\boldsymbol{m}}^{\prime}(\boldsymbol{F},\boldsymbol{m}, \theta)}{\det\boldsymbol{F}}\Big{|}^{2}\leq C\Big{(}1{+}\frac{\varphi_{ \boldsymbol{G}}(\boldsymbol{F},\boldsymbol{m}){+}\theta}{\det\boldsymbol{F}} \Big{)},\] (3.5c) \[\forall K{\subset}\,\mathrm{GL}^{+}(d)\ \mbox{compact}\ \,\exists\,C_{K}<\infty\ \forall( \boldsymbol{F},\boldsymbol{m},\theta)\in K{\times}\mathbb{R}^{d}{\times} \mathbb{R}^{+}:\ \ |\omega_{\boldsymbol{F}}^{\prime}(\boldsymbol{F}, \boldsymbol{m},\theta)|\leq C_{K}(1{+}\theta),\] \[\omega_{\theta}^{\prime}(\boldsymbol{F},\boldsymbol{m},\theta)+| \omega_{\boldsymbol{m}}^{\prime}(\boldsymbol{F},\boldsymbol{m},\theta)|+| \omega_{\boldsymbol{F}\theta}^{\prime\prime}(\boldsymbol{F},\boldsymbol{m}, \theta)|+|\omega_{\boldsymbol{m}\theta}^{\prime\prime}(\boldsymbol{F}, \boldsymbol{m},\theta)|\leq C_{K},\] (3.5d) \[\nu_{1}>0,\ \nu_{2}>0,\ \nu_{b}>0,\] (3.5e) \[\gamma\in C(\mathrm{GL}^{+}(d){\times}\mathbb{R}^{d}{\times} 
\mathbb{R}^{+})\ \ \mbox{positive},\ \ \forall K{\subset}\,\mathrm{GL}^{+}(d)\ \mbox{compact}\] \[\exists\,C_{K}<\infty\ \ \forall(\boldsymbol{F}, \boldsymbol{m},\theta)\in K{\times}\mathbb{R}^{d}{\times}\mathbb{R}^{+}:\ \ \ \frac{|\boldsymbol{m}|}{\gamma(\boldsymbol{F},\boldsymbol{m},\theta)}\leq C_{K}\,,\] (3.5f) \[\kappa\in C^{1}(\mathrm{GL}^{+}(d))\,,\quad\inf_{\boldsymbol{F} \in\mathrm{GL}^{+}(d)}\kappa(\boldsymbol{F})>0\,,\] (3.5g) \[\mathcal{K}\in C(\mathrm{GL}^{+}(d){\times}\mathbb{R}^{+})\ \ \mbox{bounded},\quad\inf_{\boldsymbol{F}\in\mathrm{GL}^{+}(d),\theta\in \mathbb{R}^{+}}\mathcal{K}(\boldsymbol{F},\theta)>0\,,\] (3.5h) \[h:I{\times}I{\times}\mathbb{R}^{+}\to\mathbb{R}\ \ \mbox{ Caratheodory function},\quad 0\leq\theta\,h(t,\boldsymbol{x},\theta)\leq C(1{+}\theta^{2})\quad\mbox{and}\] \[h(t,\boldsymbol{x},\theta)\leq h_{\max}(t,\boldsymbol{x})\quad \mbox{for some}\ \ h_{\max}\in L^{1}(I{\times}I)\,,\] (3.5i) \[\boldsymbol{g}\in L^{1}(I;L^{\infty}(\Omega;\mathbb{R}^{d}))\,,\ \ \boldsymbol{h}_{\mathrm{ext}}\in W^{1,1}(I;L^{s^{\prime}}(\Omega;\mathbb{R}^{d}) )\,,\ \ \boldsymbol{k}\in L^{2}(I{\times}I;\mathbb{R}^{d}),\ \ \ \boldsymbol{k}\cdot\boldsymbol{n}=0\,,\] (3.5j) \[\boldsymbol{v}_{0}\in L^{2}(\Omega;\mathbb{R}^{d})\,,\quad \boldsymbol{F}_{0}\in W^{1,r}(\Omega;\mathbb{R}^{d\times d})\,,\ \ r>d\,,\quad\mbox{ with }\quad\min_{\widetilde{\Omega}}\det \boldsymbol{F}_{0}>0\,,\] (3.5k) \[\boldsymbol{\rho}\in L^{\infty}(\Omega)\cap W^{1,r}(\Omega)\,,\ \ r>d\,,\quad\mbox{with}\quad\min_{\widetilde{\Omega}}\!\rho>0\,,\] (3.5l) \[\boldsymbol{m}_{0}\in H^{1}(\Omega;\mathbb{R}^{d})\,,\quad\theta _{0}\in L^{1}(\Omega),\quad\theta_{0}\geq 0\ \ \mbox{a.e. on}\ \ \Omega\,, \tag{3.5m}\] where \(\omega\) in (3.5d) is from (2.30f). Let us note that the first condition in (3.5c) is just a condition on the heat capacity \(c=c(\boldsymbol{F},\boldsymbol{m},\theta)=\omega_{\theta}^{\prime}(\boldsymbol{F},\boldsymbol{m},\theta)\) and implies the coercivity \(\omega(\boldsymbol{F},\boldsymbol{m},\theta)\geq\delta\theta/\mathrm{det} \,\boldsymbol{F}\) since \(\omega(\boldsymbol{F},\boldsymbol{m},0)=0\). One should note that the referential stored energy (in contrast to the actual stored energy) enters the model only through its derivatives and can be modified without loss of generality by adding a constant, so that (3.5b) could be understood simply as coercivity \(\varphi(\boldsymbol{F},\boldsymbol{m})/\mathrm{det}\,\boldsymbol{F}\geq\delta| \boldsymbol{m}|^{s}\). Independently, the natural blow-up under compression, i.e. \(\varphi(\boldsymbol{F},\boldsymbol{m})\to\infty\) if \(\det\boldsymbol{F}\to 0{+}\), is allowed in (3.5b). The condition (3.5i) is well fitted with the standard situation that the boundary flux is \(h(\theta)=f(\theta_{\mathrm{ext}})-f(\theta)\) with an increasing function \(f\) and with \(\theta_{\mathrm{ext}}\geq 0\) a prescribed external temperature, so that one can choose \(h_{\max}=f(\theta_{\mathrm{ext}})\) provided we prove that \(\theta\geq 0\). Also the condition \(\theta\,h(t,\boldsymbol{x},\theta)\leq C(1{+}\theta^{2})\) is well compatible with this ansatz provided \(f(0)=0\) and \(f(\theta_{\mathrm{ext}})\in L^{2}(I{\times}I)\). **Theorem 3.2** (Existence and regularity of weak solutions).: _Let \(p>d\) and \(s\geq 2p/(p{-}2)\), and the assumptions (2.19) and (3.5) hold. 
Then:_

1. _There exists a weak solution_ \((\varrho,\mathbf{v},\mathbf{F},\mathbf{m},u,\theta)\) _according to Definition_ 3.1 _with a non-negative mass density_ \(\varrho\in L^{\infty}(I;W^{1,r}(\Omega))\) _such that_ \(\frac{\partial}{\partial t}\varrho\in L^{\sigma}(I;L^{r\sigma/(r+\sigma)}(\Omega))\) _with_ \(3\leq\sigma<p(pd+4p-2d)/(4p-2d)\)_, and a non-negative temperature_ \(\theta\in L^{\infty}(I;L^{1}(\Omega))\cap L^{\mu}(I;W^{1,\mu}(\Omega))\) _with_ \(1\leq\mu<(d+2)/(d+1)\)_, and further_ \(\frac{\partial}{\partial t}\mathbf{F}\in L^{p}(I;L^{r}(\Omega;\mathbb{R}^{d\times d}))\) _and_ \(\nabla\mathbf{F}\in L^{\infty}(I;L^{r}(\Omega;\mathbb{R}^{d\times d\times d}))\)_, and_ \(\frac{\partial}{\partial t}\mathbf{m}\in L^{2}(I\times\Omega;\mathbb{R}^{d})\) _and_ \(\nabla\mathbf{m}\in L^{2}(I\times\Omega;\mathbb{R}^{d\times d})\)_, and_ \(\nabla^{2}\mathbf{m}\in L^{2}(I\times\Omega;\mathbb{R}^{d\times d\times d})\)_._

2. _Moreover, this solution complies with energetics in the sense that the energy dissipation balance (2.49) as well as the total energy balance (2.50) integrated over the time interval \([0,t]\) with the initial conditions (3.1) hold._

### Some auxiliary results and formal a-priori estimates

Let us first formulate two auxiliary assertions:

**Lemma 3.3** (See [49]).: _Given \(\mathbf{v}\in L^{1}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\) with \(p>d\) and \(\mathbf{v}\cdot\mathbf{n}=0\) on \(I\times\Gamma\) and \(\varrho_{0}\in W^{1,r}(\Omega)\), (2.30a) has a unique weak solution \(\varrho\in C_{\rm w}(I;W^{1,r}(\Omega))\cap W^{1,1}(I;L^{r}(\Omega))\) which satisfies it a.e. on \(I\times\Omega\), and the estimate_ \[\left\|\varrho\right\|_{L^{\infty}(I;W^{1,r}(\Omega))\cap W^{1,1}(I;L^{r}(\Omega))}\leq\mathfrak{C}\Big{(}\|\nabla\mathbf{v}\|_{L^{1}(I;W^{1,p}(\Omega;\mathbb{R}^{d\times d}))}\,,\,\|\varrho_{0}\|_{W^{1,r}(\Omega)}\Big{)} \tag{3.6}\] _holds with some \(\mathfrak{C}\in C(\mathbb{R}^{2})\). Moreover, the mapping_ \[\mathbf{v}\mapsto\varrho:L^{1}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\to L^{\infty}(I;W^{1,r}(\Omega)) \tag{3.7}\] _is (weak,weak*)-continuous. The analogous assertion holds for (2.10), assuming \(1/\det\mathbf{F}_{0}\in W^{1,r}(\Omega)\), and for (2.13), assuming \(1/\varrho_{0}\in W^{1,r}(\Omega)\). 
Eventually, it holds \(\mathbb{R}^{d\times d}\)-valued also for (2.30c), assuming \(\mathbf{F}_{0}\in W^{1,r}(\Omega;\mathbb{R}^{d\times d})\)._ For the approximation method in the proof below, we will still need a modification of Lemma 3.3 for a non-homogeneous evolution-and-transport equation (3.25e), whose proof is a straightforward modification (partly simplification) of [49, Sect.4]: **Lemma 3.4**.: _Given \(\mathbf{v}\in L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\) with \(p>d\) and \(\mathbf{r}\in L^{1}(I;L^{2}(\Omega;\mathbb{R}^{d}))\), the equation \(\frac{\partial}{\partial t}\mathbf{m}+(\mathbf{v}\cdot\nabla)\mathbf{m}-\mathrm{skw}( \nabla\mathbf{v})\mathbf{m}=\mathbf{r}\) with the initial condition \(\mathbf{m}_{0}\in L^{2}(\Omega;\mathbb{R}^{d})\) and has a unique weak solution \(\mathbf{m}\in C_{\rm w}(I;L^{2}(\Omega;\mathbb{R}^{d}))\cap W^{1,1}(I;H^{1}( \Omega;\mathbb{R}^{d})^{*})\) and also the estimate holds:_ \[\left\|\mathbf{m}\right\|_{L^{\infty}(I;L^{2}(\Omega))\cap W^{1,1}(I;H^{1}(\Omega; \mathbb{R}^{d})^{*})}\leq\mathfrak{C}\Big{(}\|\nabla\mathbf{v}\|_{L^{1}(I;W^{1,p}( \Omega;\mathbb{R}^{d\times d}))}\,,\,\|\mathbf{m}_{0}\|_{L^{2}(\Omega;\mathbb{R}^{ d})}\,,\,\|\mathbf{r}\|_{L^{1}(I;L^{2}(\Omega;\mathbb{R}^{d}))}\Big{)} \tag{3.8}\] _holds with some \(\mathfrak{C}\in C(\mathbb{R}^{3})\). Moreover, the mapping_ \[(\mathbf{v},\mathbf{r})\mapsto\mathbf{m}:L^{1}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\times L^{ 1}(I;L^{2}(\Omega;\mathbb{R}^{d}))\to L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{ d})) \tag{3.9}\] _is (weak,weak*)-continuous._ Formally, the assumptions (3.5) yield some a-priori bounds which can be obtained from the total energy balance (2.50) and the mechanical energy-dissipation balance (2.49) for any sufficiently regular solution \((\varrho,\boldsymbol{v},\boldsymbol{F},\boldsymbol{m},u,\theta)\) with \(\theta\geq 0\) a.e. in \(I{\times}\Omega\). Later, we will prove existence of such solutions, but unfortunately we are not able to claim that every weak solution has \(\theta\) non-negative. For the approximation method used in the proof below, we assume the data \(\boldsymbol{\psi}\), \(\mathscr{X}\), and \(h\) to be defined also for the negative temperature by extending them as \[\begin{split}&\boldsymbol{\psi}(\boldsymbol{F},\boldsymbol{m}, \nabla\boldsymbol{m},\theta):=\boldsymbol{\varphi}(\boldsymbol{F},\boldsymbol {m})+\theta\big{(}{\ln}(-\theta){-}1\big{)}+\kappa(\boldsymbol{F})|\nabla \boldsymbol{m}|^{2}/2\,,\\ &\mathscr{X}(\boldsymbol{F},\theta):=\mathscr{X}(\boldsymbol{F},-\theta)\,,\quad\text{ and }\quad h(t,\boldsymbol{x},\theta):=h(t,\boldsymbol{x},- \theta)\quad\text{ for }\quad\theta<0\end{split} \tag{3.10}\] with \(\boldsymbol{\varphi}\) and \(\zeta\) from the split (2.17). This definition makes \(\boldsymbol{\psi}:\text{GL}^{+}(d){\times}\mathbb{R}^{d}{\times}\mathbb{R}^{ d}{\times}\mathbb{R}\to\mathbb{R}\) continuous and implies that \(\omega(\boldsymbol{F},\boldsymbol{m},\cdot)\) as well as \(\zeta_{\boldsymbol{F}}(\boldsymbol{F},\boldsymbol{m},\cdot)\) and \(\zeta_{\boldsymbol{m}}(\boldsymbol{F},\boldsymbol{m},\cdot)\) continuous; note that \(\omega(\boldsymbol{F},\boldsymbol{m},\theta)=\theta/\text{det}\,\boldsymbol {F}\), \(\zeta^{\prime}_{\boldsymbol{F}}(\boldsymbol{F},\boldsymbol{m},\theta)= \boldsymbol{0}\), and \(\zeta^{\prime}_{\boldsymbol{m}}(\boldsymbol{F},\boldsymbol{m},\theta)= \boldsymbol{0}\) for \(\theta\) negative. First, we use the total energy balance (2.50) integrated over a time interval \([0,t]\). 
At this point, we must now assume (while being later proved at least for some solution) that \(\theta\geq 0\), and similarly we now assume \(\text{det}\,\boldsymbol{F}>0\). In particular, we have also \(\omega(\boldsymbol{F},\boldsymbol{m},\theta)\geq 0\) and thus we are "only" to estimate the right-hand side in (2.50) together with the Zeeman energy. For the bulk term \(\varrho\boldsymbol{g}{\cdot}\boldsymbol{v}\) and the boundary terms \(\boldsymbol{k}{\cdot}\boldsymbol{v}+h(\theta)\) we refer to [49]. The gravity force \(\varrho\boldsymbol{g}\) tested by the velocity \(\boldsymbol{v}\) can be estimated by the Holder/Young inequality as \[\int_{\Omega}\varrho\boldsymbol{g}{\cdot}\boldsymbol{v}\, \text{d}\boldsymbol{x} =\int_{\Omega}\sqrt{\frac{\boldsymbol{\rho}}{\text{det}\, \boldsymbol{F}}}\sqrt{\varrho}\boldsymbol{v}{\cdot}\boldsymbol{g}\,\text{d} \boldsymbol{x}\leq\Big{\|}\sqrt{\frac{\boldsymbol{\rho}}{\text{det}\, \boldsymbol{F}}}\Big{\|}_{L^{2}(\Omega)}\big{\|}\sqrt{\varrho}\boldsymbol{v} \big{\|}_{L^{2}(\Omega;\mathbb{R}^{d})}\big{\|}\boldsymbol{g}\big{\|}_{L^{ \infty}(\Omega;\mathbb{R}^{d})}\] \[=\big{\|}\boldsymbol{g}\big{\|}_{L^{\infty}(\Omega;\mathbb{R}^{d} )}\int_{\Omega}\frac{\boldsymbol{\rho}}{2\,\text{det}\,\boldsymbol{F}}+\frac{ \boldsymbol{\rho}}{2}|\boldsymbol{v}|^{2}\,\text{d}\boldsymbol{x}\] \[\leq\big{\|}\boldsymbol{g}\big{\|}_{L^{\infty}(\Omega;\mathbb{R} ^{d})}\bigg{(}\frac{\max\boldsymbol{\rho}(\overline{\Omega})}{2\inf\varphi( \text{GL}^{+}(d){\times}\mathbb{R}^{d})}\!\int_{\Omega}\frac{\boldsymbol{ \varphi}(\boldsymbol{F},\boldsymbol{m})}{\text{det}\,\boldsymbol{F}}\,\text{d} \boldsymbol{x}+\int_{\Omega}\frac{\boldsymbol{\rho}}{2}|\boldsymbol{v}|^{2} \,\text{d}\boldsymbol{x}\bigg{)}\,. \tag{3.11}\] The integral on the right-hand side of (3.11) can then be treated by the Gronwall lemma. In order to apply the Gronwall lemma one needs the qualification (3.5j) for \(\boldsymbol{g}\). The boundary terms in (2.50) can be estimated, at current time instant \(t\in I\), as \[\int_{\Gamma}\!\boldsymbol{k}{\cdot}\boldsymbol{v}+h(\theta)\, \text{d}S\leq\Big{(}\frac{2}{\nu_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[\leq C_{\mu_{0},\delta}\|\mathbf{h}_{\text{ext}}(t)\|_{L^{s^{\prime}}( \Omega;\mathbb{R}^{d})}^{s}+\frac{\delta}{2}\|\mathbf{m}(t)\|_{L^{s}(\Omega;\mathbb{ R}^{d})}^{s}\] \[\quad+\mu_{0}{\int_{0}^{t}}\left\|\frac{\partial\mathbf{h}_{\text{ ext}}}{\partial t}\right\|_{L^{s^{\prime}}(\Omega;\mathbb{R}^{d})}\!\left(1+\|\mathbf{m} \|_{L^{s}(\Omega;\mathbb{R}^{d})}^{s}\right)\text{d}t+\mu_{0}\|\mathbf{h}_{\text{ ext}}(0)\!\cdot\!\mathbf{m}_{0}\|_{L^{1}(\Omega)} \tag{3.13}\] with some \(C_{\mu_{0},\delta}\) depending on \(\mu_{0}\) and \(\delta\) chosen according to the assumption (3.5b). 
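For completeness, the weighted Young inequality behind (3.13) reads, with \(s^{\prime}=s/(s{-}1)\) the conjugate exponent and with one admissible (not optimized) choice of the constant,
\[\mu_{0}|\mathbf{h}_{\mathrm{ext}}|\,|\mathbf{m}|\leq\frac{\delta}{2}|\mathbf{m}|^{s}+C_{\mu_{0},\delta}|\mathbf{h}_{\mathrm{ext}}|^{s^{\prime}}\qquad\text{with}\qquad C_{\mu_{0},\delta}=\frac{\mu_{0}^{s^{\prime}}}{s^{\prime}}\Big{(}\frac{2}{\delta s}\Big{)}^{s^{\prime}/s};\]
integrating over \(\Omega\) and invoking the qualification (3.5j) of \(\mathbf{h}_{\mathrm{ext}}\) then essentially leads to the terms displayed in (3.13).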
This assumption is then to be exploited for the stored energy on the left-hand side of (2.50) and, together with the qualification (3.5j) of \(\mathbf{h}_{\text{ext}}\), used for the Gronwall inequality. As a result, since \(\det\mathbf{F}>0\), we obtain the (formal) a-priori estimates \[\left\|\sqrt{\varrho}\mathbf{v}\right\|_{L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d}))}\leq C\,, \tag{3.14a}\] \[\left\|\frac{\varphi(\mathbf{F},\mathbf{m})}{\det\mathbf{F}}\right\|_{L^{\infty}(I;L^{1}(\Omega))}\leq C\,, \tag{3.14b}\] \[\left\|\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{\det\mathbf{F}}\right\|_{L^{\infty}(I;L^{1}(\Omega))}\leq C\,, \tag{3.14c}\] \[\left\|\nabla u\right\|_{L^{\infty}(I;L^{2}(\mathbb{R}^{d};\mathbb{R}^{d}))}\leq C\,,\quad\text{and} \tag{3.14d}\] \[\left\|\frac{\theta}{\det\mathbf{F}}\right\|_{L^{\infty}(I;L^{1}(\Omega))}\leq C\,. \tag{3.14e}\] From (3.14b), using (3.5b), we also obtain \[\left\|\mathbf{m}\right\|_{L^{\infty}(I;L^{s}(\Omega;\mathbb{R}^{d}))}\leq C\,. \tag{3.14f}\] Now we come to (2.49); here we used the assumed frame indifference of \(\zeta(\cdot,\mathbf{m},\theta)\) so that \(\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}\) is symmetric and thus \(\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}{:}\nabla\mathbf{v}=\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}{:}\mathbf{e}(\mathbf{v})\), the contraction of a symmetric with a skew-symmetric tensor being zero. In particular, the exchange-energy bound (3.14c) combined with the \(L^{\infty}\)-bounds on \(\mathbf{F}\) and \(1/\!\det\mathbf{F}\) yields \[\|\nabla\mathbf{m}\|_{L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d\times d}))}\leq\Big{\|}\frac{\det\mathbf{F}}{\kappa(\mathbf{F})}\Big{\|}_{L^{\infty}(I\times\Omega)}^{1/2}\Big{\|}\frac{\kappa(\mathbf{F})|\nabla\mathbf{m}|^{2}}{\det\mathbf{F}}\Big{\|}_{L^{\infty}(I;L^{1}(\Omega))}^{1/2}\leq C\,.
\tag{3.19d}\] Furthermore, having \(\mathbf{\hat{m}}\) estimated in (3.18) and by using the calculus \(\text{div}(\mathsf{\kappa}(\mathbf{F})\nabla\mathbf{m}/\det\mathbf{F})\)\(=\)\(\mathsf{\kappa}(\mathbf{F})\Delta\mathbf{m}/\det\mathbf{F})\)\(+(\mathsf{\kappa}^{\prime}(\mathbf{F})/\text{det}\,\mathbf{F}-\mathsf{\kappa}(\mathbf{F}) \text{Cof}\mathbf{F}/\text{det}\,\mathbf{F}^{2})\)\(:\)\((\nabla\mathbf{F}\otimes\nabla\mathbf{m})\), we can exploit (2.30d) in the form \[\Delta\mathbf{m}\in\frac{\det\mathbf{F}}{\mathsf{\kappa}(\mathbf{F})}\bigg{(}\tau\mathbf{\hat {m}}+h_{\text{\tiny C}}(\mathbf{F},\theta)\text{Dir}(\mathbf{\hat{m}})-\frac{\mathbf{m} \times\mathbf{\hat{m}}}{\gamma(\mathbf{F},\mathbf{m},\theta)}-\mathbf{h}_{\text{ext}}-\nabla u\] \[+\frac{\varphi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m})+\bar{c}^{\prime}_{\mathbf{m}}(\mathbf{F}, \mathbf{m},\theta)}{\det\mathbf{F}}-\left(\frac{\kappa^{\prime}(\mathbf{F})}{\det\mathbf{F}}- \frac{\kappa(\mathbf{F})\text{Cof}\mathbf{F}}{\det\mathbf{F}^{2}}\right):\!(\nabla\mathbf{F} \otimes\nabla\mathbf{m})\right) \tag{3.20}\] to estimate \(\nabla^{2}\mathbf{m}\) by the \(H^{2}\)-regularity of the Laplacean with the homogeneous Neumann boundary conditions, as available on smooth or convex domains. Here we use (3.19a) with \(r>d\) and (3.19d), we have \(\nabla\mathbf{F}\otimes\nabla\mathbf{m}\) bounded in \(L^{\infty}(I;L^{2r/(r+2)}(\varOmega;\mathbb{R}^{d\times d\times d\times d}))\). By (3.5f) and by (3.18), we can still see that \(\mathbf{m}\times\mathbf{\dot{m}}/\gamma(\mathbf{F},\mathbf{m},\theta)\in L^{2}(I\!\times\! \varOmega;\mathbb{R}^{d\times d})\). By (3.14d), \(\nabla u|_{\varOmega}\in L^{2}(I\!\times\!\varOmega;\mathbb{R}^{d})\). Moreover, by (3.5b), \(\varphi^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m})\in L^{\infty}(I;L^{2^{*}2/(2^{*}+2)} (\varOmega;\mathbb{R}^{d}))\) and by (3.5c) \(\bar{c}^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)\in L^{\infty}(I;L^{2}( \varOmega;\mathbb{R}^{d}))\). By comparison and the mentioned \(H^{2}\)-regularity, from (3.20) we obtain \[\left\|\nabla^{2}\mathbf{m}\right\|_{L^{2}(I\times\varOmega;\mathbb{R}^{d\times d \times d})}\leq C\,. \tag{3.21}\] Noting that the embedding \(H^{1}(\varOmega)\subset L^{2}(\varOmega)\) is compact for \(r>d\), (3.21) gives always a certain additional information about \(\nabla\mathbf{m}\) in comparison with (3.19d). ### Proof of Theorem 3.2 For clarity, we will divide the proof into ten steps. The inertial term and the continuity equation (2.30a) are treated as in [49] and we thus sketch the proof in these aspects. Let us outline main technical difficulties: The time discretization (Rothe's method) standardly needs convexity of \(\varphi\) (which is not a realistic assumption in finite-strain mechanics) possibly weakened if there is some viscosity in \(\mathbf{F}\) (which is not directly considered here, however). Also the conformal space discretization (i.e. the Faedo-Galerkin method) is difficult since it cannot directly copy the energetics because the "nonlinear" test of (2.30c) by \([\varphi/\det]^{\prime}_{\mathbf{F}}(\mathbf{F})\) needed in (2.34) is problematic in this approximation as \([\varphi/\det]^{\prime}_{\mathbf{F}}(\mathbf{F})\) is not in the respective finite-dimensional space in general and similarly also the tests of (2.30a) by \(|\mathbf{v}|^{2}\) and of (2.30d) by \(\mathbf{\dot{m}}=\frac{\partial}{\partial t}\mathbf{m}+(\mathbf{v}\cdot\nabla)\mathbf{m}- \text{skw}(\nabla\mathbf{v})\mathbf{m}\) are problematic. _Step 1: a regularization_. 
Referring to the formal estimates (3.19a), we can choose \(\lambda>0\) so small that, for any possible sufficiently regular solution, it holds \[\det\mathbf{F}>\lambda\quad\text{ and }\quad|\mathbf{F}|<\frac{1}{\lambda}\quad\text{a.e. on }\,I\!\times\!\varOmega\,. \tag{3.22}\] We first regularize the stress \(\mathbf{T}\) and the other nonlinearities in (2.30) by considering a smooth cut-off \(\pi_{\lambda}\in C^{1}(\mathbb{R}^{d\times d})\) defined as \[\pi_{\lambda}(\mathbf{F}):=\begin{cases}\phantom{-}1&\text{for }\det\mathbf{F}\geq \lambda\text{ and }|\mathbf{F}|\leq 1/\lambda,\\ \phantom{-}0&\text{for }\det\mathbf{F}\leq\lambda/2\text{ or }|\mathbf{F}|\geq 2/ \lambda,\\ \left(\frac{3}{\lambda^{2}}\big{(}2\det\mathbf{F}-\lambda\big{)}^{2}-\frac{2}{ \lambda^{3}}\big{(}2\det\mathbf{F}-\lambda\big{)}^{3}\right)\times\\ \phantom{-}\times\left(3(\lambda|\mathbf{F}|-1)^{2}-2(\lambda|\mathbf{F}|-1)^{3}\, \right)&\text{otherwise}.\end{cases} \tag{3.23}\] Here \(|\cdot|\) stands for the Frobenius norm \(|\mathbf{F}|=(\sum_{i,j=1}^{d}F_{ij}^{2})^{1/2}\) for \(\mathbf{F}=[F_{ij}]\), which makes \(\pi_{\lambda}\) frame indifferent. Thus we can regularize in a smooth way the singular nonlinearity \(1/\det(\cdot)\) and also extend \(\kappa(\cdot)/\!\det(\cdot)\) and \(\mathcal{K}\) and also \(\omega\): \[\det_{\lambda}(\mathbf{F}):=\pi_{\lambda}(\mathbf{F})\det\mathbf{F}+1-\pi_{ \lambda}(\mathbf{F})\,, \tag{3.24a}\] \[\kappa_{\lambda}(\mathbf{F}):=\pi_{\lambda}(\mathbf{F})\kappa(\mathbf{F})+(1 {-}\pi_{\lambda}(\mathbf{F}))\det\mathbf{F}\,,\quad\text{and}\] (3.24b) \[\mathcal{K}_{\lambda}(\mathbf{F},\theta):=\pi_{\lambda}(\mathbf{F}) \mathcal{K}(\mathbf{F},\theta)+1-\pi_{\lambda}(\mathbf{F})\,. \tag{3.24c}\] Using the operator \(\Delta^{-1}\text{div}:L^{2}(\Omega;\mathbb{R}^{d})\to H^{1}(\mathbb{R}^{d})\) defined by \(u=[\Delta^{-1}\text{div}](\mathbf{m})\) as a unique weak solution to (2.1) with the "boundary" condition \(u(\infty)=0\), we can eliminate \(u\); actually, this is rather the scenario for \(d=3\) otherwise it is formally possible because only \(\nabla u\) but not \(u\) itself occurs in the system and its energetics. 
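A minimal numerical sketch of the cut-off (3.23) and of the regularized determinant (3.24a) may be helpful. Note that, in the transition region \(1/\lambda<|\boldsymbol{F}|<2/\lambda\), we use the decreasing branch of the cubic smoothstep, so that \(\pi_{\lambda}\) indeed equals 1 for \(|\boldsymbol{F}|\leq 1/\lambda\) and 0 for \(|\boldsymbol{F}|\geq 2/\lambda\) as required by the first two cases of (3.23); the function names are ours and the snippet is only illustrative.

```python
import numpy as np

def smoothstep(x: float) -> float:
    """C^1 cubic interpolation 3x^2 - 2x^3 with the argument clipped to [0, 1]."""
    x = min(max(x, 0.0), 1.0)
    return 3.0 * x**2 - 2.0 * x**3

def pi_lambda(F: np.ndarray, lam: float) -> float:
    """Frame-indifferent cut-off (3.23): depends on F only through det F and |F|."""
    detF = np.linalg.det(F)
    normF = np.linalg.norm(F)                    # Frobenius norm
    a = smoothstep((2.0 * detF - lam) / lam)     # 0 for det F <= lam/2, 1 for det F >= lam
    b = 1.0 - smoothstep(lam * normF - 1.0)      # 1 for |F| <= 1/lam, 0 for |F| >= 2/lam
    return a * b

def det_lambda(F: np.ndarray, lam: float) -> float:
    """Regularized determinant (3.24a): equals det F where pi_lambda = 1, else blends to 1."""
    p = pi_lambda(F, lam)
    return p * np.linalg.det(F) + 1.0 - p

lam = 0.1
assert pi_lambda(np.eye(3), lam) == 1.0          # "nice" deformation gradient
assert pi_lambda(0.01 * np.eye(3), lam) == 0.0   # severe compression: det F <= lam/2
assert det_lambda(0.01 * np.eye(3), lam) == 1.0  # the regularized determinant stays away from 0
```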
Altogether, for the above chosen \(\lambda\) and for any \(\varepsilon>0\), we consider the regularized system \[\frac{\partial\varrho}{\partial t}=-\,\text{div}(\varrho\mathbf{v})\,, \tag{3.25a}\] \[\frac{\partial}{\partial t}(\varrho\mathbf{v})=\text{div}\Big{(} \mathbf{T}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta){+}\mathbf{K}_{\lambda}(\mathbf{F},\nabla\mathbf{m}){+}\mathbf{S}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta){+}\nu_{ 1}|\mathbf{e}(\mathbf{v})|^{p-2}\mathbf{e}(\mathbf{v})-\varrho\mathbf{v}{\otimes}\mathbf{v}\] \[-\text{div}\big{(}\mathscr{H}{+}\mathscr{S}_{\lambda}(\mathbf{F},\bm {m},\nabla\mathbf{m})\big{)}\Big{)}+\mu_{0}(\nabla\mathbf{h})^{\top}\mathbf{m}-\mu_{0} \nabla(\mathbf{h}\cdot\mathbf{m})+\sqrt{\frac{\varrho\varrho}{\det_{\lambda}(\mathbf{F})}} \mathbf{g}\] \[\text{with}\ \ \mathbf{T}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta)=\Big{(} \frac{[\pi_{\lambda}\mathbf{v}]^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m})}{\det\mathbf{F}}+ \frac{\pi_{\lambda}(\mathbf{F})\zeta^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)}{(1{+ }\varepsilon|\theta|)\det\mathbf{F}}+\frac{|\nabla\mathbf{m}|^{2}\kappa^{\prime}_{ \lambda}(\mathbf{F})}{2\det\mathbf{F}}\Big{)}\mathbf{F}^{\top}\,,\] \[\mathbf{h}=\mathbf{h}_{\text{ext}}{+}\nabla\Delta^{-1}\text{div}(\mathbf{m}) \,,\quad\ \mathbf{K}_{\lambda}(\mathbf{F},\nabla\mathbf{m})=\frac{\kappa_{\lambda}(\mathbf{F})}{\det\mathbf{ F}}\nabla\mathbf{m}{\otimes}\nabla\mathbf{m}\,,\] \[\mathbf{S}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta)=\text{skw} \big{(}\big{(}\mu_{0}{h-}\widehat{\mathbf{t}}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m}, \theta)\big{)}{\otimes}\mathbf{m}\big{)}\,,\quad\mathscr{H}=\nu_{2}|\nabla^{2} \mathbf{v}|^{p-2}\nabla^{2}\mathbf{v}\,,\] \[\mathscr{S}_{\lambda}(\mathbf{F},\mathbf{m},\nabla\mathbf{m})=\frac{\kappa_{ \lambda}(\mathbf{F})}{\det\mathbf{F}}\text{Skw}(\nabla\mathbf{m}{\otimes}\mathbf{m})\,,\] (3.25b) \[\frac{\partial\mathbf{F}}{\partial t}=(\nabla\mathbf{v})\mathbf{F}-(\mathbf{v}{ \cdot}\nabla)\mathbf{F}\,,\] (3.25c) \[\tau\mathbf{r}+h_{\text{\tiny{C}}}(\mathbf{F},\theta)\text{Dir}(\mathbf{r})- \frac{\mathbf{m}{\times}\mathbf{r}}{\gamma(\mathbf{F},\mathbf{m},\theta)}\ni\mu_{0}\mathbf{h}- \widehat{\mathbf{t}}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta)-\text{div}\Big{(} \frac{\kappa_{\lambda}(\mathbf{F})\nabla\mathbf{m}}{\det\mathbf{F}}\Big{)}\] \[\text{with}\quad\widehat{\mathbf{t}}_{\lambda,\varepsilon}(\mathbf{F}, \mathbf{m},\theta)=\frac{\pi_{\lambda}(\mathbf{F})\varphi^{\prime}_{\mathbf{m}}(\mathbf{F},\bm {m})}{\det\mathbf{F}}+\frac{\pi_{\lambda}(\mathbf{F})\zeta^{\prime}_{\mathbf{m}}(\mathbf{F}, \mathbf{m},\theta)}{(1{+}\varepsilon|\theta|^{1/2})\det\mathbf{F}}\,,\] (3.25d) \[\frac{\partial\mathbf{m}}{\partial t}=\text{skw}(\nabla\mathbf{v})\mathbf{m}-( \mathbf{v}{\cdot}\nabla)\mathbf{m}+\mathbf{r}\,,\] (3.25e) \[\frac{\partial w}{\partial t}=\xi_{\varepsilon}(\mathbf{F},\theta;\bm {e}(\mathbf{v}),\nabla^{2}\mathbf{v},\mathbf{r})+\text{div}\big{(}\mathcal{K}(\mathbf{F}, \theta)\nabla\theta-w\mathbf{v}\big{)}+\frac{\pi_{\lambda}(\mathbf{F})\zeta^{\prime}_{ \mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\mathbf{F}^{\top}}{(1{+}\varepsilon|\theta|)\det\bm {F}}{\cdot}{\mathbf{e}}(\mathbf{v})\] \[+\frac{\pi_{\lambda}(\mathbf{F})\zeta^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m },\theta)}{(1{+}\varepsilon|\theta|^{1/2})\det\mathbf{F}}{\cdot}\big{(}\mathbf{r}{-} \text{skw}(\nabla\mathbf{v})\mathbf{m}\big{)}\qquad\text{with}\quad w=\omega(\mathbf{F}, \mathbf{m},\theta)\] 
\[\text{and}\ \ \ \xi_{\varepsilon}(\mathbf{F},\theta;\mathbf{e},\mathbf{G},\mathbf{r}):= \frac{\nu_{1}|\mathbf{e}|^{p}{+}\tau|\mathbf{r}|^{2}{+}h_{\text{\tiny{C}}}(\mathbf{F},\theta )|\mathbf{r}|{+}\nu_{2}|\mathbf{G}|^{p}}{1{+}\varepsilon|\mathbf{e}|^{p}{+}\varepsilon| \mathbf{G}|^{p}{+}\varepsilon|\mathbf{r}|^{2}}\,, \tag{3.25f}\] where \(\omega(\cdot,\cdot)\) is from (2.30f). We complete this system with the correspondingly regularized boundary conditions on \(I{\times}\Gamma\): \[\Big{[}\big{(}\mathbf{T}_{\lambda,\varepsilon}(\mathbf{F},\mathbf{m},\theta){ +}\mathbf{K}_{\lambda}(\mathbf{F},\nabla\mathbf{m}){+}\mathbf{S}_{\lambda,\varepsilon}(\mathbf{F}, \mathbf{m},\theta){+}\nu_{1}|\mathbf{e}(\mathbf{v})|^{p-2}\mathbf{e}(\mathbf{v})\] \[-\text{div}(\mathscr{H}{+}\mathscr{S}_{\lambda}(\mathbf{F},\mathbf{m}, \nabla\mathbf{m}))\big{)}\mathbf{n}{-}\text{div}_{\text{\tiny{S}}}\big{(}\mathscr{H} {\mathbf{n}}{+}\mathscr{S}_{\lambda}(\mathbf{F},\mathbf{m},\nabla\mathbf{m})\mathbf{n}\big{)}\Big{]}{ }_{\text{\tiny{T}}}{+}\nu_{5}|\mathbf{v}|^{p-2}\mathbf{v}=\mathbf{k}\,, \tag{3.26a}\] \[\boldsymbol{v}\cdot\boldsymbol{n}=0,\quad\quad\nabla^{2} \boldsymbol{v}\cdot(\boldsymbol{n}\otimes\boldsymbol{n})=\boldsymbol{0}\,,\quad \frac{\boldsymbol{\kappa}_{\lambda}(\boldsymbol{F})(\boldsymbol{n}\cdot \nabla)\boldsymbol{m}}{\det\boldsymbol{F}}=\boldsymbol{0}\,,\quad\text{ and}\] (3.26b) \[\times(\boldsymbol{F},\theta)\nabla\theta\cdot\boldsymbol{n}- \frac{\nu_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
\[\frac{\partial\mathbf{F}_{\varepsilon k}}{\partial t}=(\nabla\mathbf{v}_{ \varepsilon k})\mathbf{F}_{\varepsilon k}-(\mathbf{v}_{\varepsilon k}\!\cdot\!\nabla)\bm {F}_{\varepsilon k}\quad\text{ in the }L^{1}(I\!\times\!\Omega;\mathbb{R}^{d\times d})\text{- sense, and} \tag{3.27b}\] \[\frac{\partial\mathbf{m}_{\varepsilon k}}{\partial t}=\mathrm{skw}( \nabla\mathbf{v}_{\varepsilon k})\mathbf{m}_{\varepsilon k}-(\mathbf{v}_{\varepsilon k} \!\cdot\!\nabla)\mathbf{m}_{\varepsilon k}+\mathbf{r}_{\varepsilon k}\quad\text{ in the weak sense,} \tag{3.27c}\] relying on \(\varrho_{\varepsilon k}\in W^{1,1}(I\!\times\!\Omega)\) and \(\mathbf{F}_{\varepsilon k}\in W^{1,1}(I\!\times\!\Omega;\mathbb{R}^{d\times d})\) which will be indeed proved later, together with the following integral identities \[\int_{0}^{T}\!\!\!\int_{\Omega}\biggl{(}\Bigl{(}\mathbf{T}_{\lambda, \varepsilon}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\!+\!\mathbf{K}_{\lambda}(\mathbf{F}_{\varepsilon k},\nabla\mathbf{m}_{ \varepsilon k})\!-\!\varrho_{\varepsilon k}\mathbf{v}_{\varepsilon k}\!\otimes\! \mathbf{v}_{\varepsilon k}\!+\!\nu_{1}|\mathbf{e}(\mathbf{v}_{\varepsilon k})|^{p-2}\mathbf{e }(\mathbf{v}_{\varepsilon k})\Bigr{)}\!\cdot\!\mathbf{e}(\widetilde{\mathbf{v}})\\ +\mathbf{S}_{\lambda,\varepsilon}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k})\!\cdot\!\mathrm{skw}(\widetilde{\mathbf{v}} )-\mu_{0}\bigl{(}\nabla\mathbf{h}_{\varepsilon k}\bigr{)}\!\cdot\!(\mathbf{m}_{ \varepsilon k}\!\otimes\!\widetilde{\mathbf{v}})-\mu_{0}\mathbf{h}_{\varepsilon k}\! \cdot\!\mathbf{m}_{\varepsilon k}(\mathrm{div}\,\widetilde{\mathbf{v}})-\varrho_{ \varepsilon k}\mathbf{v}_{\varepsilon k}\!\cdot\!\frac{\partial\widetilde{\mathbf{v}} }{\partial t}\\ +\bigl{(}\nu_{2}|\nabla^{2}\mathbf{v}_{\varepsilon k}|^{p-2}\nabla^{2 }\mathbf{v}_{\varepsilon k}\!+\!\mathscr{S}_{\lambda}(\mathbf{F}_{\varepsilon k}, \mathbf{m}_{\varepsilon k},\nabla\mathbf{m}_{\varepsilon k})\bigr{)}\!\cdot\!\nabla^{2 }\widetilde{\mathbf{v}}\Bigr{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t=\!\int_{\Omega}\varrho _{0}\mathbf{v}_{0}\!\cdot\!\widetilde{\mathbf{v}}(0)\,\mathrm{d}\mathbf{x}\\ +\int_{0}^{T}\!\!\!\int_{\Omega}\sqrt{\frac{\uprho_{\varepsilon k }}{\updet_{\lambda}(\mathbf{F}_{\varepsilon k})}}\mathbf{g}\!\cdot\!\widetilde{\mathbf{v }}\,\mathrm{d}\mathbf{x}\mathrm{d}t+\!\int_{0}^{T}\!\!\!\int_{\Gamma}(\mathbf{k}-\nu_{ \flat}|\mathbf{v}_{\varepsilon k}|^{p-2}\mathbf{v}_{\varepsilon k})\!\cdot\!\widetilde{ \mathbf{v}}\,\mathrm{d}S\mathrm{d}t \tag{3.27d}\] with \(\mathbf{T}_{\lambda,\varepsilon}\), \(\mathbf{K}_{\lambda}\), \(\mathbf{S}_{\lambda,\varepsilon}\), and \(\mathscr{S}_{\lambda}\) from (3.25b) and \(\mathbf{h}_{\varepsilon k}=\mathbf{h}_{\mathrm{ext}}\!+\!\nabla^{2}\Delta^{-1}\mathrm{ div}(\mathbf{m}_{\varepsilon k})\) for any \(\widetilde{\mathbf{v}}\in L^{\infty}(I;V_{k})\) with \(\widetilde{\mathbf{v}}\!\cdot\!\mathbf{n}=0\) on \(I\!\times\!\Gamma\) and \(\widetilde{\mathbf{v}}(T)=\mathbf{0}\), and \[\int_{0}^{T}\!\!\!\int_{\Omega}\biggl{(}\frac{\tau}{2}|\widetilde{ \mathbf{r}}|^{2}+h_{\mbox{\tiny c}}(\mathbf{F}_{\varepsilon k},\theta_{\varepsilon k} )|\widetilde{\mathbf{r}}|-\mathrm{div}\frac{\upkappa_{\lambda}(\mathbf{F}_{\varepsilon k })\nabla\mathbf{m}_{\varepsilon k}}{\det\mathbf{F}_{\varepsilon k}}\!\cdot\!\mathrm{skw} (\nabla\mathbf{v}_{\varepsilon k})\mathbf{m}_{\varepsilon k}+\frac{\upkappa_{\lambda}( \mathbf{F}_{\varepsilon k})\nabla\mathbf{m}_{\varepsilon k}}{\det\mathbf{F}_{\varepsilon k 
}}\!\cdot\!\nabla\widetilde{\mathbf{r}}\\ -\bigl{(}\mu_{0}\mathbf{h}-\widehat{\mathbf{t}}_{\lambda,\varepsilon}( \mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\bigr{)} \!\cdot\!(\widetilde{\mathbf{r}}-\mathbf{r}_{\varepsilon k})+\frac{\mathbf{m}_{\varepsilon k }\!\times\!\mathbf{r}_{\varepsilon k}}{\upgamma(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k})}\!\cdot\!\widetilde{\mathbf{r}}+\Bigl{(}\mathbf{K}_ {\lambda}(\mathbf{F}_{\varepsilon k},\nabla\mathbf{m}_{\varepsilon k})\\ -\frac{\upkappa_{\lambda}^{\prime}(\mathbf{F}_{\varepsilon k})|\nabla \mathbf{m}_{\varepsilon k}|^{2}\mathbf{F}_{\varepsilon k}^{\top}}{2\det\mathbf{F}_{ \varepsilon k}}\!\biggr{)}\!\cdot\!\mathbf{e}(\mathbf{v}_{\varepsilon k})\biggr{)}\, \mathrm{d}\mathbf{x}\mathrm{d}t\geq\int_{0}^{T}\!\!\!\int_{\Omega}\!\frac{\tau}{2}| \mathbf{r}_{\varepsilon k}|^{2}+h_{\mbox{\tiny c}}(\mathbf{F},\theta)|\mathbf{r}_{ \varepsilon k}|\,\mathrm{d}\mathbf{x}\mathrm{d}t\\ +\!\int_{\Omega}\!\frac{\upkappa_{\lambda}(\mathbf{F}_{\varepsilon k }(T))|\nabla\mathbf{m}_{\varepsilon k}(T)|^{2}}{2\det\mathbf{F}_{\varepsilon k}(T)}- \frac{\upkappa_{\lambda}(\mathbf{F}_{0})|\nabla\mathbf{m}_{0}|^{2}}{2\det\mathbf{F}_{0}} \mathrm{d}\mathbf{x} \tag{3.27e}\] holding for any \(\widetilde{\mathbf{r}}\in Z_{k}^{d}\) and for a.a. \(t\in I\), and further \[\int_{0}^{T}\!\!\!\int_{\Omega}\bigg{(}w_{\varepsilon k}\frac{ \partial\widetilde{\theta}}{\partial t}+\bigl{(}w_{\varepsilon k}\mathbf{v}_{ \varepsilon k}\!-\!\mathscr{K}(\mathbf{F}_{\varepsilon k},\theta_{\varepsilon k}) \nabla\theta_{\varepsilon k}\bigr{)}\!\cdot\!\nabla\widetilde{\theta}+\xi_{ \varepsilon}(\mathbf{F}_{\varepsilon k},\theta_{\varepsilon k};\mathbf{e}(\mathbf{v}_{ \varepsilon k}),\nabla^{2}\mathbf{v}_{\varepsilon k},\mathbf{r}_{\varepsilon k}) \widetilde{\theta}\\ +\pi_{\lambda}(\mathbf{F}_{\varepsilon k})\Bigl{(}\frac{\upzeta_{ \mathbf{r}}^{\prime}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\mathbf{F}_{\varepsilon k}^{\top}}{(1\!+\!\varepsilon|\theta_{ \varepsilon k}|)\det\mathbf{F}_{\varepsilon k}}\!\cdot\!\mathbf{e}(\mathbf{v}_{\varepsilon k})+ \frac{\upzeta_{\mathbf{m}}^{\prime}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k}, \theta_{\varepsilon k})\!\cdot\!\bigl{(}\mathbf{r}_{\varepsilon k}\!-\!\mathrm{skw} (\nabla\mathbf{v}_{\varepsilon k})\mathbf{m}_{\varepsilon k}\bigr{)}}{(1\!+\! \varepsilon|\theta_{\varepsilon k}|^{1/2})\det\mathbf{F}_{\varepsilon k}}\Bigr{)} \widetilde{\theta}\,\biggr{)}\mathrm{d}\mathbf{x}\mathrm{d}t\\ +\!\int_{\Omega}\!\omega(\mathbf{F}_{0},\theta_{0,\varepsilon}) \widetilde{\theta}(0)\,\mathrm{d}\mathbf{x}+\!\!\int_{0}^{T}\!\!\!\int_{\Gamma} \Bigl{(}h_{\varepsilon}(\theta_{\varepsilon k})\!+\!\frac{\nu_{\flat}|\mathbf{v}_{ \varepsilon k}|^{p}}{2\!+\!\varepsilon|\mathbf{v}_{\varepsilon k}|^{p}}\Bigr{)} \widetilde{\theta}\,\mathrm{d}S\mathrm{d}t=0\\ \text{ with }\quad w_{\varepsilon k}=\omega(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k}) \tag{3.27f}\] holds for any \(\widetilde{\theta}\in C^{1}(I;Z_{k})\) with \(\widetilde{\theta}(T)=0\). 
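The role of the mollification in \(\xi_{\varepsilon}\) from (3.25f), which reappears in (3.27f), is that the approximate heat source is a priori bounded (roughly by \(1/\varepsilon\) times the involved coefficients) independently of \(\boldsymbol{v}_{\varepsilon k}\) and \(\boldsymbol{r}_{\varepsilon k}\). A small numerical illustration of this uniform bound, with made-up coefficient values that are not taken from the paper:

```python
import numpy as np

# Term-by-term, each contribution to xi_eps from (3.25f) is bounded uniformly:
#   nu1|e|^p/(1+eps|e|^p) <= nu1/eps,   nu2|G|^p/(1+eps|G|^p) <= nu2/eps,
#   tau|r|^2/(1+eps|r|^2) <= tau/eps,   h_c|r|/(1+eps|r|^2)  <= h_c/(2*sqrt(eps)).
nu1, nu2, tau, h_c, p, eps = 1.0, 1.0, 1.0, 1.0, 3.0, 0.1

def xi_eps(e, G, r):
    # e, G, r play the role of the magnitudes |e(v)|, |nabla^2 v|, |r|
    num = nu1 * e**p + tau * r**2 + h_c * r + nu2 * G**p
    return num / (1.0 + eps * e**p + eps * G**p + eps * r**2)

bound = (nu1 + nu2 + tau) / eps + h_c / (2.0 * np.sqrt(eps))
e, G, r = np.random.default_rng(0).uniform(0.0, 1e3, size=(3, 100_000))
assert np.all(xi_eps(e, G, r) <= bound)
```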
Existence of this solution is based on the standard theory of systems of ordinary differential equations first locally in time combined here with the abstract \(W^{1,r}(\Omega)\)- and \(L^{2}(\Omega)\) valued differential equations based on Lemmas 3.3 and 3.4 for the scalar, the vector, and the tensor transport equations (3.27a-c) and then by successive prolongation on the whole time interval based on the \(L^{\infty}\)-estimates below. Usage of Lemmas 3.3 and 3.4 with the fixed initial conditions \(\varrho_{0}\), \(\boldsymbol{F}_{0}\), and \(\boldsymbol{m}_{0}\) defines the nonlinear operators \(\mathfrak{R}:I\times L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\to W^{1,r}(\Omega)\), \(\mathfrak{F}:I\times L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\to W^{1,r}( \Omega;\mathbb{R}^{\mathrm{d}\times d})\), and \(\mathfrak{M}:I\times L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\times L^{2}(I \times\Omega;\mathbb{R}^{d})\to L^{2}(\Omega;\mathbb{R}^{d})\) by \[\varrho_{\varepsilon k}(t)=\mathfrak{R}\big{(}t,\boldsymbol{v}_{\varepsilon k }\big{)}\,,\quad\boldsymbol{F}_{\varepsilon k}(t)=\mathfrak{F}\big{(}t, \boldsymbol{v}_{\varepsilon k}\big{)}\,,\ \ \text{and}\ \ \boldsymbol{m}_{\varepsilon k}(t)=\mathfrak{M}\big{(}t, \boldsymbol{v}_{\varepsilon k},\boldsymbol{r}_{\varepsilon k}\big{)}\,. \tag{3.28}\] _Step 3: first a priori estimates._ In the Galerkin approximation, it is legitimate to use \(\widetilde{\boldsymbol{v}}=\boldsymbol{v}_{\varepsilon k}\) for (3.27d) and \(\widetilde{\theta}=\theta_{\varepsilon k}\) for (3.27f). We take the benefit from having the transport equations (3.27a,b) non-discretized and thus we can test them by the nonlinearities \(|\boldsymbol{v}_{\varepsilon k}|^{2}/2\) and \([\pi_{\lambda}\varphi(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{ \varepsilon k})/\mathrm{det}\,\boldsymbol{F}_{\varepsilon k}]^{\prime}_{ \boldsymbol{F}}\), respectively. In particular, we can use the calculus (2.34) which holds also for \(\pi_{\lambda}\varphi\) instead of \(\varphi\) and the calculus (2.38) also for the semi-Galerkin approximate solution. Also we can use the calculus (2.39)-(2.46) with \(\pi_{\lambda}\zeta^{\prime}_{\boldsymbol{F}}/(1+|\theta|)\) and \(\pi_{\lambda}\zeta^{\prime}_{\boldsymbol{m}}/(1+|\theta|^{1/2})\) and \(\kappa_{\lambda}\) instead of \(\zeta^{\prime}_{\boldsymbol{F}}\) and \(\zeta^{\prime}_{\boldsymbol{m}}\) and \(\kappa\), respectively, using also that we have the nondiscretized equation (3.27c) at disposal. The philosophy of the regularization (3.25) is that, for this estimation procedure, the system decouples to the magneto-mechanical part and the thermal part which allows for basic estimates independent of \(\boldsymbol{v}_{\varepsilon k}\), \(\boldsymbol{m}_{\varepsilon k}\), and \(\boldsymbol{r}_{\varepsilon k}\). 
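Let us indicate, only schematically and under sufficient smoothness (a formal sketch, with the \(\boldsymbol{m}\)-dependence and the regularization \(\pi_{\lambda}\) suppressed), the calculus behind testing the \(\boldsymbol{F}\)-transport equation (3.27b) by \([\varphi(\boldsymbol{F})/\det\boldsymbol{F}]^{\prime}_{\boldsymbol{F}}\). Writing \(\dot{(\,\cdot\,)}=\frac{\partial}{\partial t}(\,\cdot\,)+(\boldsymbol{v}_{\varepsilon k}{\cdot}\nabla)(\,\cdot\,)\) for the convective derivative, (3.27b) means \(\dot{\boldsymbol{F}}_{\varepsilon k}=(\nabla\boldsymbol{v}_{\varepsilon k})\boldsymbol{F}_{\varepsilon k}\), so that, by Jacobi's formula, the convective derivative of \(\det\boldsymbol{F}_{\varepsilon k}\) equals \((\det\boldsymbol{F}_{\varepsilon k})\,\mathrm{div}\,\boldsymbol{v}_{\varepsilon k}\) and
\[
\frac{\partial}{\partial t}\Big(\frac{\varphi(\boldsymbol{F}_{\varepsilon k})}{\det\boldsymbol{F}_{\varepsilon k}}\Big)
+\boldsymbol{v}_{\varepsilon k}{\cdot}\nabla\Big(\frac{\varphi(\boldsymbol{F}_{\varepsilon k})}{\det\boldsymbol{F}_{\varepsilon k}}\Big)
=\frac{\varphi^{\prime}(\boldsymbol{F}_{\varepsilon k})\boldsymbol{F}_{\varepsilon k}^{\top}{:}\nabla\boldsymbol{v}_{\varepsilon k}}{\det\boldsymbol{F}_{\varepsilon k}}
-\frac{\varphi(\boldsymbol{F}_{\varepsilon k})}{\det\boldsymbol{F}_{\varepsilon k}}\,\mathrm{div}\,\boldsymbol{v}_{\varepsilon k}\,.
\]
Integrating over \(\Omega\) and using the Green formula with \(\boldsymbol{v}_{\varepsilon k}{\cdot}\boldsymbol{n}=0\) on \(\Gamma\), the two terms containing \(\mathrm{div}\,\boldsymbol{v}_{\varepsilon k}\) cancel each other, so that
\[
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\varphi(\boldsymbol{F}_{\varepsilon k})}{\det\boldsymbol{F}_{\varepsilon k}}\,\mathrm{d}\boldsymbol{x}
=\int_{\Omega}\frac{\varphi^{\prime}(\boldsymbol{F}_{\varepsilon k})\boldsymbol{F}_{\varepsilon k}^{\top}}{\det\boldsymbol{F}_{\varepsilon k}}{:}\nabla\boldsymbol{v}_{\varepsilon k}\,\mathrm{d}\boldsymbol{x}
\]
(note that \(\varphi^{\prime}(\boldsymbol{F})\boldsymbol{F}^{\top}{:}\nabla\boldsymbol{v}=\varphi^{\prime}(\boldsymbol{F})\boldsymbol{F}^{\top}{:}\boldsymbol{e}(\boldsymbol{v})\) whenever \(\varphi^{\prime}(\boldsymbol{F})\boldsymbol{F}^{\top}\) is symmetric, as frame indifference of \(\varphi\), if assumed, ensures). This is, up to the suppressed \(\pi_{\lambda}\)- and \(\boldsymbol{m}\)-dependence, the mechanism by which the conservative stress tested by \(\boldsymbol{e}(\boldsymbol{v}_{\varepsilon k})\) in (3.27d) turns into the stored-energy term under the time derivative in the energy estimate (3.29) below.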
Specifically, from (3.27d) tested by \(\boldsymbol{v}_{\varepsilon k}\) and (3.27e) tested by \(\boldsymbol{0}\), like (2.49) we obtain the inequality \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\!\frac{\varrho_{ \varepsilon k}}{2}|\boldsymbol{v}_{\varepsilon k}|^{2}+\frac{\pi_{\lambda}( \boldsymbol{F}_{\varepsilon k})\varphi(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k})}{\mathrm{det}\,\boldsymbol{F}_{\varepsilon k} }+\frac{\kappa_{\lambda}(\boldsymbol{F}_{\varepsilon k})}{2\,\mathrm{det}\, \boldsymbol{F}_{\varepsilon k}}|\nabla\boldsymbol{m}_{\varepsilon k}|^{2}-\mu_ {0}\boldsymbol{h}_{\mathrm{ext}}\!\cdot\!\boldsymbol{m}_{\varepsilon k}\, \mathrm{d}\boldsymbol{x}\] \[\qquad+\!\int_{\Omega}\!\xi_{\varepsilon}(\boldsymbol{F}_{ \varepsilon k},\theta_{\varepsilon k};\boldsymbol{e}(\boldsymbol{v}_{ \varepsilon k}),\nabla^{2}\boldsymbol{v}_{\varepsilon k},\boldsymbol{r}_{ \varepsilon k})\,\mathrm{d}\boldsymbol{x}+\!\int_{\Gamma}\!\nu_{\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! with \(C_{\varepsilon}\) depending on \(\varepsilon>0\) considered fixed in this Step. The former estimate in (3.31a) relies also on the Navier-type boundary conditions and allows us to use Lemma 3.3 to obtain the estimate \[\|\mathbf{F}_{\varepsilon k}\|_{L^{\infty}(I;W^{1,r}(\varOmega;\mathbb{R }^{d\times d}))}\leq C_{r,\varepsilon}\quad\text{with}\quad\Big{\|}\frac{1}{ \det\mathbf{F}_{\varepsilon k}}\Big{\|}_{L^{\infty}(I;W^{1,r}(\varOmega))}\leq C_{r,\varepsilon}\,, \tag{3.31c}\] \[\|\varrho_{\varepsilon k}\|_{L^{\infty}(I;W^{1,r}(\varOmega))} \leq C_{r,\varepsilon}\quad\text{ with}\quad\Big{\|}\frac{1}{\varrho_{\varepsilon k}}\Big{\|}_{L^{ \infty}(I;W^{1,r}(\varOmega))}\leq C_{r,\varepsilon}\,,\] (3.31d) \[\|\mathbf{v}_{\varepsilon k}\|_{L^{\infty}(I;L^{2}(\varOmega; \mathbb{R}^{d}))}\leq C_{\varepsilon}\,,\quad\text{ and}\] (3.31e) \[\|\mathbf{m}_{\varepsilon k}\|_{L^{\infty}(I;H^{1}(\varOmega; \mathbb{R}^{d}))}\leq C_{\varepsilon}\,; \tag{3.31f}\] for (3.31e) and (3.31f) we used argumentation like in (3.19c) and (3.31f), respectively. For \(\lambda\) and \(\varepsilon\) fixed, it is important that these estimates can be made independently of \(\theta\) since \(\mathbf{T}_{\lambda,\varepsilon}\) is a-priori bounded. It is also important that, due to the latter estimate in (3.31c), the singularity in \(\zeta(\cdot,\mathbf{m},\theta)\) is not active and \(\omega(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\) is well defined. 
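The way the former estimate in (3.31a) feeds into Lemmas 3.3 and 3.4 can be recorded explicitly (a sketch): since \(p>d\), Morrey's embedding \(W^{1,p}(\Omega)\subset L^{\infty}(\Omega)\) gives
\[
\|\nabla\boldsymbol{v}_{\varepsilon k}\|_{L^{1}(I;L^{\infty}(\Omega;\mathbb{R}^{d\times d}))}
\le T^{1/p^{\prime}}\|\nabla\boldsymbol{v}_{\varepsilon k}\|_{L^{p}(I;L^{\infty}(\Omega;\mathbb{R}^{d\times d}))}
\le CT^{1/p^{\prime}}\|\boldsymbol{v}_{\varepsilon k}\|_{L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))}\,,
\]
so that the velocity bound in \(L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\) yields \(\nabla\boldsymbol{v}_{\varepsilon k}\in L^{1}(I;L^{\infty}(\Omega;\mathbb{R}^{d\times d}))\), which is presumably the velocity regularity that the transport estimates of Lemmas 3.3 and 3.4 exploit.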
Also, we have the time derivatives estimated by comparison from (3.27) as \[\Big{\|}\frac{\partial\varrho_{\varepsilon k}}{\partial t}\Big{\|}_{L^{p}(I; L^{r}(\varOmega))}\leq C_{\varepsilon},\quad\Big{\|}\frac{\partial\mathbf{F}_{ \varepsilon k}}{\partial t}\Big{\|}_{L^{p}(I;L^{r}(\varOmega;\mathbb{R}^{d \times d}))}\leq C_{\varepsilon},\quad\Big{\|}\frac{\partial\mathbf{m}_{ \varepsilon k}}{\partial t}\Big{\|}_{L^{2}(I\times\varOmega;\mathbb{R}^{d})} \leq C_{\varepsilon}. \tag{3.32}\] The further estimates can be obtained by testing the Galerkin approximation of (3.27f) by \(\widetilde{\theta}=\theta_{\varepsilon k}\), This is to be made carefully not to see terms as \(\theta_{\varepsilon k}\omega^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon k},\mathbf{m}_ {\varepsilon k},\theta_{\varepsilon k})\):\((\mathbf{v}_{\varepsilon k}\cdot\nabla)\mathbf{F}_{\varepsilon k}\) which is not integrable. To this goal, we consider the convective-derivative form \(\frac{\partial}{\partial t}w+\operatorname{div}(w\mathbf{v})=\dot{w}+w\operatorname {div}\mathbf{v}\) in (3.25f). We denote by \(\widehat{\omega}(\mathbf{F},\mathbf{m},\theta)\) a primitive function to \(\theta\mapsto\theta\omega^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)\) depending smoothly on \(\mathbf{F}\) and on \(\mathbf{m}\), specifically \[\widehat{\omega}_{\lambda}(\mathbf{F},\mathbf{m},\theta)=\int_{0}^{1}\!r\theta^{2} \omega^{\prime}_{\theta}(\mathbf{F},\mathbf{m},r\theta)\,\mathrm{d}r\,. \tag{3.33}\] For \(w_{\varepsilon k}=\omega(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta _{\varepsilon k})\) and using (3.27b), the mentioned test by \(\theta_{\varepsilon k}\) then gives \[\theta_{\varepsilon k}\dot{\mathbf{w}}_{\varepsilon k} =\theta_{\varepsilon k}\omega^{\prime}_{\mathbf{F}}(\mathbf{F}_{ \varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\text{:}\dot{\mathbf{ F}}_{\varepsilon k}\] \[\quad+\theta_{\varepsilon k}\omega^{\prime}_{\mathbf{m}}(\mathbf{F}_{ \varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\text{:}\dot{\mathbf{ m}}_{\varepsilon k}+\theta_{\varepsilon k}\omega^{\prime}_{\theta}(\mathbf{F}_{ \varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\dot{\theta}_{ \varepsilon k}\] \[=\Big{(}\theta_{\varepsilon k}\omega^{\prime}_{\mathbf{F}}(\mathbf{F}_{ \varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})-\widehat{\omega}^{ \prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\Big{)}\text{:}(\nabla\mathbf{v}_{\varepsilon k})\mathbf{F}_{\varepsilon k}\] \[\quad+\Big{(}\theta_{\varepsilon k}\omega^{\prime}_{\mathbf{m}}(\mathbf{F} _{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\text{-}\widehat{ \omega}^{\prime}_{\mathbf{m}}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\Big{)}\text{:}\dot{\mathbf{m}}_{\varepsilon k}\text{+}\,\widehat{ \omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\,. 
\tag{3.34}\] Integrating the last term over \(\varOmega\) gives, by the Green formula, \(\int_{\varOmega}\widehat{\widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k})}\,\mathrm{d}\mathbf{x}=\frac{\mathrm{d}}{ \mathrm{d}t}\int_{\varOmega}\widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k})\,\mathrm{d}\mathbf{x}-\int_{\varOmega} \widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\mathrm{div}\mathbf{v}_{\varepsilon k}\,\mathrm{d}\mathbf{x}+\int_{ \varGamma}\widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k}, \theta_{\varepsilon k})\mathbf{v}_{\varepsilon k}\text{:}\mathbf{n}\,\mathrm{d}S\). Thus, we obtain: \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\varOmega}\widehat{\omega}( \mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\, \mathrm{d}\mathbf{x}+\int_{\varOmega}\mathscr{X}(\mathbf{F}_{\varepsilon k},\theta_{ \varepsilon k})|\nabla\theta_{\varepsilon k}|^{2}\,\mathrm{d}\mathbf{x}\] \[\quad=\int_{\varOmega}\biggl{(}\Big{(}\xi_{\varepsilon}(\mathbf{F}_{ \varepsilon k},\theta_{\varepsilon k};\mathbf{e}(\mathbf{v}_{\varepsilon k}),\nabla^{2} \mathbf{v}_{\varepsilon k},\mathbf{r}_{\varepsilon k})+\frac{\pi_{\lambda}(\mathbf{F}_{ \varepsilon k})\zeta^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{ \varepsilon k},\theta_{\varepsilon k})\mathbf{F}^{\top}_{\varepsilon k}}{(1+ \varepsilon|\theta_{\varepsilon k}|)\det\mathbf{F}_{\varepsilon k}}\text{:}\mathbf{e}(\mathbf{v}_ {\varepsilon k})\] \[\Big{|}\int_{\Omega}\Bigl{(}\widehat{\omega}_{\mathbf{r}}^{\prime}(\mathbf{F}_ {\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})-\theta_{\varepsilon k }\omega_{\mathbf{r}}^{\prime}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_ {\varepsilon k})\Bigr{)}\!\cdot\!\!\big{(}\mathbf{r}_{\varepsilon k}\!-\!\mathrm{ skw}(\nabla\mathbf{v}_{\varepsilon k})\mathbf{m}_{\varepsilon k}\big{)}\,\mathrm{d}\mathbf{x} \bigg{|}\] \[\leq 2C^{2}\bigl{(}|\Omega|+\|\theta_{\varepsilon k}\|_{L^{2}( \Omega)}^{2}\bigr{)}\!+\bigl{\|}(\mathbf{r}_{\varepsilon k}\!-\!\mathrm{skw}( \nabla\mathbf{v}_{\varepsilon k})\mathbf{m}_{\varepsilon k}\bigr{\|}_{L^{2}(\Omega; \mathbb{R}^{d})}^{2}\,. \tag{3.37}\] Using (3.5d) together with \(\omega(\mathbf{F},\mathbf{m},0)=0\) so that \(|\omega(\mathbf{F},\mathbf{m},\theta|\leq C_{K}|\theta|\), the convective terms \(\omega(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})( \mathrm{div}\mathbf{v}_{\varepsilon k})\theta_{\varepsilon k}\) and \(\widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\mathrm{div}\,\mathbf{v}_{\varepsilon k}\) in (3.35) can be estimated as \[\bigg{|}\int_{\Omega}\!\!\Bigl{(}\widehat{\omega}(\mathbf{F}_{\varepsilon k},\mathbf{ m}_{\varepsilon k},\theta_{\varepsilon k})\!-\!\theta_{\varepsilon k }\omega(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k}) \Bigr{)}\!\mathrm{div}\mathbf{v}_{\varepsilon k}\mathrm{d}\mathbf{x}\bigg{|}\leq \frac{C_{K}}{2}\bigl{\|}\theta_{\varepsilon k}\bigr{\|}_{L^{2}(\Omega)}^{2} \bigl{\|}\mathrm{div}\,\mathbf{v}_{\varepsilon k}\bigr{\|}_{L^{\infty}(\Omega)}. \tag{3.38}\] The terms \(\|\theta_{\varepsilon k}\|_{L^{2}(\Omega)}^{2}\) in (3.36) and in (3.37) and in (3.38)are to be treated by the Gronwall inequality. 
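The Gronwall step itself is standard; schematically, assuming (as the coercivity of \(\omega^{\prime}_{\theta}\) is presumed to ensure) that \(y(t):=\int_{\Omega}\widehat{\omega}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})(t,\cdot)\,\mathrm{d}\boldsymbol{x}\ge c\|\theta_{\varepsilon k}(t)\|_{L^{2}(\Omega)}^{2}\), collecting (3.35)--(3.38) leads to an inequality of the type
\[
\frac{\mathrm{d}y}{\mathrm{d}t}\le a(t)\,y(t)+b(t)
\qquad\text{ with }\quad a(t)=C\big(1+\|\mathrm{div}\,\boldsymbol{v}_{\varepsilon k}(t)\|_{L^{\infty}(\Omega)}\big)\ \text{ and }\ b\in L^{1}(I)\,,
\]
whence \(y(t)\le\big(y(0)+\|b\|_{L^{1}(I)}\big)\mathrm{e}^{\|a\|_{L^{1}(I)}}\) for a.a.\ \(t\in I\). Here \(a\in L^{1}(I)\) because \(\|\mathrm{div}\,\boldsymbol{v}_{\varepsilon k}\|_{L^{\infty}(\Omega)}\le C\|\boldsymbol{v}_{\varepsilon k}\|_{W^{2,p}(\Omega;\mathbb{R}^{d})}\) thanks to \(p>d\), and the right-hand side is integrable in time by the velocity bound from (3.31a).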
The boundary term in (3.35) can be estimated by (3.5i), taking also into the account the extension (3.10), as \[\int_{\varGamma}\Big{(}h_{\varepsilon}(\theta_{\varepsilon k})- \frac{\nu_{\flat}|\mathbf{v}_{\varepsilon k}|^{p}}{2\!+\!\varepsilon|\mathbf{v}_{ \varepsilon k}|^{p}}\Big{)}\theta_{\varepsilon k}\,\mathrm{d}S \leq C_{\varepsilon,\nu_{\flat},a}+a\|\theta_{\varepsilon k}\|_{ L^{2}(\varGamma)}^{2}\] \[\leq C_{\varepsilon,\nu_{\flat},a}+aN^{2}\bigl{(}\|\theta_{ \varepsilon k}\|_{L^{2}(\varGamma)}^{2}+\|\nabla\theta_{\varepsilon k}\|_{L^{2 }(\varGamma;\mathbb{R}^{d})}^{2}\bigr{)}\,, \tag{3.39}\] where \(C_{\varepsilon,\nu_{\flat},\delta}\) depends also on \(C\) from (3.5i), \(N\) is the norm of the trace operator \(H^{1}(\varOmega)\to L^{2}(\varGamma)\). For \(a>0\) in (3.39) sufficiently small, the last term can be absorbed in the left-hand side of (3.35). Exploiting again the bound (3.31a,b), we eventually obtain the estimate \[\|\theta_{\varepsilon k}\|_{L^{\infty}(I;L^{2}(\varOmega)\cap L^{ 2}(I;H^{1}(\varOmega))}\leq C\quad\text{and also} \tag{3.40a}\] \[\|w_{\varepsilon k}\|_{L^{\infty}(I;L^{2}(\varOmega))\cap L^{2}( I;H^{1}(\varOmega))}\leq C. \tag{3.40b}\] For (3.40b), we used the calculus \[\nabla w_{\varepsilon k}=[\omega_{\lambda}]^{\prime}_{\theta}( \boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k},\theta_{ \varepsilon k})\nabla\theta_{\varepsilon k}+[\omega_{\lambda}]^{\prime}_{ \boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k },\theta_{\varepsilon k})\nabla\boldsymbol{F}_{\varepsilon k}\] \[+[\omega_{\lambda}]^{\prime}_{\boldsymbol{m}}(\boldsymbol{F}_{ \varepsilon k},\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\nabla \boldsymbol{m}_{\varepsilon k}\in L^{2}(I{\times}\Omega;\mathbb{R}^{d}) \tag{3.41}\] together with the already proved information that \(|[\omega_{\lambda}]^{\prime}_{\boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})|\) is bounded in \(L^{2}(I;L^{2^{*}}(\Omega))\) and \(|[\omega_{\lambda}]^{\prime}_{\boldsymbol{m}}(\boldsymbol{F}_{\varepsilon k },\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})|\) is bounded in \(L^{\infty}(I{\times}\Omega)\) due to (3.5d), while \(|\nabla\boldsymbol{F}_{\varepsilon k}|\) is bounded in \(L^{\infty}(I;L^{r}(\Omega))\) and \(|\nabla\boldsymbol{m}_{\varepsilon k}|\) is bounded in \(L^{\infty}(I;L^{2}(\Omega))\). 
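For the first product in (3.41), the Hölder-exponent bookkeeping reads (here \(2^{*}=2d/(d{-}2)\) denotes the Sobolev exponent if \(d\ge3\), while for \(d=2\) any finite exponent can be taken):
\[
\big\|[\omega_{\lambda}]^{\prime}_{\boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\nabla\boldsymbol{F}_{\varepsilon k}\big\|_{L^{2}(\Omega)}
\le\big\|[\omega_{\lambda}]^{\prime}_{\boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\big\|_{L^{2^{*}}(\Omega)}\,
\big\|\nabla\boldsymbol{F}_{\varepsilon k}\big\|_{L^{r}(\Omega)}\,,
\]
which needs \(1/2^{*}+1/r\le1/2\), i.e.\ \(r\ge d\), and is thus consistent with the standing assumption \(r>d\); integrating in time, this term is indeed bounded in \(L^{2}(I{\times}\Omega)\) as used for (3.40b).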
_Step 4: Limit passage in the magneto-mechanical part for \(k\to\infty\)._ Using the Banach selection principle, we can extract some subsequence of \(\{(\varrho_{\varepsilon k},\boldsymbol{v}_{\varepsilon k},\boldsymbol{F}_{ \varepsilon k},\boldsymbol{m}_{\varepsilon k},\boldsymbol{r}_{\varepsilon k}, w_{\varepsilon k})\}_{k\in\mathbb{N}}\) and its limit \((\varrho_{\varepsilon},\boldsymbol{v}_{\varepsilon},\boldsymbol{F}_{ \varepsilon},\boldsymbol{m}_{\varepsilon},\boldsymbol{r}_{\varepsilon},w_{ \varepsilon}):I\to W^{1,r}(\Omega)\times L^{2}(\Omega;\mathbb{R}^{d})\times W ^{1,r}(\Omega;\mathbb{R}^{d\times d})\times H^{1}(\Omega;\mathbb{R}^{d}) \times L^{2}(\Omega;\mathbb{R}^{d})\times L^{2}(\Omega)\) such that \[\varrho_{\varepsilon k}\to\varrho_{\varepsilon}\] weakly* in \[L^{\infty}(I;W^{1,r}(\Omega))\,, \tag{3.42a}\] \[\boldsymbol{v}_{\varepsilon k}\to\boldsymbol{v}_{\varepsilon}\] weakly* in \[L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d}))\cap L^{p}(I;W^{2,p}( \Omega;\mathbb{R}^{d})),\] (3.42b) \[\boldsymbol{F}_{\varepsilon k}\to\boldsymbol{F}_{\varepsilon}\] weakly* in \[L^{\infty}(I;W^{1,r}(\Omega;\mathbb{R}^{d\times d})),\] (3.42c) \[\boldsymbol{m}_{\varepsilon k}\to\boldsymbol{m}_{\varepsilon}\] weakly* in \[L^{\infty}(I;H^{1}(\Omega;\mathbb{R}^{d})),\] (3.42d) \[\boldsymbol{r}_{\varepsilon k}\to\boldsymbol{r}_{\varepsilon}\] weakly in \[L^{2}(I{\times}\Omega;\mathbb{R}^{d}),\] (3.42e) \[w_{\varepsilon k}\to w_{\varepsilon}\] weakly* in \[L^{\infty}(I;L^{2}(\Omega))\,\cap\,L^{2}(I;H^{1}(\Omega)). \tag{3.42f}\] Relying on the assumption \(r>d\) and on estimates (3.32) on \(\frac{\partial}{\partial t}\varrho_{\varepsilon k},\,\frac{\partial}{ \partial t}\boldsymbol{F}_{\varepsilon k},\) and \(\frac{\partial}{\partial t}\boldsymbol{m}_{\varepsilon k},\) by the Aubin-Lions lemma we also have that \[\varrho_{\varepsilon k}\to\varrho_{\varepsilon}\] strongly in \[C(I{\times}\overline{\Omega})\,, \tag{3.43a}\] \[\boldsymbol{F}_{\varepsilon k}\to\boldsymbol{F}_{\varepsilon}\] strongly in \[C(I{\times}\overline{\Omega};\mathbb{R}^{d\times d}),\] and (3.43b) \[\boldsymbol{m}_{\varepsilon k}\to\boldsymbol{m}_{\varepsilon}\] strongly in \[C(I{\times}\overline{\Omega};\mathbb{R}^{d})\,. \tag{3.43c}\] This already allows for the limit passage in the evolution-and-transport equations (3.27), cf. (3.7) and (3.9). Further, by comparison in the equation (3.25f) with the boundary condition (3.26c) in its Galerkin approximation, we obtain a bound on \(\frac{\partial}{\partial t}w_{\varepsilon k}\) in seminorms \(|\cdot|_{l}\) on \(L^{2}(I;H^{1}(\Omega)^{*})\) arising from this Galerkin approximation, defined as \(|f|_{l}:=\sup\{\int_{0}^{T}\!\!\int_{\Omega}f\widetilde{\theta}\,\mathrm{d} \boldsymbol{x}\mathrm{d}t;\ \|\widetilde{\theta}\|_{L^{2}(I;H^{1}(\Omega))}\leq 1,\ \widetilde{ \theta}(t)\in Z_{l}\ \text{for}\ t\in I\}\). 
More specifically, for any \(k\geq l\), we can estimate \[\Big{|}\frac{\partial w_{\varepsilon k}}{\partial t}\Big{|}_{l} =\sup_{\begin{subarray}{c}\widetilde{\theta}(t)\in Z_{l}\ \text{for}\ t\in I \end{subarray}}\ \int_{0}^{T}\!\!\!\int_{\Omega}\left(-\mathscr{K}(\boldsymbol{F}_{ \varepsilon k},\theta_{\varepsilon k})\nabla\theta_{\varepsilon k}{\cdot} \nabla\widetilde{\theta}+\Big{(}\xi_{\varepsilon}(\boldsymbol{F}_{\varepsilon k },\theta_{\varepsilon k};\boldsymbol{e}(\boldsymbol{v}_{\varepsilon k}),\nabla^{2} \boldsymbol{v}_{\varepsilon k},\boldsymbol{r}_{\varepsilon k})\] \[\qquad\qquad\qquad\qquad+\,\frac{\pi_{\lambda}(\boldsymbol{F}_{ \varepsilon k})\zeta^{\prime}_{\boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\boldsymbol{F}_{\varepsilon k }^{\top}}{(1{+}\varepsilon|\theta_{\varepsilon k}|)\det\boldsymbol{F}_{ \varepsilon k}}{\cdot}\boldsymbol{e}(\boldsymbol{v}_{\varepsilon k})\] \[\qquad\qquad\qquad\qquad\qquad+\,\frac{\pi_{\lambda}(\boldsymbol{F}_ {\varepsilon k})\zeta^{\prime}_{\boldsymbol{m}}(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})}{(1{+}\varepsilon|\theta_{ \varepsilon k}|^{1/2})\det\boldsymbol{F}_{\varepsilon k}}{\cdot}\big{(} \boldsymbol{r}_{\varepsilon k}{-}\mathrm{skw}(\nabla\boldsymbol{v}_{\varepsilon k })\boldsymbol{m}_{\varepsilon k}\big{)}\Big{)}\widetilde{\theta}\right)\!\mathrm{d} \boldsymbol{x}\mathrm{d}t\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ with some \(C\) depending on the estimates (3.31a,b) and (3.40a) but independent on \(l\in N\). Thus, by (3.42f) and by a generalized Aubin-Lions theorem [46, Ch.8], we obtain \[w_{\varepsilon k}\to w_{\varepsilon}\] strongly in \[L^{s}(I{\times}\Omega)\] for \[1\leq s<2+4/d\] . (3.45a) Since \[\omega_{\lambda}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k },\cdot)\] is increasing, we can write \[\theta_{\varepsilon k}=[\omega_{\lambda}(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k},\cdot)]^{-1}(w_{\varepsilon k})\] . 
Thanks to the continuity of \[(\boldsymbol{F},\boldsymbol{m},w)\mapsto[\omega_{\lambda}(\boldsymbol{F}, \boldsymbol{m},\cdot)]^{-1}(w):\mathbb{R}^{d\times d}\times\mathbb{R}^{d} \times\mathbb{R}\to\mathbb{R}\] and the at most linear growth with respect to \[w\] uniformly with respect to \[\boldsymbol{F}\] from any compact \[K\subset\mathrm{GL}^{+}(d)\] , cf. ( 3.5c ), we have also \[\theta_{\varepsilon k}\to\theta_{\varepsilon}=[\omega_{\lambda}( \boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{\varepsilon k},\cdot)]^{-1}(w_ {\varepsilon})\] strongly in \[L^{s}(I{\times}\Omega)\] for \[1\leq s<2+4/d\] ; ( 3.45b ) actually, (3.45a,b) results from interpolation of (3.40). Note that we do not have any direct information about \[\frac{\partial}{\partial t}\theta_{\varepsilon k}\] so that we could not use the Aubin-Lions arguments straight for \[\{\theta_{\varepsilon k}\}_{k\in\mathbb{N}}\] . Thus, by the continuity of the corresponding Nemytskii (or here simply superposition) mappings, also the conservative part of the regularized Cauchy stress as well as the heat part of the internal energy, namely \[\boldsymbol{T}_{\lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon k },\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\to\boldsymbol{T}_{ \lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{\varepsilon },\theta_{\varepsilon})\] strongly in \[L^{c}(I{\times}\Omega;\mathbb{R}^{d\times d}_{\mathrm{sym}})\] , \[1\leq c<\infty\] , ( 3.45c ) \[\frac{\pi_{\lambda}(\boldsymbol{F}_{\varepsilon k})\zeta^{\prime} _{\boldsymbol{F}}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m}_{\varepsilon k },\theta_{\varepsilon k})\boldsymbol{F}^{\top}_{\varepsilon k}}{(1{+} \varepsilon[\theta_{\varepsilon k}])\det\boldsymbol{F}_{\varepsilon k}}\] strongly in \[L^{c}(I{\times}\Omega;\mathbb{R}^{d\times d}_{\mathrm{sym}})\] , \[1\leq c<\infty\] , ( 3.45d ) \[\boldsymbol{S}_{\lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon k },\boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\to\boldsymbol{S}_{ \lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{\varepsilon },\theta_{\varepsilon})\] strongly in \[L^{2}(I;L^{2}(\Omega;\mathbb{R}^{d\times d}_{\mathrm{skew}}))\] , ( 3.45e ) \[\mathscr{S}_{\lambda}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{m }_{\varepsilon k},\nabla\boldsymbol{m}_{\varepsilon k})\] \[\to\mathscr{S}_{\lambda}(\boldsymbol{F}_{\varepsilon},\boldsymbol{ m}_{\varepsilon},\nabla\boldsymbol{m}_{\varepsilon})\] weakly* in \[L^{\infty}(I;L^{2^{\star}2/(2^{\star}+2)}(\Omega;\mathbb{R}^{d\times d\times d}))\] , ( 3.45f ) \[\widehat{\boldsymbol{t}}_{\lambda}(\boldsymbol{F}_{\varepsilon k}, \boldsymbol{m}_{\varepsilon k},\theta_{\varepsilon k})\to\widehat{\boldsymbol{ t}}_{\lambda}(\boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{\varepsilon},\theta_{ \varepsilon})\] strongly in \[L^{c}(I{\times}\Omega;\mathbb{R}^{d})\] , \[1\leq c<2+4/d\] . ( 3.45h ) It is important to notice that \[\nabla(\varrho_{\varepsilon k}\boldsymbol{v}_{\varepsilon k})=\nabla\varrho _{\varepsilon k}\otimes\boldsymbol{v}_{\varepsilon k}+\varrho_{\varepsilon k} \nabla\boldsymbol{v}_{\varepsilon k}\] is bounded in \[L^{\infty}(I;L^{r}(\Omega;\mathbb{R}^{d\times d}))\] due to the already obtained bounds (3.31a,c,d). Therefore, \[\varrho_{\varepsilon k}\boldsymbol{v}_{\varepsilon k}\] converges weakly* in \[L^{\infty}(I;W^{1,r}(\Omega;\mathbb{R}^{d}))\] . 
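Before identifying the limit of \(\varrho_{\varepsilon k}\boldsymbol{v}_{\varepsilon k}\), let us record for the reader's convenience the interpolation behind the exponent \(2{+}4/d\) in (3.45a,b): by the Gagliardo-Nirenberg inequality, \(\|u\|_{L^{s}(\Omega)}\le C\|u\|_{H^{1}(\Omega)}^{\alpha}\|u\|_{L^{2}(\Omega)}^{1-\alpha}\) with \(1/s=1/2-\alpha/d\), so that
\[
\int_{0}^{T}\!\|u\|_{L^{s}(\Omega)}^{s}\,\mathrm{d}t
\le C\,\|u\|_{L^{\infty}(I;L^{2}(\Omega))}^{s(1-\alpha)}\int_{0}^{T}\!\|u\|_{H^{1}(\Omega)}^{s\alpha}\,\mathrm{d}t<\infty
\quad\text{ provided }\ s\alpha\le2\,.
\]
Choosing \(s\alpha=2\), i.e.\ \(\alpha=2/s\), the relation \(1/s=1/2-2/(sd)\) gives precisely \(s=2{+}4/d\), so that \(L^{\infty}(I;L^{2}(\Omega))\cap L^{2}(I;H^{1}(\Omega))\subset L^{2+4/d}(I{\times}\Omega)\); the strict inequality \(s<2{+}4/d\) in (3.45a,b) leaves the room needed for upgrading the compactness to strong convergence.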
The limit of \[\varrho_{\varepsilon k}\boldsymbol{v}_{\varepsilon k}\] can be identified as \[\varrho_{\varepsilon}\boldsymbol{v}_{\varepsilon}\] because we already showed that \[\varrho_{\varepsilon k}\] converges strongly in \[(\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq with \(C\) independent of \(k\). This yields a bound for \(\frac{\partial}{\partial t}(\varrho_{\varepsilon k}\mathbf{v}_{\varepsilon k})\) in a seminorm on \(L^{1}(I;L^{2}(\varOmega;\mathbb{R}^{d}))\)\(+L^{p^{\prime}}(I;W^{2,p}(\varOmega;\mathbb{R}^{d})^{*})\) induced by the Galerkin discretization by \(V_{k}\), and by any \(V_{l}\) with \(l\leq k\) with \(C\) in (3.46) independent of \(k\). Here we used in particular that \(\mathbf{K}_{\lambda}(\mathbf{F}_{\varepsilon k},\nabla\mathbf{m}_{\varepsilon k})\) and \(\mathbf{S}_{\lambda,\varepsilon}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k}, \varrho_{\varepsilon k})\) are bounded in \(L^{\infty}(I;L^{1}(\varOmega;\mathbb{R}^{d\times d}))\) which is surely in duality with \(L^{p}(I;W^{1,p}(\varOmega;\mathbb{R}^{d\times d}))\) and that \(\mathscr{S}_{\lambda}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\nabla \mathbf{m}_{\varepsilon k})\) is bounded in \(L^{\infty}(I;L^{2*2/(2^{+}2)}(\varOmega;\mathbb{R}^{d\times d\times d}))\) which is in duality with \(L^{p}(I\!\times\!\varOmega;\mathbb{R}^{d\times d\times d})\) if \(p>d\). By a generalization of the Aubin-Lions compact-embedding theorem, cf. [46, Lemma 7.7], we then obtain \[\varrho_{\varepsilon k}\,\mathbf{v}_{\varepsilon k}\to\varrho_{k}\mathbf{v}_{ \varepsilon}\qquad\quad\text{strongly in }L^{c}(I\!\times\!\varOmega;\mathbb{R}^{d})\quad\text{for any }1\leq c<4\,.\] (3.47a) Since obviously \[\mathbf{v}_{\varepsilon k}=(\varrho_{\varepsilon k}\mathbf{v}_{\varepsilon k})(1/ \varrho_{\varepsilon k})\], thanks to ( 3.43a ) and ( 3.47a ), we also have that \[\mathbf{v}_{\varepsilon k}\to\mathbf{v}_{\varepsilon}\qquad\qquad\quad\text{strongly in }L^{c}(I\!\times\!\varOmega;\mathbb{R}^{d})\quad\text{with any }1\leq c<4\,. \tag{3.47b}\] For the limit passage in the momentum equation, one uses the monotonicity of the dissipative stress, i.e., the monotonicity of the quasilinear operator \(\mathbf{v}\mapsto\text{div}\big{(}\text{div}(\nu_{2}|\nabla^{2}\mathbf{v}|^{p-2}\nabla ^{2}\mathbf{v})-\nu_{1}|\mathbf{e}(\mathbf{v})|^{p-2}\mathbf{e}(\mathbf{v})\big{)}\), as well as of the time-derivative operator. One could use the already obtained weak convergences and the so-called Minty trick but, later, we will need a strong convergence of \(\mathbf{e}(\mathbf{v}_{\varepsilon k})\) to pass to the limit in the heat equation. Thus we first prove this strong convergence, which then allows for the limit passage in the momentum equation directly. We will use the weak convergence of the inertial force \[\int_{0}^{T}\!\!\!\int_{\varOmega}\Big{(}\frac{\partial(\varrho_{\varepsilon k }\mathbf{v}_{\varepsilon k})}{\partial t}+\text{div}(\varrho_{\varepsilon k}\mathbf{v }_{\varepsilon k}\!\otimes\!\mathbf{v}_{\varepsilon k})\!\Big{)}\!\cdot\!\widetilde {\mathbf{v}}\,\text{d}\mathbf{x}\text{d}t\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[=\int_{0}^{T}\!\!\!\int_{\Omega}\Big{(}\widehat{\mathbf{t}}_{\lambda, \varepsilon}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k })-\mathbf{h}_{\varepsilon k}+\tau\mathbf{r}_{\varepsilon k}+h_{{}_{\rm C}}(\mathbf{F}_{ \varepsilon k},\theta_{\varepsilon k})\mathbf{d}_{\varepsilon k}-\frac{\mathbf{m}_{ \varepsilon k}\!\times\!\mathbf{r}_{\varepsilon k}}{\gamma(\mathbf{F}_{\varepsilon k },\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})}\Big{)}\cdot(\mathbf{m}_{ \varepsilon k}-\widetilde{\mathbf{m}}_{k})\] \[\qquad\qquad\qquad\qquad+\frac{\mathsf{\kappa}_{\lambda}(\mathbf{F}_ {\varepsilon k})\nabla\mathbf{m}_{\varepsilon k}}{\det\mathbf{F}_{\varepsilon k}}\! \cdot\!\nabla(\mathbf{m}_{\varepsilon}\!-\!\widetilde{\mathbf{m}}_{k})-\frac{\mathsf{ \kappa}_{\lambda}(\mathbf{F}_{\varepsilon k})\nabla\mathbf{m}_{\varepsilon}}{\det \mathbf{F}_{\varepsilon k}}\!\cdot\!\nabla(\mathbf{m}_{\varepsilon k}\!-\!\mathbf{m}_{ \varepsilon})\,\mathrm{d}\mathbf{x}\mathrm{d}t\stackrel{{ k\to \infty}}{{\to}}0\] with some \(\mathbf{d}_{\varepsilon k}\in\mathrm{Dir}(\mathbf{r}_{\varepsilon k})\), where \(c_{\mathsf{\kappa},\lambda}:=\inf_{F\in\mathbb{R}^{d\times d}}\mathsf{\kappa} _{\lambda}(F)/\det F\) is positive thanks to our definition (3.24b). This convergence to \(0\) for \(k\to\infty\) is due to (3.24d,e). Therefore, \[\mathbf{m}_{\varepsilon k}\to\mathbf{m}_{\varepsilon}\qquad\qquad\text{ strongly in }L^{2}(I;H^{1}(\Omega;\mathbb{R}^{d}))\,,\] (3.51a) which also improves the weak*-convergence in ( 3.45f ) and ensures convergence \[\mathbf{K}_{\lambda}(\mathbf{F}_{\varepsilon k},\nabla\mathbf{m}_{\varepsilon k})\] \[\to\mathbf{K}_{\lambda}(\mathbf{F}_{\varepsilon},\nabla\mathbf{m}_{ \varepsilon})\] in \[L^{1}(I\!\times\!\Omega;\mathbb{R}^{d\times d}_{\rm sym})\]. Moreover, by interpolation with ( 3.24d ), \[\nabla\mathbf{m}_{\varepsilon k}\to\nabla\mathbf{m}_{\varepsilon}\qquad\text{ strongly in }L^{c}(I;L^{2}(\Omega;\mathbb{R}^{d\times d}))\text{ for any }1\leq c<\infty. \tag{3.51b}\] Thus \(|\nabla\mathbf{m}_{\varepsilon k}|^{2}\to|\nabla\mathbf{m}_{\varepsilon}|^{2}\) and \(\nabla\mathbf{m}_{\varepsilon k}\!\otimes\!\nabla\mathbf{m}_{\varepsilon k}\to\nabla \mathbf{m}_{\varepsilon}\!\otimes\!\nabla\mathbf{m}_{\varepsilon}\) strongly in \(L^{c}(I;L^{1}(\Omega;\mathbb{R}^{d\times d}))\) for any \(1\leq c<\infty\), which is needed for the convergence in (3.27e) when multiplied by \(\mathbf{e}(\mathbf{v}_{\varepsilon k})\in L^{p}(I;L^{\infty}(\Omega;\mathbb{R}^{d \times d}_{\rm sym}))\). 
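The elementary estimate behind the convergence of these quadratic terms is, schematically,
\[
\big\|\,|\nabla\boldsymbol{m}_{\varepsilon k}|^{2}-|\nabla\boldsymbol{m}_{\varepsilon}|^{2}\big\|_{L^{1}(\Omega)}
\le\big(\|\nabla\boldsymbol{m}_{\varepsilon k}\|_{L^{2}(\Omega;\mathbb{R}^{d\times d})}
+\|\nabla\boldsymbol{m}_{\varepsilon}\|_{L^{2}(\Omega;\mathbb{R}^{d\times d})}\big)
\big\|\nabla\boldsymbol{m}_{\varepsilon k}-\nabla\boldsymbol{m}_{\varepsilon}\big\|_{L^{2}(\Omega;\mathbb{R}^{d\times d})}\,,
\]
and analogously for \(\nabla\boldsymbol{m}_{\varepsilon k}\!\otimes\!\nabla\boldsymbol{m}_{\varepsilon k}\); integrating in time and combining (3.51b) with the bound in \(L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d\times d}))\) then gives the stated convergence in \(L^{c}(I;L^{1}(\Omega;\mathbb{R}^{d\times d}))\) for any \(1\le c<\infty\).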
We now use the Galerkin approximation of the regularized momentum equation (3.27d) tested by \(\widetilde{\mathbf{v}}=\mathbf{v}_{\varepsilon k}-\widetilde{\mathbf{v}}_{k}\) with \(\widetilde{\mathbf{v}}_{k}:I\to V_{k}\) an approximation of \(\mathbf{v}_{\varepsilon}\) in the sense that \(\widetilde{\mathbf{v}}_{k}\to\mathbf{v}_{\varepsilon}\) strongly in \(L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d}))\) and \(\nabla^{2}\widetilde{\mathbf{v}}_{k}\to\nabla^{2}\mathbf{v}_{\varepsilon}\) for \(k\to\infty\) strongly in \(L^{p}(I\!\times\!\Omega;\mathbb{R}^{d\times d\times d})\) for \(k\to\infty\). Using also the first inequality in (3.22) and (3.49), we can estimate \[\frac{1}{2C_{r,\varepsilon}}\big{\|}\mathbf{v}_{\varepsilon k}(T)\!- \!\mathbf{v}_{\varepsilon}(T)\big{\|}_{L^{2}(\Omega;\mathbb{R}^{d})}^{2}\!\!+\,\nu_ {1}c_{p}\|\mathbf{e}(\mathbf{v}_{\varepsilon k}\!-\!\mathbf{v}_{\varepsilon})\|_{L^{p}(I \times\Omega;\mathbb{R}^{d\times d})}^{p}\!\!+\,\nu_{2}c_{p}\|\nabla^{2}(\mathbf{ v}_{\varepsilon k}\!-\!\mathbf{v}_{\varepsilon})\|_{L^{p}(I\times\Omega;\mathbb{R}^{d \times d})}^{p}\] \[\leq\int_{\Omega}\frac{\varrho_{\varepsilon k}(T)}{2}\big{|}\mathbf{v} _{\varepsilon k}(T)\!-\!\mathbf{v}_{\varepsilon}(T)\big{|}^{2}\,\mathrm{d}\mathbf{x}+ \int_{0}^{T}\!\!\!\int_{\Gamma}\nu_{\flat}|\mathbf{v}_{\varepsilon k}\!-\!\mathbf{v}_{ \varepsilon}|^{p}\,\mathrm{d}S\mathrm{d}t\] \[\qquad\qquad\qquad+\nu_{2}\big{(}|\nabla^{2}\mathbf{v}_{\varepsilon k }|^{p-2}\nabla^{2}\mathbf{v}_{\varepsilon k}-|\nabla^{2}\mathbf{v}_{\varepsilon}|^{p-2 }\nabla^{2}\mathbf{v}_{\varepsilon}\big{)}\!\cdot\!\nabla^{2}(\mathbf{v}_{\varepsilon k }\!-\!\mathbf{v}_{\varepsilon})\bigg{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t\] \[=\int_{0}^{T}\!\!\!\int_{\Omega}\bigg{(}\Big{(}\sqrt{\frac{\uprho \varrho_{\varepsilon k}}{\det\mathbf{F}_{\varepsilon k}}}\,\mathbf{g}+\mu_{0}(\nabla \mathbf{h}_{\varepsilon k})^{\top}\mathbf{m}_{\varepsilon k}\Big{)}\!\cdot\!(\mathbf{v}_{ \varepsilon k}\!-\!\widetilde{\mathbf{v}}_{k})+\mu_{0}\mathbf{h}_{\varepsilon k}\!\cdot\! \mathbf{m}_{\varepsilon k}\mathrm{div}(\mathbf{v}_{\varepsilon k}\!-\!\widetilde{\mathbf{v} }_{k})\] \[\qquad\quad-\big{(}\mathbf{T}_{\lambda,\varepsilon}(\mathbf{F}_{ \varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{\varepsilon k})\!+\!\mathbf{K}_{ \lambda}(\mathbf{F}_{\varepsilon k},\nabla\mathbf{m}_{\varepsilon k})\!+\!\mathbf{S}_{ \lambda,\varepsilon}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta_{ \varepsilon k})\big{)}\!\cdot\!\mathbf{e}(\mathbf{v}_{\varepsilon k}\!-\!\widetilde{\mathbf{v }}_{k})\] \[\qquad\quad-\nu_{1}|\mathbf{e}(\widetilde{\mathbf{v}}_{k})|^{p-2}\mathbf{e}( \widetilde{\mathbf{v}}_{k})\!\cdot\!\mathbf{e}(\mathbf{v}_{\varepsilon k}\!-\!\widetilde{ \mathbf{v}}_{k})-\nu_{2}\big{(}|\nabla^{2}\widetilde{\mathbf{v}}_{k}|^{p-2}\nabla^{2} \widetilde{\mathbf{v}}_{k}\big{)}\!\cdot\!\nabla^{2}(\mathbf{v}_{\varepsilon k}\!-\! 
\widetilde{\mathbf{v}}_{k})\] \[\qquad\quad+\Big{(}\frac{\partial}{\partial t}(\varrho_{\varepsilon k }\mathbf{v}_{\varepsilon k})+\mathrm{div}(\varrho_{\varepsilon k}\mathbf{v}_{ \varepsilon k}\!\otimes\!\mathbf{v}_{\varepsilon k})\Big{)}\!\cdot\!\widetilde{\mathbf{v}}_{k} -\mathscr{S}_{\lambda}(\mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\nabla\mathbf{m}_{ \varepsilon k})\!\cdot\!\nabla^{2}(\mathbf{v}_{\varepsilon k}\!-\!\widetilde{\mathbf{v}}_{k} )\bigg{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ \[+\int_{0}^{T}\!\!\!\int_{\Omega}\nu_{1}|\mathbf{e}(\mathbf{v}_{\varepsilon k})|^{p-2}\mathbf{e}(\mathbf{v}_{ \varepsilon k}){:}\mathbf{e}(\widetilde{\mathbf{v}}_{k}{-} \mathbf{v}_{\varepsilon})+\nu_{2}|\nabla^{2}\mathbf{v}_{ \varepsilon k}|^{p-2}\nabla^{2}\mathbf{v}_{\varepsilon k}{:}\nabla^{2 }(\widetilde{\mathbf{v}}_{k}{-}\mathbf{v}_{\varepsilon})\,{ \rm d}\mathbf{x}{\rm d}t\] and it converges to zero due to the strong approximation properties of the approximation \(\widetilde{\mathbf{v}}_{k}\) of \(\mathbf{v}_{\varepsilon}\). Here we used (3.48)-(3.49) and also the strong convergence (3.43a), (3.45b), and (3.47). Knowing already (3.45c) and that \(\mathbf{e}(\mathbf{v}_{\varepsilon k}{-}\widetilde{\mathbf{v}}_{k})\to 0\) weakly in \(L^{p}(I;W^{1,p}(\Omega;\mathbb{R}^{d\times d}_{\rm sym}))\), we have that \(\int_{0}^{T}\!\!\int_{\Omega}\mathbf{T}_{\lambda,\varepsilon}( \mathbf{F}_{\varepsilon k},\mathbf{m}_{\varepsilon k},\theta _{\varepsilon k}){:}\mathbf{e}(\mathbf{v}_{\varepsilon k}{-} \widetilde{\mathbf{v}}_{k})\,{\rm d}\mathbf{x}{\rm d}t\to 0\). Thus we obtain the desired strong convergence \[\mathbf{v}_{\varepsilon k}\to\mathbf{v}_{\varepsilon} \mbox{strongly in }L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\] (3.53a) and also of \[\mathbf{v}_{\varepsilon k}(T)\to\mathbf{v}_{\varepsilon}(T)\] in \[L^{2}(\Omega;\mathbb{R}^{d})\]. In fact, executing this procedure for a current time instants \[t\] instead of \[T\], we obtain \[\mathbf{v}_{\varepsilon k}(t)\to\mathbf{v}_{\varepsilon}(t) \mbox{strongly in }L^{2}(\Omega;\mathbb{R}^{d})\,\mbox{ for any }t\in I. \tag{3.53b}\] By (3.42d) and the Aubin-Lions theorem, we also obtain \[\mathbf{m}_{\varepsilon k}\to\mathbf{m}_{\varepsilon} \mbox{strongly in }L^{c}(I;L^{2^{\star}-1/c}(\Omega;\mathbb{R}^{d}))\, \mbox{ for any }1\leq c<\infty. \tag{3.53c}\] It also implies, by continuity of the trace operator \(L^{p}(I;W^{2,p}(\Omega))\to L^{p}(I{\times}\Gamma)\), that \[\mathbf{v}_{\varepsilon k}\big{|}_{I{\times}\Gamma}\to\mathbf{v}_{\varepsilon}\big{|}_{I{\times}\Gamma}\mbox{ \, strongly in }L^{p}(I{\times}\Gamma;\mathbb{R}^{d})\,. \tag{3.53d}\] Having (3.53) at disposal, the limit passage in the Galerkin-approximation of (3.27d) to the weak solution of (3.25b) is then easy. The variational inequality (3.27e) can be converged by lower weak-semicontinuity of its right-hand-side integral functionals. Convergence in (3.27a-c) is due to Lemmas 3.3 and 3.4. 
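The lower weak-semicontinuity invoked for (3.27e) is of the classical convexity type; schematically, for any convex continuous \(\Phi:\mathbb{R}^{d}\to[0,\infty)\) (here essentially \(\Phi(\boldsymbol{r})=\frac{\tau}{2}|\boldsymbol{r}|^{2}+h_{\mbox{\tiny c}}|\boldsymbol{r}|\)),
\[
\boldsymbol{r}_{\varepsilon k}\to\boldsymbol{r}_{\varepsilon}\ \text{ weakly in }L^{2}(I{\times}\Omega;\mathbb{R}^{d})
\qquad\Longrightarrow\qquad
\int_{0}^{T}\!\!\!\int_{\Omega}\Phi(\boldsymbol{r}_{\varepsilon})\,\mathrm{d}\boldsymbol{x}\mathrm{d}t
\le\liminf_{k\to\infty}\int_{0}^{T}\!\!\!\int_{\Omega}\Phi(\boldsymbol{r}_{\varepsilon k})\,\mathrm{d}\boldsymbol{x}\mathrm{d}t\,;
\]
the strongly converging arguments \(\boldsymbol{F}_{\varepsilon k}\) and \(\theta_{\varepsilon k}\) in \(h_{\mbox{\tiny c}}\), as well as the exchange-energy term at time \(T\), are handled by the same weak-strong semicontinuity argument.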
For further purposes, let us mention that the energy dissipation balance (3.29) is inherited in the limit, i.e. \[\frac{{\rm d}}{{\rm d}t}\int_{\Omega}\frac{\varrho_{\varepsilon}} {2}|\mathbf{v}_{\varepsilon}|^{2}+\frac{\pi_{\lambda}(\mathbf{F}_{\varepsilon})\varphi(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon})}{\det\mathbf{F}_{\varepsilon}}+\frac{ \mathbf{\kappa}_{\lambda}(\mathbf{F}_{\varepsilon})}{2\det \mathbf{F}}|\nabla\mathbf{m}_{\varepsilon}|^{2}-\mu_{0} \mathbf{h}_{\rm ext}{\cdot}\mathbf{m}_{\varepsilon}\,{\rm d }\mathbf{x}\] \[+\!\int_{\Omega}\!\!\xi_{\varepsilon}(\mathbf{F}_{ \varepsilon},\theta_{\varepsilon};\mathbf{e}(\mathbf{v}_{ \varepsilon}),\nabla^{2}\mathbf{v}_{\varepsilon},\mathbf{r}_{ \varepsilon})\,{\rm d}\mathbf{x}+\!\int_{\Gamma}\!\!\nu_{\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! strong monotonicity of the operator \(\mathbf{r}\mapsto\tau\mathbf{r}+h_{\mathrm{c}}(\mathbf{F},\theta)\mathrm{dir}_{\varepsilon}( \mathbf{r})\) because the exchange driving force on the right-hand side of (3.25d) is not a compact lower-order term. Thus, as we already passed to the limit in the magneto-mechanical part, we can use the "limsup-trick" and the strict convexity of the potential of the mentioned operator. Specifically, \[\int_{0}^{T}\!\!\bigg{(}\int_{\Omega}\xi_{\varepsilon}\big{(}\mathbf{ F}_{\varepsilon},\theta_{\varepsilon};\mathbf{e}(\mathbf{v}_{\varepsilon}),\nabla^{2} \mathbf{v}_{\varepsilon},\mathbf{r}_{\varepsilon}\big{)}\,\mathrm{d}\mathbf{x}+\int_{ \Gamma}\nu_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! of temperature \(\theta_{\varepsilon}^{-}:=\min(0,\theta_{\varepsilon})\). 
Let us recall the extension (3.10), which in particular gives \(\omega(\mathbf{F},\mathbf{m},\theta^{-})=\theta^{-}\) and \(\omega^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta^{-})=\mathbf{0}\) and also \(\omega^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta^{-})=\mathbf{0}\). Note also that \(\theta_{\varepsilon}^{-}\in L^{2}(I;H^{1}(\varOmega))\), so that it is indeed a legal test for (3.25f). Here we rely on the data qualification \(\nu_{1},\nu_{2},\nu_{\flat}\geq 0\), \(\mathscr{K}=\mathscr{K}(\mathbf{F},\theta)\geq 0\), \(\theta_{0}\geq 0\), and \(h(\theta)\geq 0\) for \(\theta\leq 0\), cf. (3.5f,i,n). Realizing that \(\nabla\theta^{-}=0\) wherever \(\theta>0\) so that \(\nabla\theta\cdot\nabla\theta^{-}=|\nabla\theta^{-}|^{2}\) and that \(\breve{\iota}^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\theta^{-}=\zeta^{\prime }_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta^{-})\theta^{-}=\mathbf{0}\) and \(\breve{\iota}^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)\theta^{-}=\mathbf{0}\) and also \(h(\theta)\theta^{-}=h(\theta^{-})\theta^{-}=0\), this test gives \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{\varepsilon}^{ -}\|_{L^{2}(\varOmega)}^{2}\leq\int_{\varOmega}\theta_{\varepsilon}^{-}\frac {\partial w_{\varepsilon}}{\partial t}+\mathscr{K}(\mathbf{F}_{\varepsilon}, \theta_{\varepsilon})\nabla\theta_{\varepsilon}\cdot\nabla\theta_{\varepsilon }^{-}\,\mathrm{d}\mathbf{x}\] \[= \int_{\varOmega}\left(\!\!w_{\varepsilon}\mathbf{v}_{\varepsilon}\! \cdot\!\nabla\theta_{\varepsilon}^{-}+\Big{(}\xi_{\varepsilon}\big{(}\mathbf{F}_{ \varepsilon},\theta_{\varepsilon};\mathbf{e}(\mathbf{v}_{\varepsilon}),\nabla^{2}\mathbf{ v}_{\varepsilon},\mathbf{r}_{\varepsilon}\big{)}+\frac{\pi_{\lambda}(\mathbf{F}_{ \varepsilon})\breve{\zeta}^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{ \varepsilon},\theta_{\varepsilon})\mathbf{F}_{\varepsilon}^{\top}}{(1\!+\! \varepsilon|\theta_{\varepsilon}|)\det\mathbf{F}_{\varepsilon}}\!\!:\!\mathbf{e}(\bm {v}_{\varepsilon k})\right.\] \[+\frac{\pi_{\lambda}(\mathbf{F}_{\varepsilon})\breve{\iota}^{\prime} _{\mathbf{m}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})}{( 1\!+\!\varepsilon|\theta_{\varepsilon}|^{1/2})\det\mathbf{F}_{\varepsilon}}\!\! \cdot\!\big{(}\mathbf{r}_{\varepsilon}\!-\!\mathrm{skw}(\nabla\mathbf{v}_{\varepsilon })\mathbf{m}_{\varepsilon}\big{)}\Big{)}\theta_{\varepsilon}^{-}\right)\mathrm{d} \mathbf{x}+\int_{\varGamma}\Big{(}h(\theta_{\varepsilon})\!+\!\frac{\nu_{\flat}| \mathbf{v}_{\varepsilon}|^{p}}{2\!+\!\varepsilon|\mathbf{v}_{\varepsilon}|^{p}}\Big{)} \theta_{\varepsilon}^{-}\,\mathrm{d}S\] \[\leq \int_{\varOmega}\!\!w_{\varepsilon}\mathbf{v}_{\varepsilon}\!\cdot \!\nabla\theta_{\varepsilon}^{-}\,\mathrm{d}\mathbf{x}=\!\int_{\varOmega}\!\! \theta_{\varepsilon}^{-}\mathbf{v}_{\varepsilon}\!\cdot\!\nabla\theta_{ \varepsilon}^{-}\,\mathrm{d}\mathbf{x}=\!-\int_{\varOmega}\!|\nabla\theta_{ \varepsilon}^{-}|^{2}\mathrm{div}\,\mathbf{v}_{\varepsilon}\,\mathrm{d}\mathbf{x}\] \[= -\frac{1}{2}\int_{\varOmega}|\nabla\theta_{\varepsilon}^{-}|^{2} \mathrm{div}\,\mathbf{v}_{\varepsilon}\,\mathrm{d}\mathbf{x}\leq\|\theta_{\varepsilon}^ {-}\|_{L^{2}(\varOmega)}^{2}\|\mathrm{div}\,\mathbf{v}_{\varepsilon}\|_{L^{\infty} (\varOmega)}\,. 
\tag{3.57}\] Recalling the assumption \(\theta_{0}\geq 0\) so that \(\theta_{0,\varepsilon}^{-}=0\) and exploiting the information \(\mathbf{v}_{\varepsilon}\in L^{p}(I;W^{1,p}(\varOmega;\mathbb{R}^{d}))\) with \(p>d\) inherited from (3.31a), by the Gronwall inequality we obtain \(\|\theta_{\varepsilon}^{-}\|_{L^{\infty}(I;L^{2}(\varOmega))}=0\), so that \(\theta_{\varepsilon}\geq 0\) a.e. on \(I\!\times\!\varOmega\). Having proved non-negativity of temperature, we can now execute the strategy based of the \(L^{1}\)-theory for the heat equation which led to the estimates (3.18)-(3.19), i.e. here \[\|\mathbf{v}_{\varepsilon}\|_{L^{\infty}(I;L^{2}(\varOmega;\mathbb{R} ^{d}))\,\cap\,L^{p}(I;W^{2,p}(\varOmega;\mathbb{R}^{d}))}\leq C,\quad\|\mathbf{m}_ {\varepsilon}\|_{L^{\infty}(I;H^{1}(\varOmega;\mathbb{R}^{d}))}\leq C, \tag{3.58a}\] \[\|\mathbf{F}_{\varepsilon}\|_{L^{\infty}(I;W^{1,r}(\varOmega;\mathbb{R} ^{d\times d}))}\leq C_{r}\,,\quad\Big{\|}\frac{1}{\det\mathbf{F}_{\varepsilon}}\Big{\|}_ {L^{\infty}(I;W^{1,r}(\varOmega))}\leq C_{r}\,,\] (3.58b) \[\|\varrho_{\varepsilon}\|_{L^{\infty}(I;W^{1,r}(\varOmega))}\leq C _{r}\,,\quad\Big{\|}\frac{1}{\varrho_{\varepsilon}}\Big{\|}_{L^{\infty}(I;W^{1, r}(\varOmega))}\leq C_{r}\quad\text{ for any }1\leq r<+\infty,\] (3.58c) \[\|w_{\varepsilon}\|_{L^{\infty}(I;L^{1}(\varOmega))}\leq C\,,\quad \text{and}\quad\|\theta_{\varepsilon}\|_{L^{\infty}(I;L^{1}(\varOmega))}\leq C\,. \tag{3.58d}\] By interpolation exploiting the Gagliardo-Nirenberg inequality between \(L^{2}(\varOmega)\) and \(W^{2,p}(\varOmega)\), we have \(\|\cdot\|_{L^{\infty}(\varOmega)}\leq C\|\cdot\|_{L^{2}(\varOmega)}^{r}\|\cdot\|_{ W^{2,p}(\varOmega)}^{1-r}\) with \(0<r<pd/(pd+4p-2d)\). Using also Korn's inequality, from (3.58a) we thus obtain the estimate \[\|\mathbf{v}_{\varepsilon}\|_{L^{s}(I;L^{\infty}(\varOmega;\mathbb{R} ^{d}))}\leq C_{s}\quad\text{ with }\ 1\leq s<\frac{p(pd\!+\!4p\!-\!2d)}{4p-2d}\,. \tag{3.58e}\] By comparison from \(\frac{\partial}{\partial t}\varrho_{\varepsilon}=(\mathrm{div}\mathbf{v}_{\varepsilon}) \varrho_{\varepsilon}-\mathbf{v}_{\varepsilon}\!\cdot\!\nabla\varrho_{\varepsilon}\), from \(\frac{\partial}{\partial t}\mathbf{F}_{\varepsilon}=(\nabla\mathbf{v}_{\varepsilon})\mathbf{F}_ {\varepsilon}-(\mathbf{v}_{\varepsilon}\!\cdot\!\nabla)\mathbf{F}_{\varepsilon}\), and from \(\frac{\partial}{\partial t}\mathbf{m}_{\varepsilon}=\mathbf{r}_{\varepsilon}+\mathrm{skw}( \nabla\mathbf{v}_{\varepsilon})\mathbf{m}_{\varepsilon}-(\mathbf{v}_{\varepsilon}\!\cdot\! \nabla)\mathbf{m}_{\varepsilon}\), we also have \[\Big{\|}\frac{\partial\varrho_{\varepsilon}}{\partial t}\Big{\|}_{L^{p}(I;L^{r}( \varOmega))}\leq C\,,\quad\Big{\|}\frac{\partial\mathbf{F}_{\varepsilon}}{ \partial t}\Big{\|}_{L^{p}(I;L^{r}(\varOmega;\mathbb{R}^{d\times d}))}\leq C\,, \text{ and }\ \Big{\|}\frac{\partial\mathbf{m}_{\varepsilon}}{\partial t}\Big{\|}_{L^{2}(I \times\varOmega;\mathbb{R}^{d})}\leq C\,. \tag{3.58f}\] The estimates (3.58d) are naturally weaker than (3.40) but, importantly, are uniform with respect to \(\varepsilon>0\), in contrast to (3.40) which is not uniform in this sense. The total energy balance (2.50) holds for \(\varepsilon\)-solution only as an inequality because the heat sources do not exactly cancel; more in detail, while the regularized adiabatic heat again cancels, the dissipative heat terms are regularized (and smaller) in (3.25f) and in (3.26c) but the corresponding viscous stress in (3.25b) and force in (3.26a) are not regularized. 
This inequality still allows to execute the above mentioned estimation. Let us also note that the extension (3.10) becomes now inactive and we can work with the original data defined for non-negative \(\theta\) only. _Step 7 - further a-priori estimates_: Furthermore, having \(\boldsymbol{r}_{\varepsilon}\) estimated in \(L^{2}(I{\times}\Omega;\mathbb{R}^{d})\) uniformly with respect to \(\varepsilon\), as in (3.20) we have \[\Delta\boldsymbol{m}_{\varepsilon} =\frac{\det\boldsymbol{F}_{\varepsilon}}{\kappa_{\lambda}( \boldsymbol{F}_{\varepsilon})}\bigg{(}\tau\boldsymbol{r}_{\varepsilon}+h_{ \mathrm{c}}(\boldsymbol{F}_{\varepsilon},\theta_{\varepsilon})\mathrm{dir}_{ \varepsilon}(\boldsymbol{r}_{\varepsilon})-\frac{\boldsymbol{m}_{\varepsilon }{\times}\boldsymbol{r}_{\varepsilon}}{\gamma(\boldsymbol{F}_{\varepsilon}, \boldsymbol{m}_{\varepsilon},\theta_{\varepsilon})}-\boldsymbol{h}_{\mathrm{ ext}}-\nabla u_{\varepsilon}\] \[\quad+\frac{\boldsymbol{\varphi}^{\prime}_{\boldsymbol{m}}( \boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{\varepsilon})+\dot{\zeta}^{ \prime}_{\boldsymbol{m}}(\boldsymbol{F}_{\varepsilon},\boldsymbol{m}_{ \varepsilon},\theta_{\varepsilon})}{\det\boldsymbol{F}_{\varepsilon}}-\Big{(} \frac{\kappa^{\prime}_{\lambda}(\boldsymbol{F}_{\varepsilon})}{\det \boldsymbol{F}_{\varepsilon}}-\frac{\kappa_{\lambda}(\boldsymbol{F}_{ \varepsilon})\mathrm{Cof}\boldsymbol{F}_{\varepsilon}}{\det\boldsymbol{F}_{ \varepsilon}^{2}}\Big{)}\!:\!(\nabla\boldsymbol{F}_{\varepsilon}{\otimes} \nabla\boldsymbol{m}_{\varepsilon})\bigg{)}\,,\] so that we can estimate \(\nabla^{2}\boldsymbol{m}_{\varepsilon}\) by a \(H^{1}\)-regularity as (3.21), i.e. now \[\left\|\nabla^{2}\boldsymbol{m}_{\varepsilon}\right\|_{L^{2}(I{\times}\Omega; \mathbb{R}^{d{\times}d{\times}d})}\leq C\,. \tag{3.59}\] Furthermore, we are to prove an estimate of \(\nabla\theta_{\varepsilon}\) based on the test of the heat equation (3.25f) by \(\chi_{\zeta}(\theta_{\varepsilon})\) with an increasing nonlinear function \(\chi_{\zeta}:[0,+\infty)\to[0,1]\) defined as \[\chi_{\zeta}(\theta):=1-\frac{1}{(1{+}\theta)^{\zeta}}\,,\ \ \ \ \zeta>0\,, \tag{3.60}\] simplifying the original idea of L. Boccardo and T. Gallouet [8, 9] in the spirit of [18], expanding the estimation strategy in [26, Sect. 8.2]. Importantly, here we have \(\chi_{\zeta}(\theta_{\varepsilon}(t,\cdot))\in H^{1}(\Omega)\), hence it is a legal test function, because \(0\leq\theta_{\varepsilon}(t,\cdot)\in H^{1}(\Omega)\) has already been proved and because \(\chi_{\zeta}\) is Lipschitz continuous on \([0,+\infty)\). We consider \(1\leq\mu<2\) and estimate the \(L^{\mu}\)-norm of \(\nabla\theta_{\varepsilon}\) by Holder's inequality as \[\int_{0}^{T}\!\!\!\int_{\Omega}|\nabla\theta_{\varepsilon}|^{\mu}\,\mathrm{d} \boldsymbol{x}\mathrm{d}t\leq C_{1}\bigg{(}\underbrace{\int_{0}^{T}\left\|1{+} \theta_{\varepsilon}(t,\cdot)\right\|_{L^{(1+\zeta)\mu/(2-\mu)}(\Omega)}^{(1 +\zeta)\mu/(2-\mu)}\mathrm{d}t}_{=:I^{(1)}_{\mu,\zeta}(\theta_{\varepsilon})} \bigg{)}^{1-\mu/2}\bigg{(}\underbrace{\int_{0}^{T}\!\!\!\int_{\Omega}\chi^{ \prime}_{\zeta}(\theta_{\varepsilon})|\nabla\theta_{\varepsilon}|^{2}}_{=:I^{ (2)}_{\zeta}(\theta_{\varepsilon})}\bigg{)}^{\mu/2}. \tag{3.61}\] with \(\chi_{\zeta}\) from (3.60) so that \(\chi^{\prime}_{\zeta}(\theta)=\zeta/(1{+}\theta)^{1+\zeta}\) and with a constant \(C_{1}\) dependent on \(\zeta\), \(\mu\), and \(T\). 
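The Hölder step behind (3.61) is elementary but perhaps worth recording: splitting \(|\nabla\theta_{\varepsilon}|^{\mu}=(1{+}\theta_{\varepsilon})^{(1+\zeta)\mu/2}\,|\nabla\theta_{\varepsilon}|^{\mu}(1{+}\theta_{\varepsilon})^{-(1+\zeta)\mu/2}\) and applying Hölder's inequality with the exponents \(2/(2{-}\mu)\) and \(2/\mu\),
\[
\int_{0}^{T}\!\!\!\int_{\Omega}|\nabla\theta_{\varepsilon}|^{\mu}\,\mathrm{d}\boldsymbol{x}\mathrm{d}t
\le\bigg(\int_{0}^{T}\!\!\!\int_{\Omega}(1{+}\theta_{\varepsilon})^{\frac{(1+\zeta)\mu}{2-\mu}}\,\mathrm{d}\boldsymbol{x}\mathrm{d}t\bigg)^{\!1-\mu/2}
\bigg(\int_{0}^{T}\!\!\!\int_{\Omega}\frac{|\nabla\theta_{\varepsilon}|^{2}}{(1{+}\theta_{\varepsilon})^{1+\zeta}}\,\mathrm{d}\boldsymbol{x}\mathrm{d}t\bigg)^{\!\mu/2},
\]
where the last integral equals \(\frac1\zeta\int_{0}^{T}\!\!\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{\varepsilon})|\nabla\theta_{\varepsilon}|^{2}\,\mathrm{d}\boldsymbol{x}\mathrm{d}t=\frac1\zeta I^{(2)}_{\zeta}(\theta_{\varepsilon})\) because \(\chi^{\prime}_{\zeta}(\theta)=\zeta/(1{+}\theta)^{1+\zeta}\); the factor \(\zeta^{-\mu/2}\) is absorbed into \(C_{1}\).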
Then we interpolate the Lebesgue space \(L^{(1+\zeta)\mu/(2-\mu)}(\Omega)\) between \(W^{1,\mu}(\Omega)\) and \(L^{1}(\Omega)\) in order to exploit the already obtained \(L^{\infty}(I;L^{1}(\Omega))\)-estimate in (3.58d). More specifically, by the Gagliardo-Nirenberg inequality, we obtain \[\left\|1{+}\theta_{\varepsilon}(t,\cdot)\right\|_{L^{\mu/\sigma}(\Omega)}^{\mu /\sigma}\leq C_{2}\Big{(}1+\left\|\nabla\theta_{\varepsilon}(t,\cdot)\right\|_ {L^{\mu}(\Omega;\mathbb{R}^{d})}\Big{)}^{\mu}\ \ \ \ \text{with}\ \ \sigma=\frac{2{-}\mu}{1{+}\zeta} \tag{3.62}\] with \(C_{2}\) depending on \(\sigma\), \(C_{1}\), and \(C\) from (3.58d), so that \(I^{(1)}_{\mu,\zeta}(\theta_{\varepsilon})\leq C_{3}(1+\int_{\Omega}^{T}\!\int_{ \Omega}\big{|}\nabla\theta_{\varepsilon}\big{|}^{\mu}\,\mathrm{d}\mathbf{x} \mathrm{d}t)\) with \(C_{3}\) depending on \(C_{2}\). Combining it with (3.61), we obtain \[\|\nabla\theta_{\varepsilon}\|^{\mu}_{L^{\mu}(I\times\Omega;\mathbb{R}^{d})}= C_{1}C_{3}\big{(}1+\|\nabla\theta_{\varepsilon}\|^{\mu}_{L^{\mu}(I\times\Omega)} \big{)}^{1-\mu/2}I^{(2)}_{\mu,\zeta}(\theta_{\varepsilon})^{\mu/2}\;. \tag{3.63}\] Furthermore, we estimate \(I^{(2)}_{\zeta}(\theta_{\varepsilon})\) in (3.61). Let us denote by \(\mathscr{X}_{\zeta}\) a primitive function to \(\theta\mapsto\chi_{\zeta}(\theta)\omega^{\prime}_{\theta}(\mathbf{F},\mathbf{m},\theta)\) depending smoothly on \(\mathbf{F}\), specifically \[\mathscr{X}_{\zeta}(\mathbf{F},\mathbf{m},\theta)=\int_{0}^{1}\!\!\theta\chi_{\zeta}( r\theta)\omega^{\prime}_{\theta}(\mathbf{F},\mathbf{m},r\theta)\,\mathrm{d}r\,. \tag{3.64}\] Like (3.34) but using partial (not convective) time derivative, we have now the calculus \[\int_{\Omega}\!\chi_{\zeta}(\theta)\frac{\partial w}{\partial t} \,\mathrm{d}\mathbf{x}= \int_{\Omega}\!\chi_{\zeta}(\theta)\omega^{\prime}_{\theta}(\mathbf{F},\mathbf{m}, \theta)\frac{\partial\theta}{\partial t}+\chi_{\zeta}(\theta)\Big{(}\omega^{ \prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\!:\!\frac{\partial\mathbf{F}}{\partial t}+ \omega^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)\!\cdot\!\frac{\partial\mathbf{m}}{ \partial t}\Big{)}\,\mathrm{d}\mathbf{x}\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\mathscr{X}_{\zeta}( \mathbf{F},\mathbf{m},\theta)\,\mathrm{d}\mathbf{x}-\int_{\Omega}\big{[}\mathscr{X}_{ \zeta}\big{]}^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\!:\!\frac{\partial\mathbf{F }}{\partial t}+\big{[}\mathscr{X}_{\zeta}\big{]}^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{ m},\theta)\!:\!\frac{\partial\mathbf{m}}{\partial t}\,\mathrm{d}\mathbf{x}\] \[\text{where}\;\;\;\mathscr{X}_{\zeta}(\mathbf{F},\mathbf{m},\theta):= \mathscr{X}_{\zeta}(\mathbf{F},\mathbf{m},\theta)-\chi_{\zeta}(\theta)\omega(\mathbf{F}, \mathbf{m},\theta)\,. \tag{3.65}\] In view of (3.64), it holds \([\mathscr{X}_{\zeta}]^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)=\int_{0}^{1} \theta\chi_{\zeta}(r\theta)\omega^{\prime\prime}_{\mathbf{F}\theta}(\mathbf{F},\mathbf{m}, r\theta)\,\mathrm{d}r-\chi_{\zeta}(\theta)\omega^{\prime}_{\mathbf{F}}(\mathbf{F},\mathbf{m},\theta)\) and \([\mathscr{X}_{\zeta}]^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)=\int_{0}^{1} \theta\chi_{\zeta}(r\theta)\omega^{\prime\prime}_{\mathbf{m}\theta}(\mathbf{F},\mathbf{ m},r\theta)\,\mathrm{d}r-\chi_{\zeta}(\theta)\omega^{\prime}_{\mathbf{m}}(\mathbf{F},\mathbf{m},\theta)\). 
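Let us note that, substituting \(s=r\theta\) in (3.64), one can equivalently write \(\mathscr{X}_{\zeta}(\boldsymbol{F},\boldsymbol{m},\theta)=\int_{0}^{\theta}\chi_{\zeta}(s)\,\omega^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},s)\,\mathrm{d}s\), whence indeed
\[
\big[\mathscr{X}_{\zeta}\big]^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},\theta)=\chi_{\zeta}(\theta)\,\omega^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},\theta)\,,
\qquad\text{ and analogously }\qquad
\widehat{\omega}(\boldsymbol{F},\boldsymbol{m},\theta)=\int_{0}^{\theta}\!s\,\omega^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},s)\,\mathrm{d}s
\]
for (3.33), so that \([\widehat{\omega}]^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},\theta)=\theta\,\omega^{\prime}_{\theta}(\boldsymbol{F},\boldsymbol{m},\theta)\); these are exactly the primitive-function properties used in the calculus (3.34) and (3.65).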
Altogether, testing (3.25f) with (3.26c) by \(\chi_{\zeta}(\theta_{\varepsilon})\) gives \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\!\mathscr{X}_{\zeta}( \mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\,\mathrm{d} \mathbf{x}+\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{\varepsilon})\mathscr{X}( \mathbf{F}_{\varepsilon},\theta_{\varepsilon})|\nabla\theta_{\varepsilon}|^{2}\, \mathrm{d}\mathbf{x}\] \[= \int_{\Omega}\!\left(\xi_{\varepsilon}\big{(}\mathbf{F}_{\varepsilon},\theta_{\varepsilon};\mathbf{e}(\mathbf{v}_{\varepsilon}),\nabla^{2}\mathbf{v}_{ \varepsilon},\mathbf{r}_{\varepsilon}\big{)}\,\chi_{\zeta}(\theta_{\varepsilon})+ \omega(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\chi^{ \prime}_{\zeta}(\theta_{\varepsilon})\mathbf{v}_{\varepsilon}\!\cdot\!\nabla \theta_{\varepsilon}\right.\] \[+\big{[}\mathscr{X}_{\zeta}\big{]}^{\prime}_{\mathbf{F}}(\mathbf{F}_{ \varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\!:\!\frac{\partial\mathbf{F }_{\varepsilon}}{\partial t}+\big{[}\mathscr{X}_{\zeta}\big{]}^{\prime}_{\mathbf{ m}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\!\cdot\!\frac{ \partial\mathbf{m}_{\varepsilon}}{\partial t}\] \[+\chi_{\zeta}(\theta_{\varepsilon})\frac{\pi_{\lambda}(\mathbf{F}_{ \varepsilon})\zeta^{\prime}_{\mathbf{m}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon}, \theta_{\varepsilon})}{(1\!\!+\!\varepsilon\theta_{\varepsilon}^{1/2})\det\mathbf{F }_{\varepsilon}}\!\cdot\!\big{(}\mathbf{r}_{\varepsilon}\!-\!\mathrm{skw}(\nabla \mathbf{v}_{\varepsilon})\mathbf{m}_{\varepsilon}\big{)}\] \[+\chi_{\zeta}(\theta_{\varepsilon})\frac{\pi_{\lambda}(\mathbf{F}_{ \varepsilon})\zeta^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon}, \theta_{\varepsilon})\mathbf{F}_{\varepsilon}^{\top}\!\!:\!\mathbf{e}(\mathbf{v}_{ \varepsilon})}{(1\!\!+\!\varepsilon\theta_{\varepsilon})\det\mathbf{F}_{ \varepsilon}}\bigg{)}\,\mathrm{d}\mathbf{x}+\!\int_{\Gamma}\!\Big{(}h_{ \varepsilon}(\theta_{\varepsilon})\!+\!\frac{\nu_{\mathbf{\flat}}|\mathbf{v}_{ \varepsilon}|^{p}}{2\!\!+\!\!\varepsilon|\mathbf{v}_{\varepsilon}|^{p}}\Big{)}\chi_{ \zeta}(\theta_{\varepsilon})\,\mathrm{d}S\,. \tag{3.66}\] We realize that \(\chi^{\prime}_{\zeta}(\theta)=\zeta/(1+\theta)^{1+\zeta}\) as used already in (3.61) and that \(\mathscr{X}_{\zeta}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon })\geq c_{K}\theta_{\varepsilon}\) with some \(c_{K}\) for \(\theta_{\varepsilon}\geq 0\) due to (3.5c); again \(K\) is a compact subset of \(\mathrm{GL}^{+}(d)\) related here with the already proved estimates (3.58b). The convective term in (3.66) is a bit delicate. For any \(\delta>0\), it can be estimated by Holder inequality as \[\int_{\Omega}w_{\varepsilon}\chi^{\prime}_{\zeta}(\theta_{ \varepsilon})\mathbf{v}_{\varepsilon}\!\cdot\!\nabla\theta_{\varepsilon}\, \mathrm{d}\mathbf{x} \leq\frac{1}{\delta}\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{ \varepsilon})|\mathbf{v}_{\varepsilon}|^{2}w_{\varepsilon}^{2}\,\mathrm{d}\mathbf{x}+ \delta\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{\varepsilon})|\nabla\theta_{ \varepsilon}|^{2}\,\mathrm{d}\mathbf{x}\] \[=\frac{1}{\delta}\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{ \varepsilon})|\mathbf{v}_{\varepsilon}|^{2}w_{\varepsilon}^{2}\,\mathrm{d}\mathbf{x}+ \delta I^{(2)}_{\zeta}(\theta_{\varepsilon})\,. 
\tag{3.67}\] Denoting by \(0<\mathscr{X}_{0}=\inf_{\mathbf{F},\theta}\mathscr{X}(\mathbf{F},\theta)\), and using (3.66) integrated over \(I=[0,T]\), we further estimate: \[I^{(2)}_{\zeta}(\theta_{\varepsilon})=\frac{1}{\zeta}\int_{0}^{T}\!\!\!\int_{\Omega}\chi^{\prime}_{\zeta}(\theta_{\varepsilon})|\nabla\theta_{\varepsilon}|^{2}\,\mathrm{d}\mathbf{x}\mathrm{d}t\leq\frac{1}{\mathscr{X}_{0}\zeta}\int_{0}^{T}\!\!\int_{\Omega}\!\mathscr{X}(\mathbf{F}_{\varepsilon},\theta_{\varepsilon})\nabla\theta_{\varepsilon}\!\cdot\!\nabla\chi_{\zeta}(\theta_{\varepsilon})\,\mathrm{d}\mathbf{x}\mathrm{d}t\] \[\leq\frac{1}{\mathscr{X}_{0}\zeta}\bigg{(}\int_{0}^{T}\!\!\int_{\Omega}\!\mathscr{X}(\mathbf{F}_{\varepsilon},\theta_{\varepsilon})\nabla\theta_{\varepsilon}\!\cdot\!\nabla\chi_{\zeta}(\theta_{\varepsilon})\,\mathrm{d}\mathbf{x}\mathrm{d}t+\int_{\Omega}\!\mathscr{X}_{\zeta}(\mathbf{F}_{\varepsilon}(T),\mathbf{m}_{\varepsilon}(T),\theta_{\varepsilon}(T))\,\mathrm{d}\mathbf{x}\bigg{)}\] \[=\frac{1}{\mathscr{X}_{0}\zeta}\bigg{(}\int_{\Omega}\!\mathscr{X}_{\zeta}(\mathbf{F}_{0},\mathbf{m}_{0},\theta_{0,\varepsilon})\,\mathrm{d}\mathbf{x}+\!\int_{0}^{T}\!\!\!\int_{\Omega}\!\!\bigg{(}\xi_{\varepsilon}\big{(}\mathbf{F}_{\varepsilon},\theta_{\varepsilon};\mathbf{e}(\mathbf{v}_{\varepsilon}),\nabla^{2}\mathbf{v}_{\varepsilon},\mathbf{r}_{\varepsilon}\big{)}\chi_{\zeta}(\theta_{\varepsilon})\] \[\quad+\big{[}\widehat{\mathscr{X}}_{\zeta}\big{]}^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\!:\!\frac{\partial\mathbf{F}_{\varepsilon}}{\partial t}+\big{[}\widehat{\mathscr{X}}_{\zeta}\big{]}^{\prime}_{\mathbf{m}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\!\cdot\!\frac{\partial\mathbf{m}_{\varepsilon}}{\partial t}+\omega(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\chi^{\prime}_{\zeta}(\theta_{\varepsilon})\mathbf{v}_{\varepsilon}\!\cdot\!\nabla\theta_{\varepsilon}\] \[\quad+\chi_{\zeta}(\theta_{\varepsilon})\frac{\pi_{\lambda}(\mathbf{F}_{\varepsilon})\zeta^{\prime}_{\mathbf{m}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})}{(1\!+\!\varepsilon\theta_{\varepsilon}^{1/2})\det\mathbf{F}_{\varepsilon}}\!\cdot\!\big{(}\mathbf{r}_{\varepsilon}\!-\!\mathrm{skw}(\nabla\mathbf{v}_{\varepsilon})\mathbf{m}_{\varepsilon}\big{)}\] \[\quad+\chi_{\zeta}(\theta_{\varepsilon})\frac{\pi_{\lambda}(\mathbf{F}_{\varepsilon})\zeta^{\prime}_{\mathbf{F}}(\mathbf{F}_{\varepsilon},\mathbf{m}_{\varepsilon},\theta_{\varepsilon})\mathbf{F}_{\varepsilon}^{\top}\!\!:\!\mathbf{e}(\mathbf{v}_{\varepsilon})}{(1\!+\!\varepsilon\theta_{\varepsilon})\det\mathbf{F}_{\varepsilon}}\bigg{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t+\int_{0}^{T}\!\!\int_{\Gamma}\!\Big{(}h_{\varepsilon}(\theta_{\varepsilon})\!+\!\frac{\nu_{\mathbf{\flat}}|\mathbf{v}_{\varepsilon}|^{p}}{2\!+\!\varepsilon|\mathbf{v}_{\varepsilon}|^{p}}\Big{)}\chi_{\zeta}(\theta_{\varepsilon})\,\mathrm{d}S\mathrm{d}t\bigg{)}\,.\] The right-hand side can now be estimated by means of the bounds already at our disposal, cf. (3.58) and (3.62), while the convective term is handled by (3.67) with \(\delta>0\) chosen small enough for the contribution \(\delta I^{(2)}_{\zeta}(\theta_{\varepsilon})\) to be absorbed in the left-hand side. Combined with (3.63), this leads to an inequality, labelled (3.70), between two powers of \(\|\nabla\theta_{\varepsilon}\|_{L^{\mu}(I\times\Omega;\mathbb{R}^{d})}\). Recalling \(\sigma:=(2{-}\mu)/(1{+}\zeta)\) from (3.62) with \(\zeta>0\) arbitrarily small, one gets, after some algebra, the condition \(\mu<(d{+}2)/(d{+}1)\). Obviously, for \(r\) big enough (in particular if \(r>d\) as assumed), the exponent on the left-hand side of (3.70) is higher than the exponent on the right-hand side, which gives a bound for \(\nabla\theta_{\varepsilon}\) in \(L^{\mu}(I\times\Omega;\mathbb{R}^{d})\).
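For completeness, one way to carry out the mentioned algebra (a sketch of the exponent bookkeeping, under the assumption that (3.62) rests on the standard Gagliardo-Nirenberg interpolation between \(W^{1,\mu}(\Omega)\) and \(L^{1}(\Omega)\); the computation below is only meant as a guide): the interpolation inequality \[\|u\|_{L^{q}(\Omega)}\leq C\|u\|_{W^{1,\mu}(\Omega)}^{\lambda}\|u\|_{L^{1}(\Omega)}^{1-\lambda}\ \ \ \text{with}\ \ \frac{1}{q}=\lambda\Big{(}\frac{1}{\mu}-\frac{1}{d}\Big{)}+(1-\lambda)\] is applied with \(q=\mu/\sigma\) and the \(L^{1}\)-norm controlled by (3.58d); obtaining the exponent \(\mu\) on the right-hand side of (3.62) requires \(\lambda q\leq\mu\), i.e. \(\lambda\leq\sigma\), which by the above relation between \(q\) and \(\lambda\) amounts to \[\sigma\geq\frac{d}{d{+}1}\qquad\Longleftrightarrow\qquad\mu\leq 2-\frac{d(1{+}\zeta)}{d{+}1}=\frac{d{+}2-d\zeta}{d{+}1}\,,\] so that letting \(\zeta>0\) be arbitrarily small indeed covers every \(\mu<(d{+}2)/(d{+}1)\).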
Altogether, we proved \[\|\theta_{\varepsilon}\|_{L^{\infty}(I;L^{1}(\Omega))\,\cap\,L^{\mu}(I;W^{1, \mu}(\Omega))}\leq C_{\mu}\quad\text{with}\ \ 1\leq\mu<\frac{d{+}2}{d{+}1}\,.\] (3.71a) Next, we again exploit the calculus ( 3.41 ) now omitting the index \[k\], with \[\nabla\boldsymbol{F}_{\varepsilon}\] bounded in \[L^{\infty}(I;L^{r}(\Omega;\mathbb{R}^{d\times d\times d}))\] and \[\nabla\boldsymbol{m}_{\varepsilon}\] bounded in \[L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d\times d}))\] and relying on the assumption ( 3.5d ), we have also the bound on \[\nabla w_{\varepsilon}\] in \[L^{\mu}(I;L^{\mu^{*}d/(\mu^{*}+d)}(\Omega;\mathbb{R}^{d}))\], so that \[\|w_{\varepsilon}\|_{L^{\infty}(I;L^{1}(\Omega))\,\cap\,L^{\mu}(I;W^{1,\mu^{*} d/(\mu^{*}+d)}(\Omega))}\leq C_{\mu}\,. \tag{3.71b}\] _Step 8: Limit passage for \(\varepsilon\to 0\)_. We use the Banach selection principle as in Step 4, now also taking (3.58) and (3.71) into account instead of the estimates (3.31) and (3.40). For some subsequence and some \((\varrho,\boldsymbol{v},\boldsymbol{F},\boldsymbol{m},\theta)\), we now have \[\varrho_{\varepsilon}\to\varrho\] weakly* in \[L^{\infty}(I;W^{1,r}(\Omega))\,\cap\,W^{1,p}(I;L^{r}(\Omega))\] \[\text{and strongly in }C(I{\times}\overline{\Omega})\,,\] (3.72a) \[\boldsymbol{v}_{\varepsilon}\to\boldsymbol{v}\] weakly* in \[L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d}))\cap L^{2}(I;W^{2,p}(\Omega; \mathbb{R}^{d}))\], (3.72b) \[\boldsymbol{F}_{\varepsilon}\to\boldsymbol{F}\] weakly* in \[L^{\infty}(I;W^{1,r}(\Omega;\mathbb{R}^{d\times d}))\,\cap\,W^{1,p}(I;L^{r}( \Omega;\mathbb{R}^{d\times d}))\] \[\text{and strongly in }C(I{\times}\overline{\Omega};\mathbb{R}^{d \times d})\], (3.72c) \[\boldsymbol{m}_{\varepsilon}\to\boldsymbol{m}\] weakly* in \[L^{\infty}(I;H^{1}(\Omega;\mathbb{R}^{d}))\,\cap\,H^{1}(I;L^{2}(\Omega; \mathbb{R}^{d}))\], (3.72d) \[\theta_{\varepsilon}\to\theta\] weakly* in \[L^{\mu}(I;W^{1,\mu}(\Omega)),\ 1\leq\mu<(d{+}2)/(d{+}1)\]. (3.72e) Like ( 3.45a ), by the Aubin-Lions theorem, we now have \[w_{\varepsilon}\to w\] strongly in \[L^{c}(I{\times}\Omega)\], \[1\leq c<1{+}2/d\], ( 3.72f ) and then, using again continuity of \[(\boldsymbol{F},\boldsymbol{m},w)\mapsto[\omega(\boldsymbol{F},\boldsymbol{m},\cdot)]^{-1}(w)\] as in ( 3.45b ), we have \[\theta_{\varepsilon}\to\theta=[\omega(\boldsymbol{F},\boldsymbol{m},\cdot)]^{ -1}(w)\] strongly in \[L^{c}(I{\times}\Omega)\], \[1\leq c<1{+}2/d\]. ( 3.72g ) By the continuity of \[\varphi^{\prime}_{\boldsymbol{F}}\], \[\zeta^{\prime}_{\boldsymbol{F}}\], \[\det\], and \[\mathcal{K}\], we have also \[\mathcal{K}(\boldsymbol{F}_{\varepsilon},\theta_{\varepsilon}) \to\mathcal{K}(\boldsymbol{F},\theta)\] strongly in \[L^{c}(I{\times}\Omega)\] for any \[1\leq c<\infty\], and ( 3.72h ) \[\boldsymbol{T}_{\lambda,\varepsilon}\to\boldsymbol{T}_{\lambda}=\frac{[\pi_{ \lambda}\varphi]^{\prime}_{\boldsymbol{F}}(\boldsymbol{F},\boldsymbol{m}){+} \pi_{\lambda}(\boldsymbol{F})\zeta^{\prime}_{\boldsymbol{F}}(\boldsymbol{F}, \boldsymbol{m},\theta)}{\det\boldsymbol{F}}\boldsymbol{F}^{\top}\] strongly in \[L^{1}(I{\times}\Omega;\mathbb{R}^{d\times d}_{\text{sym}})\]. ( 3.72i ) The momentum equation (3.25b) (still regularized by \(\varepsilon\)) is to be treated like in Step 4. Here we exploit the information about \(\frac{\partial}{\partial t}(\varrho_{\varepsilon}\boldsymbol{v}_{\varepsilon})\) in \(L^{q^{\prime}}(I;W^{1,q}(\Omega;\mathbb{R}^{d})^{*})+L^{p^{\prime}}(I;W^{2,p}( \Omega;\mathbb{R}^{d})^{*})\) obtained like in (3.46); here we used also (3.58e). 
By the Aubin-Lions compact-embedding theorem, we then obtain \[\varrho_{\varepsilon}\,\boldsymbol{v}_{\varepsilon}\to\varrho\boldsymbol{v}\qquad\qquad\text{ strongly in $L^{s}(I{\times}\Omega;\mathbb{R}^{d})$}\quad\text{with $s$ from (3.58e)}\,. \tag{3.73}\] In fact, the argumentation (3.52) now with \(C_{r}\) instead of \(C_{r,\varepsilon}\) is to be slightly modified by using \((\varrho_{\varepsilon},\boldsymbol{v}_{\varepsilon},\boldsymbol{T}_{\lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon},\boldsymbol{v}_{\varepsilon}),\theta_{\varepsilon})\) in place of \((\varrho_{\varepsilon k},\boldsymbol{v}_{\varepsilon k},\boldsymbol{T}_{\lambda,\varepsilon}(\boldsymbol{F}_{\varepsilon k},\boldsymbol{v}_{\varepsilon k}),\theta_{\varepsilon k})\) and with \(\widetilde{\boldsymbol{v}}_{k}\) replaced by \(\boldsymbol{v}\). Also, \(\int_{0}^{T}\!\!\int_{\Omega}\frac{\partial}{\partial t}(\varrho_{\varepsilon k}\boldsymbol{v}_{\varepsilon k})\!\cdot\!\widetilde{\boldsymbol{v}}_{k}\,\mathrm{d}\boldsymbol{x}\mathrm{d}t\) is to be replaced by the duality \(\langle\frac{\partial}{\partial t}(\varrho_{\varepsilon}\boldsymbol{v}_{\varepsilon}),\boldsymbol{v}\rangle\) with \(\langle\cdot,\cdot\rangle\) denoting here the duality between \(L^{q^{\prime}}(I;W^{1,q}(\Omega;\mathbb{R}^{d})^{*})+L^{p^{\prime}}(I;W^{2,p}(\Omega;\mathbb{R}^{d})^{*})\) and \(L^{q}(I;W^{1,q}(\Omega;\mathbb{R}^{d}))\cap L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\). Limit passage in the heat equation (3.25f) is then simple. Altogether, we proved that \((\varrho,\boldsymbol{v},\boldsymbol{F},\boldsymbol{m},\theta)\) solves in the weak sense the problem (3.25)-(3.26) with \(\varepsilon=0\) and with \(\boldsymbol{T}_{\lambda}\) from (3.72i) in place of \(\boldsymbol{T}_{\lambda,\varepsilon}\). _Step 9: The original problem_. Let us note that the limit \(\boldsymbol{F}\) lives in \(L^{\infty}(I;W^{1,r}(\Omega;\mathbb{R}^{d\times d}))\,\cap\,W^{1,p}(I;L^{\infty}(\Omega;\mathbb{R}^{d\times d}))\), cf. (3.58b,f), and this space is embedded into \(C(I{\times}\overline{\Omega};\mathbb{R}^{d\times d})\) if \(r>d\). Therefore \(\boldsymbol{F}\) and its determinant evolve continuously in time, being valued respectively in \(C(\overline{\Omega};\mathbb{R}^{d\times d})\) and \(C(\overline{\Omega})\). Let us recall that the initial condition \(\boldsymbol{F}_{0}\) complies with the bounds (3.22) and we used this \(\boldsymbol{F}_{0}\) also for the \(\lambda\)-regularized system. Therefore \(\boldsymbol{F}\) satisfies these bounds not only at \(t=0\) but also at least for small times. Thus, in view of the choice (3.22) of \(\lambda\), this means that the \(\lambda\)-regularization is inactive and \((\varrho,\boldsymbol{v},\boldsymbol{F},\boldsymbol{m},\theta)\) solves, at least for a small time, the original nonregularized problem (3.25)-(3.26) for which the a priori \(L^{\infty}\)-bounds (3.19) hold. By a continuation argument, we see that the \(\lambda\)-regularization therefore remains inactive within the whole evolution of \((\varrho,\boldsymbol{v},\boldsymbol{F},\boldsymbol{m},\theta)\) on the whole time interval \(I\). _Step 10: Energy balances_. It is now important to verify that the tests, and all the subsequent calculations leading to the energy balances (2.49) and (2.50) integrated over a current time interval \([0,t]\), are indeed legitimate.
In the calculus (2.34), we rely on the fact that \([\boldsymbol{\varphi}(\boldsymbol{F},\boldsymbol{m})/\mathrm{det}\,\boldsymbol{F}]^{\prime}_{\boldsymbol{F}}\in L^{\infty}(I{\times}\Omega;\mathbb{R}^{d\times d})\) is in duality with \(\frac{\partial}{\partial t}\boldsymbol{F}\in L^{p}(I;L^{r}(\Omega;\mathbb{R}^{d\times d}))\) and \((\boldsymbol{v}{\cdot}\nabla)\boldsymbol{F}\in L^{s}(I;L^{r}(\Omega;\mathbb{R}^{d\times d}))\) with \(s\) from (3.58e). Moreover, \(\frac{\partial}{\partial t}(\varrho\boldsymbol{v})\in L^{1}(I;L^{2}(\Omega;\mathbb{R}^{d})^{*})+L^{p^{\prime}}(I;W^{2,p}(\Omega;\mathbb{R}^{d})^{*})\) is in duality with \(\boldsymbol{v}\in L^{\infty}(I;L^{2}(\Omega;\mathbb{R}^{d}))\cap L^{p}(I;W^{2,p}(\Omega;\mathbb{R}^{d}))\), as used in (2.38). Further, the calculus (2.38) relies on the fact that \(\frac{\partial}{\partial t}\varrho\) and \(\mathrm{div}(\varrho\boldsymbol{v})=\boldsymbol{v}{\cdot}\nabla\varrho+\varrho\,\mathrm{div}\,\boldsymbol{v}\) live in \(L^{s}(I;L^{rs/(r+s)}(\Omega))\) and thus are in duality with \(|\boldsymbol{v}|^{2}\in L^{s/2}(I;L^{\infty}(\Omega))\) with \(3\leq s<p(pd+4p{-}2d)/(4p{-}2d)\), cf. (3.58e). Eventually, since \(\nabla^{2}\boldsymbol{v}\in L^{p}(I{\times}\Omega;\mathbb{R}^{d\times d\times d})\), we have \(\mathrm{div}^{2}(\nu_{2}|\nabla^{2}\boldsymbol{v}|^{p-2}\nabla^{2}\boldsymbol{v})\in L^{p^{\prime}}(I;W^{2,p}(\Omega;\mathbb{R}^{d})^{*})\) in duality with \(\boldsymbol{v}\). Also \(\mathrm{div}(\nu_{1}|\boldsymbol{e}(\boldsymbol{v})|^{p-2}\boldsymbol{e}(\boldsymbol{v}))\in L^{p^{\prime}}(I;W^{1,p}(\Omega;\mathbb{R}^{d})^{*})\) is in duality with \(\boldsymbol{v}\) due to the growth condition (3.5e). Altogether, the calculations (2.34)-(2.38) are legitimate. Recalling in particular \(\mathrm{div}(\upkappa(\boldsymbol{F})\nabla\boldsymbol{m}/\mathrm{det}\,\boldsymbol{F})\in L^{2}(I{\times}\Omega;\mathbb{R}^{d})\), we can see that also the calculations (2.39)-(2.48) are legitimate. This ends the proof of Theorem 3.2. **Remark 3.5** (The classical solutions).: In fact, having \(\nabla\boldsymbol{m}\) estimated from the exchange energy, (3.25e) holds even in the sense of \(L^{2}(I{\times}\Omega;\mathbb{R}^{d})\), not only in the weak sense, similarly to (3.27a,b). Actually, since \(\mathrm{div}(\upkappa(\boldsymbol{F})\nabla\boldsymbol{m}/\mathrm{det}\,\boldsymbol{F})\in L^{2}(I{\times}\Omega;\mathbb{R}^{d})\), the inequality (3.4b) can be formulated as the classical inclusions (2.30d) and (3.25d) holding a.e., in the sense of \(L^{2}(I{\times}\Omega;\mathbb{R}^{d})\). **Remark 3.6** (Importance of the exchange energy).: For large magnets, in contrast to micromagnetism, the influence of the exchange energy is small and, for very large magnetic continua, eventually negligible, cf. [11]. Yet, this energy controls \(\nabla\boldsymbol{m}\), which ensures "compactness" and strong convergence of \(\boldsymbol{m}\); its complete omission would be analytically problematic, in particular because of the nonconvexity of \(\uppsi(\boldsymbol{F},\cdot,\theta)\) in the ferromagnetic phase. _Acknowledgments._ The author is deeply thankful to Giuseppe Tomassetti for many inspiring discussions and comments on the manuscript. Support from the Ministry of Education of the Czech Republic project CZ.02.1.01/0.0/0.0/15-003/0000493 and the CSF/DFG project GA22-00863K, and from the institutional support RVO:61388998 (CR) is acknowledged.
2306.01676
Multichromatic Floquet engineering of quantum dissipation
The monochromatic driving of a quantum system is a successful technique in quantum simulations, well captured by an effective Hamiltonian approach, and with applications in artificial gauge fields and topological engineering. In this letter, we investigate the modeling of multichromatic Floquet driving for the slow degrees of freedom. Within a well-defined range of parameters, we show that the time coarse-grained dynamics of such a driven closed quantum system is encapsulated in an effective Master equation for the time-averaged density matrix, that evolves under the action of an effective Hamiltonian and tunable Lindblad-type dissipation/quantum gain terms. As an application, we emulate the dissipation induced by phase noise and incoherent emission/absorption processes in the bichromatic driving of a two-level system.
François Impens, David Guéry-Odelin
2023-06-02T16:51:28Z
http://arxiv.org/abs/2306.01676v1
# Multichromatic Floquet engineering of quantum dissipation ###### Abstract The monochromatic driving of a quantum system is a successful technique in quantum simulations, well captured by an effective Hamiltonian approach, and with applications in artificial gauge fields and topological engineering. In this letter, we investigate the modeling of multichromatic Floquet driving for the slow degrees of freedom. Within a well-defined range of parameters, we show that the time coarse-grained dynamics of such a driven closed quantum system is encapsulated in an effective Master equation for the time-averaged density matrix, that evolves under the action of an effective Hamiltonian and tunable Lindblad-type dissipation/quantum gain terms. As an application, we emulate the dissipation induced by phase noise and incoherent emission/absorption processes in the bichromatic driving of a two-level system. pacs: 03.65.-a, 03.65.-b There is currently an intense research effort devoted to the realization of quantum simulators able to reproduce complex quantum dynamics in simpler and controllable setups [1]. In many cases, the quantum systems to be emulated are coupled to an environment, and thus behave as open quantum systems. Such an interaction is usually considered detrimental. However, a controlled dissipation can be a unique asset for quantum state targeting [2] such as the ground state [3], pointer states [4; 5], or even excited states [6], and opens many perspectives for many-body quantum simulation [7]. The emulation of quantum dissipation is therefore an important step in the roadmap to accurate quantum simulators. Several mechanisms have been used to produce dissipation in a quantum setup. These include the driving of two interacting quantum subsystems - one of them acting as a bath on the other [8; 9], the use of atom losses for studying loss cooling [10; 11], the Zeno effect [12; 13; 14; 15; 16], the bi-stability of atom transport [17], the control of decoherence effects [18] and the investigation of many-body phase transitions with dissipative phenomena [19], to name a few. In this letter, we detail an alternative strategy relying on multichromatic Floquet driving to emulate quantum dissipation while keeping the system conservative. Periodic Floquet-driven quantum systems have become instrumental in emulating novel interactions, quantum states of matter or artificial gauge fields [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Multichromatic Floquet driving has also been applied recently to manipulate topological quantum states [32; 33]. In the following, we discuss how an effective quantum dissipation can emerge in a time coarse-grained (TCG) dynamics. For this purpose, we exploit a timescale separation formalism [20; 21] for a multichromatic driving, and infer an effective Master equation for the TCG density matrix with well-controlled approximations, valid over a long time interval. Consider a quantum system driven by a time-independent Hamiltonian \(\hat{H}_{0}\) and a Floquet Hamiltonian \(\hat{H}_{F}(t)=\sum_{m}\hat{V}_{m}e^{i\omega_{m}t}+h.c.\).
The corresponding evolution operator can be recast as the product of three unitary transforms involving separately either slow or fast-evolving operators [20; 21]: \[\hat{U}(t,t_{0})=e^{-i\hat{K}(t)}\hat{U}^{\rm eff}(t)e^{i\hat{K}(t_{0})}, \tag{1}\] where \(\hat{U}^{\rm eff}(t)=\mathcal{T}\left[e^{-i\int_{t_{0}}^{t}dt^{\prime}\hat{H}^{\rm eff}(t^{\prime})}\right]\) accounts for the slow dynamics under the effective Hamiltonian \(\hat{H}^{\rm eff}(t)\) (\(\mathcal{T}\) is the time ordering operator), while the terms involving the kick operator, \(\hat{K}(t)\), contain the fast sinusoidal time-dependence. The Floquet frequencies \(\omega_{m}\) are assumed to be much larger than the eigenfrequencies of \(\hat{H}_{0}\) and \(\hat{V}_{m}\): \(\varepsilon=\Omega/\omega\ll 1\) with \(\Omega=\max_{m}[\|\hat{H}_{0}\|,\|\hat{V}_{m}\|]\) and \(\omega=\min_{m}\{\omega_{m}\}\). This frequency hierarchy is used to expand \(\hat{H}^{\rm eff}(t)=\sum_{n=0}^{+\infty}\hat{H}^{\rm eff}_{n}(t)\) and \(\hat{K}(t)=\sum_{n=1}^{+\infty}\hat{K}_{n}(t)\) where \(||\hat{K}_{n}(t)||=O\left(\frac{\Omega^{n}}{\omega^{n}}\right)\) and \(||\hat{H}^{\rm eff}_{n}(t)||=O\left(\frac{\Omega^{n+1}}{\omega^{n}}\right).\) The instantaneous quantum state \(|\psi(t)\rangle\) undergoes a unitary evolution with a fast time-dependence. However, the evolution of the TCG density matrix \(\overline{\rho}(t)=\overline{|\psi(t)\rangle\langle\psi(t)|}\) is in general non-unitary. The considered TCG procedure works as a low-pass filter in frequency space involving a cutoff frequency \(\omega_{c}\): \(\overline{\hat{O}}(t)=\frac{1}{\sqrt{2\pi}}\int_{-\omega_{c}}^{\omega_{c}}\hat{O}(\omega)e^{-i\omega t}d\omega\), where \(\hat{O}(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\hat{O}(t)e^{i\omega t}dt\) is the Fourier transform of the considered operator \(\hat{O}(t)\). The cutoff frequency \(\omega_{c}\) is chosen to leave invariant the slow Hamiltonian dynamics, i.e. \(\overline{e^{\pm i\hat{H}_{0}t}}=e^{\pm i\hat{H}_{0}t}\), while filtering out the fast Floquet frequencies \(\forall m\ \overline{e^{\pm i\omega_{m}t}}=0\ (\overline{\hat{H}_{F}(t)}=0)\). Finally, we assume that for the slow operators considered below (such that \(\overline{\hat{O}_{\rm slow}(t)}=\hat{O}_{\rm slow}(t)\)), one always has \(\overline{\hat{O}_{\rm slow}(t)\hat{O}(t)}=\hat{O}_{\rm slow}(t)\overline{\hat{O}(t)}\) and \(\overline{\hat{O}(t)\hat{O}_{\rm slow}(t)}=\overline{\hat{O}(t)}\hat{O}_{\rm slow}(t)\). This property is fulfilled if the \(\hat{O}_{\rm slow}\) operator oscillates at frequencies \(\omega_{\rm slow}\ll\omega_{c}\) and if the \(\hat{O}(t)\) operator does not have frequencies near the cutoff \(\omega_{c}\). These assumptions are realistic for a sufficiently large separation between the slow and fast timescales. We now proceed to derive an effective Master equation for the TCG density matrix. From Eq. (1), we obtain \(\overline{\rho}(t)=\overline{e^{-i\hat{K}(t)}\rho_{e}(t)e^{i\hat{K}(t)}}\) with \(\rho_{e}(t)=\hat{U}^{\rm eff}(t)e^{i\hat{K}(t_{0})}|\psi(t_{0})\rangle\langle\psi(t_{0})|e^{-i\hat{K}(t_{0})}\hat{U}^{\rm eff}(t)^{\dagger}\) evolving under the effective Hamiltonian \(\hat{H}^{\rm eff}(t)\). By construction of the effective Hamiltonian [20; 21], the density matrix \(\rho_{e}(t)\) follows slow dynamics and fulfills \(\overline{\rho_{e}(t)}=\rho_{e}(t)\). We subsequently expand the fast unitary transforms \(e^{\pm i\hat{K}(t)}\) in terms of the small parameter \(\varepsilon=\Omega/\omega\).
The TCG density matrix then reads \[\overline{\rho}(t)=\rho_{e}(t)+\sum_{m=1}^{N}\overline{\delta\rho^{(m)}}(t)+O\left(\varepsilon^{N+1}\right) \tag{2}\] Each term \(\overline{\delta\rho^{(m)}}(t)\) represents a correction of order \(O(\varepsilon^{m})\) and depends linearly on the density matrix \(\rho_{e}(t)\). In order to derive these corrections, one needs explicit expressions for the fast kick operators \(\hat{K}_{m}(t)\). These are used to cancel the fast time-dependence in the effective Hamiltonian, and can be obtained at each order through a systematic procedure [20; 38]. For instance, \(\hat{K}_{1}(t)\) fulfills \(\dot{\hat{K}}_{1}(t)=\hat{H}_{F}(t)\) and reads \(\hat{K}_{1}(t)=\sum_{m}\frac{1}{i\omega_{m}}(\hat{V}_{m}e^{i\omega_{m}t}-h.c.)\)[38]. The lowest-order correction is of second order, since \(\overline{\delta\rho^{(1)}}(t)=-i[\overline{\hat{K}_{1}(t)},\rho_{e}(t)]=0\), and is given by \(\overline{\delta\rho^{(2)}}(t)=-\frac{1}{2}\{\overline{\hat{K}_{1}(t)^{2}},\rho_{e}(t)\}+\overline{\hat{K}_{1}(t)\rho_{e}(t)\hat{K}_{1}(t)}\). An effective equation for the time-averaged density matrix is obtained by taking the time derivative of Eq. (2). Special care is, however, needed in order to gather consistently corrections to the same order. For instance, the contribution \(\overline{\delta\rho^{(2)}}(t)\) involves a product of fast-evolving (\(\hat{K}_{1}(t)\)) and slow-evolving (the density matrix \(\rho_{e}(t)\)) functions. When applied to the latter, the time derivative yields terms which are smaller by one order in the small parameter \(\varepsilon\). This leads us to distinguish the slow and fast time dependence by setting \(\tau\) and \(t\) for the corresponding time variables, with \(\partial_{\tau}=O(\Omega)\) and \(\partial_{t}=O(\omega)\), similarly to the two-timing technique [35; 36]. We denote by \(\overline{\delta\rho^{(m)}}(t,\tau)\) the corresponding corrections to the density matrix, so that \(\overline{\delta\rho^{(2)}}(t,\tau)=-\frac{1}{2}\{\overline{\hat{K}_{1}(t)^{2}},\rho_{e}(\tau)\}+\overline{\hat{K}_{1}(t)\rho_{e}(\tau)\hat{K}_{1}(t)}\). Furthermore, we assume that the Floquet frequencies \(\omega_{m}\) are grouped in a narrow bandwidth, i.e. \(\forall(m,n)\,|\omega_{m}-\omega_{n}|<\omega_{c}\ll\omega.\) The third-order correction \(\overline{\delta\rho^{(3)}}(t)\) involves only contributions from the two lowest-order fast operators \(\{\hat{K}_{1}(t),\hat{K}_{2}(t)\}\) as the time-averaging eliminates the isolated contribution of the fast operator \(\hat{K}_{3}(t)\). Cubic terms \(\hat{K}_{1}(t)^{3}\rho_{e}(t),\hat{K}_{1}(t)^{2}\rho_{e}(t)\hat{K}_{1}(t),...\) do not contain low-frequency harmonics and thus disappear upon time-averaging. One obtains \(\overline{\delta\rho^{(3)}}(t,\tau)=\overline{\hat{K}_{1}(t)\rho_{e}(\tau)\hat{K}_{2}(t)}+\overline{\hat{K}_{2}(t)\rho_{e}(\tau)\hat{K}_{1}(t)}-\frac{1}{2}\{\overline{\{\hat{K}_{1}(t),\hat{K}_{2}(t)\}},\rho_{e}(\tau)\}\). The complete effective Master equation can be written to second order as \(\frac{\partial}{\partial t}\overline{\rho}=-i[\hat{H}^{\rm eff},\rho_{e}]+\partial_{t}\overline{\delta\rho^{(2)}}(t,\tau)+\partial_{\tau}\overline{\delta\rho^{(2)}}(t,\tau)+\partial_{t}\overline{\delta\rho^{(3)}}(t,\tau)+O(\Omega\varepsilon^{3})\). At the second-order expansion, \(\rho_{e}(t)=\overline{\rho}(t)-\overline{\delta\rho^{(2)}}(t)+O(\varepsilon^{3})\) in the unitary term of the r.h.s., but \(\rho_{e}(t)\simeq\overline{\rho}(t)\) is sufficient in the Lindblad terms.
We eventually obtain the effective Master equation for the TCG density matrix which constitutes the central result of this article [38]: \[\frac{\partial\overline{\rho}}{\partial t}=-i[\hat{H}_{\rm eff},\overline{\rho}]+\mathcal{L}_{2}^{FF}[\overline{\rho}]+\mathcal{L}_{2}^{FSF}[\overline{\rho}]+O(\Omega\varepsilon^{3}) \tag{3}\] with \[\mathcal{L}_{2}^{FF}[\overline{\rho}] = \sum_{m<n}\frac{4\,\overline{\sin(\Delta\omega_{nm}t)}}{\omega_{mn-}}\mathcal{D}[\hat{V}_{m}^{\dagger},\hat{V}_{n}][\overline{\rho}], \tag{4}\] \[\mathcal{L}_{2}^{FSF}[\overline{\rho}] = i\sum_{m,n}\bigg{[}\frac{1}{\omega_{m}^{2}}\mathcal{D}[\hat{V}_{m},[\hat{V}_{n}^{\dagger},\hat{H}_{0}]][\overline{\rho}]\] (5) \[+ \frac{1}{\omega_{n}^{2}}\mathcal{D}[\hat{V}_{n}^{\dagger},[\hat{V}_{m},\hat{H}_{0}]][\overline{\rho}]\bigg{]}\,e^{i(\omega_{m}-\omega_{n})t},\] with \(\Delta\omega_{nm}=\omega_{n}-\omega_{m}\), \(1/\omega_{mn-}=\frac{1}{2}(1/\omega_{m}-1/\omega_{n})\) and \(\mathcal{D}[\hat{V},\hat{V}^{\prime}][\overline{\rho}]=\frac{1}{2}\{\{\hat{V},\hat{V}^{\prime}\},\overline{\rho}\}-\hat{V}\overline{\rho}\hat{V}^{\prime}-\hat{V}^{\prime}\overline{\rho}\hat{V}\). This effective Master equation contains two non-unitary contributions encapsulated in the Lindblad-like terms \(\mathcal{L}_{2}^{FF}[\overline{\rho}]\) and \(\mathcal{L}_{2}^{FSF}[\overline{\rho}]\) provided that the Floquet Hamiltonian contains at least two different frequencies \(\{\omega_{m},\omega_{n}\}\) close enough to ensure \(\overline{e^{\pm i(\omega_{m}-\omega_{n})t}}=e^{\pm i(\omega_{m}-\omega_{n})t}\). Under this assumption, the beat notes between these Floquet modes generate tunable non-Hermitian contributions to the time-averaged dynamics when the inequalities \(|\omega_{m}-\omega_{n}|<\omega_{c}\ll\omega\) are satisfied. The non-unitary operator \(\mathcal{L}_{2}^{FF}[\overline{\rho}]\) is bilinear in the Floquet operators and scales as \(1/\omega_{mn-}\simeq|\Delta\omega_{mn}|/\omega^{2}\). For usual situations where \(|\Delta\omega_{mn}|\leq\Omega\), which corresponds to dissipation terms oscillating at a comparable pace (or slower) as the effective Hamiltonian dynamics, the non-unitary operators of Eq. (3) are of second order. The extra contribution (5) arises when \([\hat{V}_{m},\hat{H}_{0}]\neq 0\), and takes into account the interaction between slow and fast quantum dynamics in the resulting time-averaged evolution. The effective Master equation derived in the present framework is valid over an arbitrarily long time interval. This is an essential benefit of our approach based on the exact expression (1) followed by an expansion in terms of the Floquet frequencies. We obtain (see [20; 21] and the SM [38]) \(\hat{H}_{0}^{\rm eff}=\hat{H}_{0}\), \(\hat{H}_{1}^{\rm eff}=\frac{1}{2}\sum_{m,n}\left(\frac{1}{\omega_{m}}+\frac{1}{\omega_{n}}\right)[\hat{V}_{m},\hat{V}_{n}^{\dagger}]e^{i(\omega_{m}-\omega_{n})t}\) and \(\hat{H}_{2}^{\rm eff}=\frac{1}{2}\sum_{m,n}\left(\frac{1}{\omega_{m}^{2}}[[\hat{V}_{m},\hat{H}_{0}],\hat{V}_{n}^{\dagger}]+\frac{1}{\omega_{n}^{2}}[[\hat{V}_{n}^{\dagger},\hat{H}_{0}],\hat{V}_{m}]\right)e^{i(\omega_{m}-\omega_{n})t}\). At the considered second order and for Floquet frequencies taken in a narrow bandwidth, kick operators must be grouped pairwise in order to generate low-frequency harmonics that survive the time-averaging. This is why the bichromatic case considered below contains the phenomenology of the non-unitary effects that arise in any multichromatic Floquet driving.
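To make the bookkeeping in Eqs. (3)-(5) concrete, here is a minimal numerical sketch (our own illustration in plain numpy, not code from the paper or its supplemental material; the function names and the sample parameters are ours) that assembles the right-hand side of the effective Master equation for a given static Hamiltonian \(\hat{H}_{0}\) and a list of Floquet operators \(\hat{V}_{m}\) with frequencies \(\omega_{m}\):

```python
import numpy as np

def comm(A, B):
    """Commutator [A, B]."""
    return A @ B - B @ A

def dissipator(V, W, rho):
    """D[V, W][rho] = 1/2 {{V, W}, rho} - V rho W - W rho V, as defined below Eq. (5)."""
    anti = V @ W + W @ V
    return 0.5 * (anti @ rho + rho @ anti) - V @ rho @ W - W @ rho @ V

def master_rhs(rho, t, H0, Vs, omegas):
    """Right-hand side of the effective Master equation (3)-(5), kept to second order."""
    rhs = np.zeros_like(rho)
    # effective Hamiltonian H0 + H1_eff + H2_eff (expressions quoted in the text)
    Heff = H0.astype(complex).copy()
    for m, (Vm, wm) in enumerate(zip(Vs, omegas)):
        for n, (Vn, wn) in enumerate(zip(Vs, omegas)):
            Vnd = Vn.conj().T
            phase = np.exp(1j * (wm - wn) * t)
            Heff = Heff + 0.5 * (1/wm + 1/wn) * comm(Vm, Vnd) * phase
            Heff = Heff + 0.5 * (comm(comm(Vm, H0), Vnd) / wm**2
                                 + comm(comm(Vnd, H0), Vm) / wn**2) * phase
    rhs = rhs - 1j * comm(Heff, rho)
    # L2^FF, Eq. (4): beat notes between pairs of Floquet modes
    for m, (Vm, wm) in enumerate(zip(Vs, omegas)):
        for n, (Vn, wn) in enumerate(zip(Vs, omegas)):
            if m < n:
                inv_w_minus = 0.5 * (1/wm - 1/wn)          # 1/omega_{mn-}
                rhs = rhs + 4 * np.sin((wn - wm) * t) * inv_w_minus \
                          * dissipator(Vm.conj().T, Vn, rho)
    # L2^FSF, Eq. (5): interplay between the slow (H0) and fast (V_m) dynamics
    for m, (Vm, wm) in enumerate(zip(Vs, omegas)):
        for n, (Vn, wn) in enumerate(zip(Vs, omegas)):
            Vnd = Vn.conj().T
            phase = np.exp(1j * (wm - wn) * t)
            rhs = rhs + 1j * (dissipator(Vm, comm(Vnd, H0), rho) / wm**2
                              + dissipator(Vnd, comm(Vm, H0), rho) / wn**2) * phase
    return rhs

# minimal usage on the first example discussed below (H0 = w0*sz, V_1 = V_2 = Om*sx);
# the numbers are illustrative only and are not taken from the paper's figures
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w0, Om, w1, w2 = 0.5, 0.2, 10.0, 10.3
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
drho = master_rhs(rho0, 1.0, w0 * sz, [Om * sx, Om * sx], [w1, w2])
```

This is only meant to exhibit how the three pieces of Eq. (3) are built from \(\hat{H}_{0}\) and the \(\hat{V}_{m}\); integrating \(\partial_{t}\overline{\rho}=\)`master_rhs`\((\overline{\rho},t,\dots)\) with any standard ODE solver then gives the coarse-grained dynamics discussed in the examples below.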
As a first example, we consider a two-level system with \(\hat{H}_{0}=\omega_{0}\sigma_{z}\), and the Floquet operators \(\hat{V}_{m}=\Omega_{m}\sigma_{x}\) for \(m=1,2\) (\(\sigma_{x,y,z}\) are the Pauli matrices, and we set here \(\Omega_{1,2}=\Omega>0\)). This choice yields \(\mathcal{L}_{2}^{FF}[\overline{\rho}]=8\Omega^{2}\left(\sin(\Delta\omega_{21}t)/\omega_{12-}\right)\left(\overline{\rho}-\sigma_{x}\overline{\rho}\sigma_{x}\right)\) and \(\mathcal{L}^{FSF}[\overline{\rho}]=-8\omega_{0}\Omega^{2}\left(\frac{1}{\omega_{1}^{2}}+\frac{1}{\omega_{2}^{2}}\right)\cos^{2}(\frac{1}{2}\Delta\omega_{21}t)\left(\sigma_{x}\overline{\rho}\sigma_{y}+\sigma_{y}\overline{\rho}\sigma_{x}\right)\). The
To emulate such a dissipative dynamics, we consider a bichromatic driving with \(\hat{H}_{0}=\omega_{0}\sigma_{z}\) and \(\hat{V}_{m}=\Omega_{m}\sigma_{z}\) for \(m=1,2\). In this particular case, the contribution of the \(\mathcal{L}^{FSF}[\overline{\rho}]\) term vanishes and the resulting Master equation coincides with the desired form with \(\gamma(t)=-16{\rm Re}[\Omega_{1}^{2}\Omega_{2}]\sin(\omega_{2}-\omega_{1})t/ \omega_{12}\) (the contribution of the \(\mathcal{D}[\hat{V}_{m}^{\dagger},\hat{V}_{n}]|\rho\) term is proportional to a Lindblaian operator \(\mathcal{L}_{\rm phase}[\rho]\)). Here, the coefficient \(\gamma\) has a time-dependent value and alternates between regimes of gain (\(\gamma<0\)) and damping (\(\gamma>0\)). Setting very close and non-commensurate frequencies \(\omega_{1}\) and \(\omega_{2}\) enables to accumulate decoherence (or gain) over a significant time interval. As previously, we validate numerically our findings by resolving the full unitary quantum dynamics driven by the Hamiltonian \(\hat{H}(t)=\hat{H}_{0}+\hat{H}_{F}(t)\). In Fig. 2, we illustrate our results. The seemingly erratic oscillations of the instantaneous density matrix coherence depicted in Fig. 2[38] generate a TCG dynamics that follows very accurately the effective Master equation, i.e. the one of a damped Rabi oscillation. This averaging effect on the Floquet-induced peaks is reminiscent of the averaging on individual stochastic trajectories involving quantum jumps in the Monte Carlo wave function formalism [37]. Floquet-induced peaks accumulate periodically at a pace determined by the beat frequency \(\Delta\omega_{21}\) between the two involved Floquet modes. This periodic increase/decrease of sharp peaks provokes an oscillation of the effective damping rate \(\gamma(t)\) at the same frequency \(\Delta\omega_{21}\). An initial loss (gain) phase can be obtained by setting a specific phase difference \(\phi\) between the two Floquet modes. By convention we use \(\Omega_{1}\in{\bf R}^{+}\) and \(\Omega_{2}=|\Omega_{2}|e^{i\phi}\), with the Floquet frequencies ordered with their labels \(\omega_{n}>\omega_{m}\) if \(n>m\). The choice \(\Omega_{2}=-\Omega_{1}\) gives the decoherence pictured in Fig. 2. Our framework provides a very accurate approximation of the full time-averaged dynamics in this second example. This is not obvious, as Eq. (3) is a mere second-order approximation, and discards several contributions associated to the higher-order kick operators \(\hat{K}_{m}(t)\). Actually, the operators \(\hat{K}_{m}(t)\) vanish here for \(m\geq 2\) as a result of the commutation between the Floquet and time-independent Hamiltonians. Thus, the expansion of the unitary operators \(e^{\pm i\hat{K}(t)}\) boils down to a simple power expansion in the operator \(\hat{K}_{1}(t)\). Furthermore, odd powers of \(\hat{K}_{1}(t)\) do not generate any low-frequency harmonics, and the effective Master equation only receive con tributions from even-order terms. Incidentally, the 4th-order contribution also cancels [38]. In the special case of commuting operators, Eq. (3) is thus accurate up to the 5th-order, which explains the remarkable agreement between the approximate effective Master equation (3) and the full quantum dynamics, which still holds for moderate values of the parameter \(\varepsilon\) (\(\varepsilon\)=0.35 in Fig. 2). In our third example, we propose to emulate a quantum dynamics reminiscent of incoherent emission/absorption processes in the TCG evolution of a two-level system. 
These processes are described respectively by the Liouvillians \(\mathcal{L}_{\rm em}[\rho]=\sigma_{-}\rho\sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho\}\) and \(\mathcal{L}_{\rm ab}[\rho]=\sigma_{+}\rho\sigma_{-}-\frac{1}{2}\{\sigma_{-}\sigma_{+},\rho\}\), where \(\sigma_{+}=|e\rangle\langle g|\) and \(\sigma_{-}=\sigma_{+}^{\dagger}\). By symmetry of the dissipative term \(\mathcal{D}[\hat{V},\hat{V}^{\prime}][\rho]\) in the effective Master equation, if the TCG dynamics contains the Liouvillian \(\mathcal{L}_{\rm em}[\rho]\), it also contains the Liouvillian \(\mathcal{L}_{\rm ab}[\rho]\) associated with the reverse process. This regime illustrates, for example, the dynamics of a two-level atom illuminated by an intense light field, so that stimulated emission predominates over spontaneous emission [40]. In this case, the emission/absorption rates are approximately equal \(\gamma_{\rm em}\simeq\gamma_{\rm ab}\simeq\gamma\). To produce an analog of this dissipation, we take the time-independent \(\hat{H}_{0}=\omega_{0}\sigma_{x}\) and Floquet Hamiltonians with \(\hat{V}_{m}=\Omega_{m}\sigma_{+}\) for \(m=1,2\) (\(\Omega_{1,2}=\Omega>0\)). With this choice, the bilinear term \(\mathcal{L}_{2}^{FF}[\overline{\rho}]\) accounts for these two incoherent processes as \(\mathcal{L}_{2}^{FF}[\overline{\rho}]=\gamma(t)(\mathcal{L}_{\rm em}[\overline{\rho}]+\mathcal{L}_{\rm ab}[\overline{\rho}])\) with the time-dependent effective emission/absorption rate \(\gamma(t)=-4\Omega^{2}\sin(\Delta\omega_{21}t)/\omega_{12-}\). The remaining contribution reads \(\mathcal{L}_{2}^{FSF}[\overline{\rho}]=-\omega_{0}\Omega^{2}\left(\frac{1}{\omega_{1}^{2}}+\frac{1}{\omega_{2}^{2}}\right)\cos^{2}(\frac{1}{2}\Delta\omega_{21}t)(\sigma_{y}\rho\sigma_{z}+\sigma_{z}\rho\sigma_{y})+\Omega O(\varepsilon^{3})\). One finds the effective Hamiltonian corrections \(\hat{H}_{1}^{\rm eff}=2\Omega^{2}\left(\frac{1}{\omega_{1}}+\frac{1}{\omega_{2}}\right)\cos^{2}(\frac{1}{2}\Delta\omega_{21}t)\sigma_{z}\) and \(\hat{H}_{2}^{\rm eff}=-2\omega_{0}\Omega^{2}\left(\frac{1}{\omega_{1}^{2}}+\frac{1}{\omega_{2}^{2}}\right)\cos^{2}(\frac{1}{2}\Delta\omega_{21}t)\sigma_{x}+\Omega O(\varepsilon^{3})\). In Fig. 3, we plot as a function of time the exact instantaneous density matrix coherence and its corresponding TCG evolution. We observe an excellent agreement with the prediction of the effective Master equation (the two curves are almost perfectly superposed). More generally, our approach enables one to emulate a Lindblad Master equation of the form \(\dot{\rho}=-i[\hat{H},\rho]+\sum_{m=1}^{N}\gamma_{m}\left[\hat{L}_{m}\rho\hat{L}_{m}^{\dagger}+\hat{L}_{m}^{\dagger}\rho\hat{L}_{m}-\frac{1}{2}\{\{\hat{L}_{m},\hat{L}_{m}^{\dagger}\},\rho\}\right]\), i.e. involving, for each quantum jump operator \(\hat{L}_{m}\), the reverse jump \(\hat{L}_{m}^{\dagger}\) at the same rate \(\gamma_{m}\)[41]. It is approximately generated by the Floquet Hamiltonian \(\hat{H}_{F}(t)=\sum_{m=1}^{N}\Omega_{m}\hat{L}_{m}\left(e^{i\omega_{m}t}+e^{i(\omega_{m}+\Delta\omega_{m})t+i\varphi_{m}}\right)+h.c.\) with well-separated pairs of close frequencies \(\{\omega_{m},\omega_{m}+\Delta\omega_{m}\}\), such that \(\Delta\omega_{m}\ll\omega_{c}\) and \(|\omega_{m}-\omega_{n}|>\omega_{c}\) for \(m\neq n\) in order to avoid crossed terms involving different pairs of jump operators. With these assumptions, the operator \(\mathcal{L}_{2}^{FF}[\overline{\rho}]\) (4) takes the desired form.
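As a small illustration of this recipe (our own sketch with hypothetical parameter values; the paper itself provides no code), the multichromatic drive targeting a list of jump operators can be assembled as follows:

```python
import numpy as np

def floquet_drive(t, jump_ops, freqs, delta_freqs, phases, rabis):
    """H_F(t) = sum_m Omega_m L_m ( e^{i w_m t} + e^{i[(w_m + dw_m) t + phi_m]} ) + h.c.,
    i.e. one pair of nearby Floquet tones per targeted jump operator L_m."""
    H = np.zeros_like(jump_ops[0], dtype=complex)
    for L, w, dw, phi, Om in zip(jump_ops, freqs, delta_freqs, phases, rabis):
        H = H + Om * (np.exp(1j * w * t) + np.exp(1j * ((w + dw) * t + phi))) * L
    return H + H.conj().T

# e.g. a single pair addressing sigma_+ (emission/absorption) on a two-level system
sigma_plus = np.array([[0, 1], [0, 0]], dtype=complex)   # |e><g| in the {|e>, |g>} basis
H_F = floquet_drive(0.3, [sigma_plus], [10.0], [0.05], [0.0], [0.2])
```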
Interestingly, the effective time-dependent rates \(\gamma_{m}(t)\simeq-4\left(|\Omega_{m}|^{2}\Delta\omega_{m}/\omega_{m}^{2}\right)\sin(\Delta\omega_{m}t+\varphi_{m})\) can be shaped independently by a suitable choice of the Rabi pulsations (\(\Omega_{m}\)), frequency (\(\Delta\omega_{m}\)) and phase (\(\varphi_{m}\)) differences. Regarding the \(\mathcal{L}_{2}^{FSF}[\overline{\rho}]\) term, its contribution can be attenuated by an appropriate choice of \(\hat{H}_{0}\), e.g. the third example detailed above. In summary, we have used the formalism of kick operators and effective Hamiltonians to derive an effective Master equation for the TCG dynamics in a multichromatic Floquet system. Our treatment, based on a perturbative expansion in terms of powers of kick operators, holds in the long-time limit. The beat modes between pairs of Floquet frequencies generate effective quantum dissipation that results from a blurring of the fast instantaneous motion. Different Floquet Hamiltonians and time-averaging procedures can be considered to emulate a wide range of dynamics involving gains or losses. Our approach paves the way for quantum simulations based on Floquet-engineered non-unitary dynamics. Figure 2: Emulation of Phase Noise: Results of the effective Master equation vs full quantum evolution: Instantaneous density matrix coherence \({\rm Re}[\rho_{eg}(t)]\) as a function of time (solid gray line), time coarse-grained coherence \({\rm Re}[\overline{\rho}_{eg}(t)]\) (solid black line) and the density matrix coherence \({\rm Re}[\overline{\rho}_{eg}(t)]\) (dashed black line) obtained from the effective Master equation (Eq. 3). Parameters used: constant and Floquet Hamiltonians corresponding to \(\hat{H}_{0}=\omega_{0}\sigma_{z}\) and \(\hat{H}_{F}(t)=\Omega_{1}\sigma_{z}e^{i\omega_{1}t}+\Omega_{2}\sigma_{z}e^{i\omega_{2}t}+h.c.\) with \(\omega_{0}=0.5\times(2\pi)/T_{0}\) and Floquet terms in opposite phase \(\Omega_{1}=-\Omega_{2}=7/T_{0}\). We have used non-commensurate Floquet frequencies \(\omega_{1}=\sqrt{10}\times(2\pi)/T_{0}\), \(\omega_{2}=\omega_{1}+\Delta\omega_{21}\) with \(\Delta\omega_{21}=0.025\times(2\pi)/T_{0}\), yielding the small parameter \(\varepsilon=0.35\). The initial density matrix (\(\rho_{0}\)) and the cut-off frequency (\(\omega_{c}\)) are identical to Fig. 1. Figure 3: Incoherent absorption/emission in the time-averaged dynamics: Instantaneous density matrix coherence \({\rm Re}[\rho_{eg}(t)]\) as a function of time (solid gray line), time-averaged coherence \({\rm Re}[\overline{\rho}_{eg}(t)]\) (solid black line) vs density matrix coherence \({\rm Re}[\overline{\rho}_{eg}(t)]\) (dashed black line) obtained from the effective Master equation. Parameters: initial density matrix \(\rho_{0}=|e\rangle\langle e|\), constant and Floquet Hamiltonians \(\hat{H}_{0}=\omega_{0}\sigma_{x}\), \(\hat{H}_{F}(t)=\Omega_{1}\sigma_{+}e^{i\omega_{1}t}+\Omega_{2}\sigma_{+}e^{i\omega_{2}t}+h.c.\) with the frequency \(\omega_{0}=0.25\times(2\pi)/T_{0}\) and in-phase Floquet terms s.t. \(\Omega_{1,2}=2/T_{0}\). Small parameter \(\epsilon=0.1\). Floquet (\(\omega_{1,2}\)) and cutoff (\(\omega_{c}\)) frequencies are identical to those of Fig. 2. _Acknowledgments_. The authors thank Jean Dalibard for useful comments. F.I. was supported by the Brazilian agencies CNPq (310265/2020-7), CAPES and FAPERJ (210.296/2019). This work was supported by the CAPES-PRINT Program and by INCT-IQ (465469/2014-0).
2304.10102
Near-Extremal Limits of Warped Black Holes
A holographic description of three-dimensional warped black holes suffers from ambiguities due to a seemingly harmless choice of coordinate system. This gives rise to the notion of ensembles in warped black holes, and we focus on two of them: the canonical and quadratic ensemble. Our aim is to quantify the imprint of these ensembles in the near-extremal limit of a warped black hole. To this end, for each ensemble, we explore the thermodynamic response and evaluate greybody factors. We also set-up a holographic dictionary in their near-AdS$_2$ region, and decode aspects of the dual near-CFT$_1$. This gives us different perspectives of the black hole that we can contrast and compare. On the one hand, we find perfect agreement between the near-extremal limit of the canonical ensemble warped black holes, their near-AdS$_2$ effective analysis, and a warped conformal field theory description. On the other, we are led to rule out the quadratic ensemble due to inconsistencies at the quantum level with the near-AdS$_2$ effective description.
Ankit Aggarwal, Alejandra Castro, Stéphane Detournay, Beatrix Mühlmann
2023-04-20T06:07:38Z
http://arxiv.org/abs/2304.10102v2
# Near-Extremal Limits of Warped Black Holes ###### Abstract A holographic description of three-dimensional warped black holes suffers from ambiguities due to a seemingly harmless choice of coordinate system. This gives rise to the notion of ensembles in warped black holes, and we focus on two of them: the canonical and quadratic ensemble. Our aim is to quantify the imprint of these ensembles in the near-extremal limit of a warped black hole. To this end, for each ensemble, we explore the thermodynamic response and evaluate greybody factors. We also set-up a holographic dictionary in their near-AdS\({}_{2}\) region, and decode aspects of the dual near-CFT\({}_{1}\). This gives us different perspectives of the black hole that we can contrast and compare. On the one hand, we find perfect agreement between the near-extremal limit of the canonical ensemble warped black holes, their near-AdS\({}_{2}\) effective analysis, and a warped conformal field theory description. On the other, we are led to rule out the quadratic ensemble due to inconsistencies at the quantum level with the near-AdS\({}_{2}\) effective description. November 6, 2021 ###### Contents * 1 Introduction * 2 Black holes in topologically massive gravity * 2.1 Warped black hole: canonical ensemble * 2.2 Warped black hole: quadratic ensemble * 2.3 Ties between canonical and quadratic ensemble * 3 Near-extremal warped black holes: canonical ensemble * 3.1 Thermodynamics * 3.2 Decoupling limit * 3.3 Two-point function * 4 Near-extremal warped black holes: quadratic ensemble * 4.1 Thermodynamics * 4.2 Decoupling limit * 4.3 Two-point function * 5 A two-dimensional perspective of warped black holes * 5.1 Near-AdS\({}_{2}\): Linear response * 5.1.1 Solutions * 5.2 Boundary analysis * 6 Comparing perspectives and ensembles * 6.1 Comparing ensembles * 6.2 Comparing perspectives * 7 Conclusions Introduction Warped black holes are three-dimensional stationary spacetimes, which can carry mass and angular momentum. They usually appear as classical solutions of gravitational theories with a massive degree of freedom, such as topologically massive gravity [1, 2], theories with a massive vector field [3, 4], or higher-derivative theories [5, 6, 7]. In all of these cases, the term "warped" originates from approximate symmetries of the solution: in the absence of the black hole, the Killing vectors of a warped background form an \(sl(2)\times u(1)\) algebra. Intuitively, this algebra can be understood as a deformation, or warping, of the size of a circle fiber inside of three-dimensional Anti-de Sitter space (AdS\({}_{3}\)). In relation to its parent theory, the mass of the extra degree of freedom controls the size of the fiber. An appealing aspect of warped black holes is their delicate balance between simplicity and complexity. They are simple configurations because they are a quotient of a warped AdS\({}_{3}\) spacetime [8]: this places them on a similar footing to the BTZ black hole [9, 10], and several concepts that are useful in BTZ can be applied to warped black holes [11]. Their complexity is due to its warped nature: a warped spacetime is neither locally, nor asymptotically, AdS\({}_{3}\) which makes it an instance of non-AdS holography. Moreover, this deviation from AdS has similarities with the near horizon geometry of the extreme Kerr black hole [12]. This places several holographic aspects of warped solutions closer to the challenges faced by Kerr/CFT [13], where it remains difficult to construct (or even characterise!) 
precisely the field theory that would represent a holographic dual in Kerr/CFT. Our aim here is to differentiate among different proposals of a holographic dual to warped black holes. At the moment there are at least three different proposals to describe them holographically. Based on the results in [8], one expectation is that warped spacetimes (WAdS) are dual to a two-dimensional conformal field theory (CFT\({}_{2}\)). This gives rise to a WAdS/CFT\({}_{2}\) duality, and some evidence towards it includes [14, 15, 16]. Another approach is to view the dual to warped spaces as a CFT\({}_{2}\) for which one turns on a suitable irrelevant operator. The choice of deformation is such that the theory becomes non-relativistic, and in particular one would break the conformal group from \(sl(2)\times sl(2)\) down to \(sl(2)\times u(1)\). Two approaches that can accomplish this mechanism are either a dipole deformation [17, 18] or a J\(\bar{\text{T}}\) deformation [19, 20]. Here we will take a third approach, where the proposed dual theory to WAdS is expected to be a warped conformal field theory (WCFT). In its essence, a WCFT is a non-relativistic field theory whose symmetries are \(sl(2)\times u(1)\). Examples of WCFTs, and its field theoretic properties, have been reported in [21, 22, 23, 24, 25, 26, 27, 28], and evidence towards a WAdS/WCFT correspondence can be found in [29, 30, 31, 32, 33, 34, 35]. Although this proposal seems compelling, and might be compatible with the second proposal involving deformations, it also suffers from ambiguities. As observed in [11], a WCFT seems to admit two different facades: a description in terms of a _canonical_ ensemble or a _quadratic_ ensemble.1 The main difference between these ensembles is a choice of coordinates. This coordinate transformation is state-dependent, and was introduced in [11] to make a WCFT mimic some thermodynamic properties of a CFT\({}_{2}\). In this context, the canonical ensemble is a natural choice to describe the non-relativistic system using a state-independent algebra; the quadratic ensemble has a state-dependent algebra that tries to imitate a CFT\({}_{2}\). We will review the definitions and properties of each of these ensemble in the next section. Footnote 1: The word “ensemble” is used here to match the nomenclature in [11]. It denotes a frame, or set of variables, to describe the theory; it has nothing to do with ensembles in statistical physics. The notion of ensembles also percolates into the definition of a warped black hole, giving rise to a canonical ensemble solution and its counterpart quadratic ensemble black hole. In this work we will be able to distinguish the fitness of each ensemble at setting up a holographic dictionary between WAdS and WCFT, and we will also comment on the WAdS/CFT\({}_{2}\) proposal. Our approach exploits the near-extremal limit of warped black holes, which will be used as a lamppost to establish the basic features of a holographic dual. Extremality corresponds to the zero temperature black hole, where the inner and outer horizon coincide. Near-extremality splits apart these horizons slightly: this increases the temperature by a small degree, and it induces an increase of mass and entropy (and although not essential, angular momentum is kept fixed in our analysis). Taking a near-extremal limit is a useful strategy. As it has been advocated in [36, 37], and shown in countless examples,2 the near-extremal dynamics of a black hole is well approximated by Jackiw-Teitelboim (JT) gravity [39, 40]. 
This provides a universal sector in the low-temperature regime of the black hole, which can capture both classical and quantum aspects of the black hole as one ignites the solutions from extremality to near-extremality. Footnote 2: For a recent review, see [38]. When we apply these new developments to warped black holes, we will see that each ensemble (canonical and quadratic) follows parallel and consistent descriptions at the classical level. In the near-extremal limit, we will analyse the thermodynamic properties of their Wald entropy, the properties of the near-horizon geometry and correlation functions. We will also construct a low-energy (IR) effective theory that describes the near-extremal dynamics: this theory contains a JT sector, in addition to a massive degree of freedom. All these quantities can be mapped and contrasted using the state-dependent coordinate transformation without any issues, and we find perfect agreement among the quantities considered here. The remarkable results come from the fact that the IR theory and the dual WCFT3_independently_ make predictions about quantum corrections to the black hole entropy in the near-extremal regime. Comparing the quantum corrections to the entropy predicted by these derivations gives a non-trivial test to WAdS/WCFT: only the canonical ensemble is compatible with the prediction of the effective IR description. We deem this as a non-trivial and compelling reason to discard the quadratic ensemble as a description of quantum properties of warped black holes. The analysis of the near-extremal dynamics of warped black holes will be done when they are solutions to topologically massive gravity. Regardless of the theory used, we expect that qualitatively the observables involved in our analysis will be robust and follow the trend described here; this is due to the robustness of the Wald entropy and the fact that the quantities in play rely mainly on the geometrical properties of the background and not the theory. In particular, from the perspective of the IR effective theory, the appearance of a JT sector will be universal. However, we expect differences to arise which involve the additional massive degree of freedom in the IR theory. In our analysis, it will appear as a massive scalar field, which has a negative mass squared. There is a range for which the field is stable, and would be dual to a relevant operator. However, it can also create an instability in the theory--the same one found in [41]. We suspect this instability is specific to topologically massive gravity. We will comment more on these differences in our final section. This paper is structured as follows. We start in Sec. 2 with a review of warped black holes as solutions to three-dimensional topologically massive gravity. In this context, we introduce the canonical and quadratic ensembles, and for each ensemble we overview their thermodynamic properties at finite temperature, the associated asymptotic symmetry group, and how these quantities are compatible with a dual WCFT description. The next two sections, Sec. 3 and Sec. 4, we cover several aspects of the near-extremal limit of warped black holes. The content is presented in a way that it treats in parallel the properties of canonical and quadratic ensemble of warped black holes. For each ensemble we report on the extremal limit, the low temperature response of thermodynamic variables, the near-horizon geometry at near-extremality, and the behaviour of greybody factors in the near-extremal regime. In Sec. 
5 we take a different approach to the near-extremal limit: via dimensional reduction, we construct an effective description of the near-AdS\({}_{2}\) region. This effective IR theory should consistently describe the response of the black hole due to turning on a small temperature at fixed angular momentum. As a simple check, we verify that the solutions in Sec. 3.2 and Sec. 4.2 are correctly captured by the effective IR theory. In Sec. 6 we discuss and contrast warped black holes from various perspectives. We first contrast the results in Sec. 3 and Sec. 4, then contrast those with the near-extremal limit of a WCFT, and finally with the outcomes of the near-AdS\({}_{2}\) theory. We conclude with a summary of our main findings and outlook in Sec. 7. ## 2 Black holes in topologically massive gravity In this section we review the basic features of the two families of black holes we will be considering in this work. These fall under the broad umbrella of warped black holes (WBH), with one family denoted as black holes in the _canonical ensemble_ (CE) and the other as black holes in the _quadratic ensemble_ (QE). They share several similarities and ties, which we will highlight below, while also stressing their differences. One particularly interesting gravitational theory in three dimensions in which these WBH can be embedded is topologically massive gravity (TMG) [42, 43, 44]. In terms of its action, TMG contains two terms: the Einstein-Hilbert action and a gravitational Chern-Simons term. The explicit expression is \[I_{\text{\tiny 3D}}=I_{\text{\tiny EH}}+I_{\text{\tiny CS}}\, \tag{2.1}\] where the two contributions are \[\begin{split} I_{\text{\tiny EH}}&=\frac{1}{16\pi G _{3}}\int\text{d}^{3}x\sqrt{-g}\left(\mathscr{R}^{(3)}-2\Lambda\right)\,\\ I_{\text{\tiny CS}}&=\frac{1}{32\pi G_{3}\mu} \int\text{d}^{3}x\sqrt{-g}\varepsilon^{MNL}\left(\Gamma^{P}_{MS}\partial_{N} \Gamma^{S}_{LP}+\frac{2}{3}\Gamma^{P}_{MS}\Gamma^{S}_{NQ}\Gamma^{Q}_{LP} \right)\,\end{split} \tag{2.2}\] Here \(\mathscr{R}^{(3)}\) denotes the three-dimensional Ricci scalar. We have added a cosmological constant to the Einstein-Hilbert term, which we will take to always be negative: \(\Lambda=-1/\ell^{2}\). The gravitational Chern-Simons term is controlled by a real coupling \(\mu\) that has dimensions of mass. The equations of motion of TMG are \[\mathscr{R}^{(3)}_{MN}-\frac{1}{2}g_{MN}\mathscr{R}^{(3)}-\frac{1}{\ell^{2}}g _{MN}=-\frac{1}{\mu}C_{MN}\, \tag{2.3}\] where \(C_{MN}\) is the Cotton tensor,4 Footnote 4: We are using conventions where \(\sqrt{-g}\,\epsilon^{012}=-1\). Indices with capital Latin letters label the three-dimensional spacetime, i.e., \(M,N,\ldots=\{0,1,2\}\). \[C_{MN}=\epsilon_{M}^{\ \ \ QP}\nabla_{Q}\left(\mathscr{R}^{(3)}_{PN}-\frac{1}{4 }g_{PN}\mathscr{R}^{(3)}\right). \tag{2.4}\] To categorise the solutions to TMG, it is common to introduce the dimensionless coupling \[\nu\equiv\frac{\mu\ell}{3}. \tag{2.5}\] Without loss of generality, we will always take \(\nu\) to be a positive number. The sign of \(\nu\) can always be flipped by a choice of orientation, since the Chern-Simons action is parity odd. There are two branches of solutions in TMG that will be recurrent in our analysis: **Warped backgrounds.** These solutions have a non-vanishing Cotton tensor, \(C_{MN}\neq 0\). The term warped is used to highlight the symmetries of the vacuum solutions of this theory: warped AdS\({}_{3}\) (WAdS\({}_{3}\)). 
These are the simplest non-Einstein manifolds one can obtain in TMG, and we will review their properties below. In this context, we will focus on the so-called warped black hole solutions [1, 2, 8], which are quotients of specific instances of WAdS\({}_{3}\). **Locally AdS\({}_{3}\) backgrounds.** These have a vanishing Cotton tensor, \(C_{MN}=0\), and hence the backgrounds are independent of \(\mu\). These solutions are also part of the classical phase space of pure AdS\({}_{3}\) gravity, i.e., when only the Einstein-Hilbert term is in play. As is well known, these types of solutions are all locally AdS\({}_{3}\) spacetimes. Among this class, the solution that will be used prominently here is the BTZ black hole [9, 10], which serves as a point of contrast with the warped black holes. It is worth reviewing in more detail the general properties of WAdS\({}_{3}\). We will be following the discussion in [8]. Similar to AdS\({}_{3}\), the warped solution is a real line fibration over AdS\({}_{2}\), with the crucial difference being that the size of the fibration of WAdS\({}_{3}\) depends on \(\mu\ell\). This has the effect of breaking the \(SO(2,2)\) symmetries of AdS\({}_{3}\) down to \(SL(2,\mathbb{R})\times U(1)\). In this context there are three categories of vacua: spacelike, timelike, and null WAdS\({}_{3}\). This nomenclature refers to the signature of the fibration. For spacelike and timelike vacua, there are two distinct cases depending on \(\nu\): stretched (\(\nu^{2}>1\)) and squashed (\(\nu^{2}<1\)). Null WAdS\({}_{3}\) requires that \(\nu=1\), and there are two choices for the sign of the fiber. For our work, the relevant vacua are _spacelike_ and _timelike_ WAdS\({}_{3}\). Spacelike WAdS\({}_{3}\) is given by the metric \[ds^{2}=\frac{\ell^{2}}{\nu^{2}+3}\left(-\cosh^{2}\sigma\mathrm{d}\tau^{2}+ \mathrm{d}\sigma^{2}+\frac{4\nu^{2}}{\nu^{2}+3}(\mathrm{d}u-\sinh\sigma\mathrm{ d}\tau)^{2}\right)\, \tag{2.6}\] with \(\{\tau,\sigma,u\}\in(-\infty,\infty)\). In this coordinate system the structure of the fiber and the isometries of the vacua are manifest. Note that for \(\nu=1\) one recovers an AdS\({}_{3}\) space with \(SO(2,2)\) isometries.5 The warped black holes we will study here are obtained as quotients of this space, but this requires that \(\nu^{2}\geq 1\) in order to avoid closed timelike curves [8]. Footnote 5: Note that while being smoothly connected to WAdS\({}_{3}\), the AdS\({}_{3}\) vacuum and the locally AdS\({}_{3}\) backgrounds discussed above exist as classical solutions regardless of the value of \(\nu\). The timelike WAdS\({}_{3}\) metric is given by \[ds^{2}=-\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{r\big{(}(\nu^{2}+3)r+4\big{)}} -2\nu r\mathrm{d}t\mathrm{d}\phi+\frac{r}{4}\big{(}3(1-\nu^{2})r+4\big{)}\mathrm{d }\phi^{2}\, \tag{2.7}\] with \(\phi\sim\phi+2\pi\). These coordinates cover the global spacetime, and for \(\nu>1\), there are closed timelike curves at large \(r\). Alternatively, global timelike WAdS\({}_{3}\) can be obtained from (2.6) by taking \(u\to i\tau\), \(\tau\to iu\). It will appear when we review the properties of the vacuum state in the holographic picture. ### Warped black hole: canonical ensemble The first class of black hole solutions will be referred to as WAdS\({}_{3}\) black holes in the canonical ensemble. These appeared in [1, 2, 45, 46, 29] and were first studied holographically in [8]. 
The metric is given by \[ds^{2}=-N(r)^{2}\mathrm{d}t^{2}+\frac{\ell^{2}}{4R(r)^{2}N(r)^{2}}\mathrm{d}r^ {2}+R(r)^{2}\left(\mathrm{d}\theta-N^{\theta}(r)\mathrm{d}t\right)^{2}\, \tag{2.8}\] where we defined \[\begin{split} R(r)^{2}&=\frac{r}{4}\left(3(\nu^{2} -1)r+(\nu^{2}+3)(r_{+}+r_{-})-4\nu\sqrt{r_{+}r_{-}(\nu^{2}+3)}\right)\,\\ N(r)^{2}&=\frac{1}{4R(r)^{2}}(\nu^{2}+3)(r-r_{+})( r-r_{-})\,\\ N^{\theta}(r)&=\frac{2\nu r-\sqrt{r_{+}r_{-}(\nu^ {2}+3)}}{2R(r)^{2}}\.\end{split} \tag{2.9}\] The constants \(r_{\pm}\) determine the positions of the outer and inner horizons of the black hole. These solutions are obtained as a discrete quotient from the metric in (2.6), much like BTZ black holes are discrete quotients of global AdS\({}_{3}\). In fact, for \(\nu=1\) the metrics are locally AdS\({}_{3}\) and represent BTZ black holes, albeit in an unusual coordinate system. A difference, though, is that the global metric (2.6) is not recovered for any value of the black hole parameters [41]. However, for the choice \[r_{+}=-\frac{4i\ell}{\nu^{2}+3}\,\quad r_{-}=0\, \tag{2.10}\] the metric possesses enhanced symmetries (four Killing vectors forming the algebra of \(sl(2,\mathbb{R})\times u(1)\), instead of two generically). The metric is then complex, and the analytic continuation \(r\to ir\), \(t\to-it\) brings it to the global timelike WAdS\({}_{3}\) metric (2.7), which is viewed as the global vacuum [11].6 Footnote 6: Notice that the determination of the vacuum metric is ambiguous. First, \(r_{+}\) and \(r_{-}\) could be switched. Second, unlike in the AdS\({}_{3}\) situation, the choice (2.10) is not unique. For instance, symmetry enhancement from two to four Killing vectors occurs for \(r_{+}-r_{-}=-\frac{4i\ell}{3+\nu^{2}}\). These black holes satisfy the usual laws of thermodynamics, which we now review. The mass \(M^{\rm CE}\) and angular momentum \(J^{\rm CE}\) of the black hole are defined as conserved charges associated to \(\partial_{t}\) and \(\partial_{\theta}\) respectively, and are given by \[\begin{split} M^{\rm CE}&=\frac{\nu^{2}+3}{24\nu G_{3} \ell}\left((r_{+}+r_{-})\nu-\sqrt{(\nu^{2}+3)r_{+}r_{-}}\right)\,\\ J^{\rm CE}&=\frac{(\nu^{2}+3)}{96\nu G_{3}\ell} \left[\left((r_{+}+r_{-})\nu-\sqrt{(\nu^{2}+3)r_{+}r_{-}}\right)^{2}-\frac{5 \nu^{2}+3}{4}(r_{+}-r_{-})^{2}\right]\.\end{split} \tag{2.11}\] These values are tied to the theory the WBH belongs to: TMG in this case. They are also tied to the black hole background: this is why we are using the superscript "CE", which stands for canonical ensemble. The Wald entropy of the black hole (2.8) is given by [47, 48, 49, 8] \[S^{\rm CE}=\frac{\pi}{24\nu G_{3}}\left((9\nu^{2}+3)r_{+}-(\nu^{2}+3)r_{-}-4 \nu\sqrt{(\nu^{2}+3)r_{+}r_{-}}\right). \tag{2.12}\] The first law of black hole thermodynamics then reads \[dM^{\rm CE}=T^{\rm CE}dS^{\rm CE}+\Omega^{\rm CE}dJ^{\rm CE}\, \tag{2.13}\] where the Hawking temperature and angular velocity are given by \[T^{\rm CE} =\frac{\nu^{2}+3}{4\pi\ell}\frac{r_{+}-r_{-}}{2\nu r_{+}-\sqrt{( \nu^{2}+3)r_{+}r_{-}}}\,\] \[\Omega^{\rm CE} =\frac{2}{\left(2\nu r_{+}-\sqrt{(\nu^{2}+3)r_{+}r_{-}}\right)}. \tag{2.14}\] The thermodynamic behaviour of a WBH can be accounted for holographically by making use of the symmetries of its semi-classical phase space. The key observations are as follows. A phase space accommodating the WBH solutions (but _not_ the global timelike WAdS\({}_{3}\) vacuum) was proposed and further studied in [29, 30, 31, 32]. 
Its symmetry algebra is generated by the following asymptotic Killing vectors, \[\ell_{n}=e^{in\theta}\partial_{\theta}-inre^{in\theta}\partial_{r}\,\qquad p_{n}=e^{in\theta} \partial_{t}\, \tag{2.15}\] with \(n\in\mathbb{Z}\). To each of these vectors we can associate a corresponding conserved charge, which we denote as \(L_{n}\) and \(P_{n}\). In particular, the zero modes are related to the mass and angular momentum in (2.11) via \[P_{0}=M^{\rm CE}\,\qquad L_{0}=-J^{\rm CE}. \tag{2.16}\] The algebra for the charges is given by \[\begin{split}[L_{n},L_{m}]&=(n-m)L_{n+m}+\frac{c}{12}(n ^{3}-n)\delta_{n+m}\,\\ [P_{n},P_{m}]&=\frac{\mathsf{k}}{2}n\delta_{n+m}\,\\ [L_{n},P_{m}]&=-mP_{m+n}\.\end{split} \tag{2.17}\] This is a Virasoro-Kac-Moody algebra. The central extensions appearing here that are appropriate for TMG are given by [30] \[c=\frac{(5\nu^{2}+3)}{\nu(\nu^{2}+3)}\frac{\ell}{G_{3}},\qquad\mathsf{k}=- \frac{\nu^{2}+3}{6\nu}\frac{1}{\ell G_{3}}. \tag{2.18}\] In the same way that the Virasoro algebra incarnates the symmetries of a CFT\({}_{2}\), the algebra (2.17) represents those of a warped CFT [11], that is a two-dimensional field theory with chiral scaling \[\theta\to f(\theta)\,\qquad t\to t+g(\theta). \tag{2.19}\] Here \(f(\theta)\) is a diffeomorphism and \(g(\theta)\) an arbitrary function. This suggests that a gravity theory with WAdS\({}_{3}\) boundary conditions is dual to a WCFT, whose intrinsic and holographic properties have been explored in various works [50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 11]. WCFTs are able to capture certain properties of WAdS\({}_{3}\) backgrounds [7, 11] and of higher dimensional spacetimes [65, 66, 4]. Here we will highlight how the black hole mechanics can be reproduced by the thermodynamic behaviour of a WCFT. Using an adapted form of the Cardy formula, one can show that at high-temperature the entropy of a WCFT is given by [11] \[S^{\text{WCFT-CE}}=-\frac{4\pi iP_{0}P_{0}^{\text{vac}}}{\mathsf{k}}+4\pi\sqrt{ -\left(L_{0}^{\text{vac}}-\frac{(P_{0}^{\text{vac}})^{2}}{\mathsf{k}}\right) \left(L_{0}-\frac{P_{0}^{2}}{\mathsf{k}}\right)}. \tag{2.20}\] In order to compare this expression with \(S^{\text{CE}}\) in (2.12), we need to provide the values of \(L_{0}^{\text{vac}}\) and \(P_{0}^{\text{vac}}\), i.e., determine the vacuum charges. One key obstacle, relative to a CFT\({}_{2}\), is that for a WCFT the vacuum charges are not fully specified by the symmetries alone. Under the assumption that the vacuum state is normalizable and invariant under \(sl(2,\mathbb{R})\times u(1)\), one can show that \[L_{0}^{\text{vac}}=-\frac{c}{24}+\frac{(P_{0}^{\text{vac}})^{2}}{\mathsf{k}}. \tag{2.21}\] It is interesting to note that (2.10) satisfies this relation. Therefore, taking (2.10) as a choice of vacuum state, and using (2.11), we would have \[L_{0}^{\rm vac}=-\frac{1}{24\nu}\frac{\ell}{G_{3}}\,\qquad P_{0}^{\rm vac}=- \frac{i}{6}\frac{1}{G_{3}}. \tag{2.22}\] It is straightforward to check that \(S^{\rm CE}=S^{\rm WCFT\text{-}CE}\), provided (2.18), (2.16) and (2.22). The values in (2.18) and (2.22) show a persistent feature of holographic WCFTs: they typically have negative level and \(P_{0}^{\rm vac}\) is purely imaginary. This makes the theories non-unitary; still, these features are manageable, rich, and interesting as they lead to intriguing synergy with black holes [26, 28]. 
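Since the algebra behind the statement \(S^{\rm CE}=S^{\rm WCFT\text{-}CE}\) is somewhat lengthy, we record a minimal symbolic sketch of the check (in Python/sympy; the variable names are ours and chosen only for illustration). It implements (2.11), (2.12), (2.16), (2.18), (2.20) and (2.22) and verifies that the two entropies agree.

```python
# Minimal sympy sketch of the check S^CE = S^WCFT-CE, using (2.11), (2.12),
# (2.16), (2.18), (2.20) and (2.22); symbol names here are ours, not the paper's.
import sympy as sp

nu, ell, G3, rm, d = sp.symbols('nu ell G3 rm d', positive=True)
rp = rm + d  # parametrize r_+ = r_- + d with d > 0, so r_+ >= r_-

# Central extensions (2.18)
c = (5*nu**2 + 3)/(nu*(nu**2 + 3)) * ell/G3
k = -(nu**2 + 3)/(6*nu) * 1/(ell*G3)

# Conserved charges (2.11) and the identification (2.16): P0 = M, L0 = -J
X = (rp + rm)*nu - sp.sqrt((nu**2 + 3)*rp*rm)
M = (nu**2 + 3)/(24*nu*G3*ell) * X
J = (nu**2 + 3)/(96*nu*G3*ell) * (X**2 - sp.Rational(1, 4)*(5*nu**2 + 3)*(rp - rm)**2)
P0, L0 = M, -J

# Vacuum charges (2.22); note that P0vac is purely imaginary
P0vac = -sp.I/(6*G3)
L0vac = -ell/(24*nu*G3)

# WCFT Cardy-like entropy (2.20) and Wald entropy (2.12)
S_wcft = (-4*sp.pi*sp.I*P0*P0vac/k
          + 4*sp.pi*sp.sqrt(-(L0vac - P0vac**2/k)*(L0 - P0**2/k)))
S_wald = sp.pi/(24*nu*G3)*((9*nu**2 + 3)*rp - (nu**2 + 3)*rm
                           - 4*nu*sp.sqrt((nu**2 + 3)*rp*rm))

# Numerical spot-check at a point with nu > 1: the difference vanishes
vals = {nu: sp.Rational(7, 3), ell: sp.Rational(5, 2), G3: sp.Rational(1, 4),
        rm: sp.Rational(6, 5), d: sp.Rational(9, 5)}
assert sp.Abs((S_wcft - S_wald).subs(vals).evalf()) < 1e-9
```

The same structure of check will be repeated, in the quadratic ensemble variables, when we compare (2.28) with the WCFT entropy formula below.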
### Warped black hole: quadratic ensemble Another family of spacetimes we will be considering is that of the so-called warped BTZ metrics, also known as WBHs in the quadratic ensemble, a nomenclature that will become clear below. Their line element takes the same form as the canonical ensemble metric (2.8), \[ds^{2}=-N_{\text{\tiny QE}}(\mathpzc{r})^{2}\mathrm{d}\mathpzc{t}^{2}+\frac{\mathpzc{r}^{2}}{R_{\text{\tiny QE}}(\mathpzc{r})^{2}N_{\text{\tiny QE}}(\mathpzc{r})^{2}}\mathrm{d}\mathpzc{r}^{2}+R_{\text{\tiny QE}}(\mathpzc{r})^{2}\left(\mathrm{d}\varphi+N^{\varphi}(\mathpzc{r})\mathrm{d}\mathpzc{t}\right)^{2}\, \tag{2.23}\] where the metric functions \(N_{\text{\tiny QE}}(\mathpzc{r})^{2}\), \(R_{\text{\tiny QE}}(\mathpzc{r})^{2}\) and \(N^{\varphi}(\mathpzc{r})\) are recorded in (2.24). They depend on the outer and inner horizon parameters \(\mathpzc{r}_{\pm}\), on the warping parameter \(H\), and on the effective AdS\({}_{3}\) radius \(L\); the relation of \(L\) and \(H\) to \(\ell\) and \(\nu\) is fixed in (2.25). For \(H=0\) the warping is turned off and the solution reduces to the BTZ black hole recorded in (2.26). The mass and angular momentum of this black hole are given by \[\begin{split} M^{\text{\tiny QE}}&=\frac{(3-4H^{2})( \mathpzc{r}_{+}^{2}+\mathpzc{r}_{-}^{2})-2\mathpzc{r}_{-}\mathpzc{r}_{+}}{24G _{3}L\sqrt{1-2H^{2}}}\,\\ J^{\text{\tiny QE}}&=\frac{\mathpzc{r}_{+}^{2}+ \mathpzc{r}_{-}^{2}-2(3-4H^{2})\mathpzc{r}_{+}\mathpzc{r}_{-}}{24G_{3}L \sqrt{1-2H^{2}}}\,\end{split} \tag{2.27}\] and the Wald entropy is \[S^{\text{\tiny QE}}=\frac{\pi}{6G_{3}\sqrt{1-2H^{2}}}((3-4H^{2})\mathpzc{r}_{ +}-\mathpzc{r}_{-}). \tag{2.28}\] Here the superscript "QE" denotes quadratic ensemble. As expected these quantities satisfy a first law, which reads \[dM^{\text{\tiny QE}}=T^{\text{\tiny QE}}dS^{\text{\tiny QE}}+\Omega^{\text {\tiny QE}}dJ^{\text{\tiny QE}}\, \tag{2.29}\] where the Hawking temperature and angular velocity are \[T^{\text{\tiny QE}}=\frac{\mathpzc{r}_{+}^{2}-\mathpzc{r}_{-}^{2}}{2\pi L \mathpzc{r}_{+}}\,\qquad\Omega^{\text{\tiny QE}}=-\frac{\mathpzc{r}_{-}}{ \mathpzc{r}_{+}}. \tag{2.30}\] As we did in Sec. 2.1, we will next review how to capture the thermodynamic properties of the quadratic ensemble solution holographically. Starting with the phase space, boundary conditions containing the WBH in the quadratic ensemble, and their symmetry algebra, have been identified in [35]. The asymptotic vector fields that enter in this construction, to leading order, are given by \[\tilde{\ell}_{n}=e^{inx^{+}}(\partial_{+}-\frac{1}{2}in\mathpzc{r}\partial_{r })\,\qquad\tilde{p}_{n}=e^{inx^{+}}\partial_{-}\, \tag{2.31}\] with \(x^{\pm}=\frac{\mathpzc{t}}{L}\pm\varphi\) and \(n\in\mathbb{Z}\). Each of these generators has an associated conserved charge, which we coin \(\mathscr{L}_{n}\) and \(\mathscr{P}_{n}\). 
The relation of the zero modes (\(n=0\)) to the mass and angular momentum is \[\begin{split} M^{\text{\tiny QE}}&=\frac{1}{L}( \mathscr{L}_{0}+\mathscr{P}_{0})\,\\ J^{\text{\tiny QE}}&=\mathscr{L}_{0}-\mathscr{P}_{0} \.\end{split} \tag{2.32}\] The corresponding charge algebra is then found to be \[\begin{split}[\mathscr{L}_{n},\mathscr{L}_{m}]&=(n-m )\mathscr{L}_{n+m}+\frac{c}{12}\left(n^{3}-n\right)\delta_{n+m}\,\\ [\mathscr{L}_{n},\mathscr{P}_{m}]&=-m\mathscr{P}_{m+ n}\,\\ [\mathscr{P}_{n},\mathscr{P}_{m}]&=-2n\mathscr{P}_{0 }\delta_{m+n}\.\end{split} \tag{2.33}\] with central charge \(c\) given by (2.18), which in terms of the variables used here is \[c=\frac{2(1-H^{2})}{\sqrt{1-2H^{2}}}\frac{L}{G_{3}}. \tag{2.34}\] The algebra (2.33) resembles a Virasoro-Kac-Moody algebra, but with a key twist. The level of the affine \(u(1)\) generators is controlled by \(\mathscr{P}_{0}\): this makes the algebra non-local. Another curious, and useful, observation is that the algebras (2.17) and (2.33) are related through the redefinition \[\mathscr{L}_{n}=L_{n}-\frac{2}{\mathsf{k}}P_{0}P_{n}+\frac{1}{ \mathsf{k}}P_{0}^{2}\delta_{n}\,\quad\mathscr{P}_{n}=-\frac{2}{\mathsf{k}}P_{0}P_{n}+\frac{1}{ \mathsf{k}}P_{0}^{2}\delta_{n}. \tag{2.35}\] This transformation is the reason why we refer to solutions in this classical phase space as being in a "quadratic ensemble." Despite the undesirable non-local aspects, it is possible to extract information about the density of states for Hilbert spaces described by (2.33). As shown in [11], starting from the partition function \[Z^{\textsc{wCFT-qE}}(\beta_{R},\beta_{L})=\mathrm{Tr}\ e^{-\beta _{R}\mathscr{P}_{0}-\beta_{L}\mathscr{L}_{0}}\, \tag{2.36}\] it is possible to extract a universal behaviour at high temperatures, analogous to the Cardy behaviour in a CFT\({}_{2}\). More concretely, the relation (2.35) allows one to relate properties of (2.36) to those of a regular WCFT, which leads to the following entropy formula \[S^{\textsc{wCFT-qE}}=4\pi\sqrt{-\mathscr{P}_{0}^{\textsc{vac}} \mathscr{P}_{0}}+4\pi\sqrt{-\mathscr{L}_{0}^{\textsc{vac}}\mathscr{L}_{0}}. \tag{2.37}\] Despite the absence of full conformal invariance of the system, one cannot help but notice the similarity between (2.37) and the Cardy formula of a CFT\({}_{2}\); but we stress that \(\mathscr{P}_{0}^{\textsc{vac}}\) and \(\mathscr{L}_{0}^{\textsc{vac}}\) are not fixed by symmetries. One simple, and interesting, check is to notice that \(S^{\textsc{wCFT-qE}}\) is equivalent to \(S^{\textsc{wCFT-ce}}\) in (2.20), due to (2.35). We can now proceed to compare (2.37) to the Wald entropy (2.28). The key is to choose a vacuum state. We will use (2.10) and (2.35); with this we infer that in the quadratic ensemble the vacuum charges are \[\mathscr{L}_{0}^{\textsc{vac}}=-\frac{c}{24}\,\qquad\mathscr{P}_{0}^{ \textsc{vac}}=\frac{1}{36\mathsf{k}G_{3}^{2}}\, \tag{2.38}\] where \(c\) and \(\mathsf{k}\) are defined in (2.18). With this choice, and using (2.32), it is simple to check that \(S^{\textsc{wCFT-qE}}=S^{\textsc{qE}}\). In this comparison, it is also useful to report how the potentials (2.30) are related to WCFT variables. We have \[\frac{1}{T^{\rm QE}}=\frac{1}{2}(\beta_{R}+\beta_{L})\,\qquad\frac{\Omega^{\rm QE }}{T^{\rm QE}}=\frac{1}{2}(\beta_{L}-\beta_{R})\, \tag{2.39}\] where the left- and right-moving potentials have a very simple expression, \[\beta_{L}=\frac{2\pi L}{\mathpzc{r}_{+}-\mathpzc{r}_{-}}\,\qquad\beta_{R}= \frac{2\pi L}{\mathpzc{r}_{+}+\mathpzc{r}_{-}}. 
\tag{2.40}\] ### Ties between canonical and quadratic ensemble Until now we have been treating (2.8) and (2.23) as two distinct black hole solutions of TMG. Here we will review how they are intimately related, and how this fits with the relation among the generators in (2.35). The basic observation is that the metrics are related by the following change of coordinates7 Footnote 7: The radial component of the diffeomorphism is such that the radial functions in (2.9) and (2.24) report the same value, i.e., \(R(r)^{2}=R_{\rm QE}(\mathpzc{r})^{2}\). For more details see [67]. \[\begin{split}\frac{\mathpzc{t}}{L}&=-\frac{\mathsf{ k}}{4M^{\rm CE}}t\,\\ \varphi&=\theta+\frac{\mathsf{k}}{4M^{\rm CE}}t\,\\ \mathpzc{r}^{2}&=\frac{(\nu^{2}+3)}{4\nu^{2}}\left(R (r)^{2}-\frac{3}{4}(\nu^{2}-1)(r-r_{+})(r-r_{-})\right)\,\end{split} \tag{2.41}\] where \[M^{\rm CE}=-\frac{\mathsf{k}}{4}\left((r_{+}+r_{-})\nu-\sqrt{(\nu^{2}+3)r_{+} r_{-}}\right)\, \tag{2.42}\] which is just a re-writing of (2.11) in terms of the level \(\mathsf{k}\). Since the mass \(M^{\rm CE}\) enters here, this is a state-dependent transformation between the two solutions. The coordinate transformation also leads to the relation (2.35) between the generators. We also note that \(S^{\rm CE}=S^{\rm QE}\), that is, the entropies of the CE and QE WBH match; this is expected from the perspective of the WCFT, and it is also expected since the Wald entropy is diffeomorphism invariant. It will be useful to record for later derivations the relations between the thermodynamic potentials, which read \[\beta^{\rm CE}\Omega^{\rm CE}=\beta_{L}\,\qquad\beta^{\rm CE}=-\frac{2\pi}{3 \mathsf{k}G_{3}}\left(1+\frac{\beta_{R}}{\beta_{L}}\right). \tag{2.43}\] Here \(\beta^{\rm CE}=1/T^{\rm CE}\) and the potentials are defined in (2.14) and (2.40) for each ensemble. At first glance the diffeomorphism (2.41) seems trivial, and would indicate that the dual description in the canonical and quadratic ensemble is simply a matter of choice. However, we have also reviewed that this transformation gives rise to a non-local algebra in one case, which is a dramatic difference. In the following sections our task will be to analyse and contrast the solutions starting from their near-extremal limit. With this we aim to decode differences and similarities between these two ensembles. ## 3 Near-extremal warped black holes: canonical ensemble Our analysis starts with the WBH solution, cast in the canonical ensemble (CE). Building on the general features reviewed in the previous section, we will focus on three aspects of the solution near extremality: the response of thermodynamic quantities, the shape of the near-horizon geometry, and the scattering of massive scalar fields. For BTZ black holes, these aspects have been addressed in various works including [68, 69, 70, 71]. ### Thermodynamics An important aspect of our work is to consider the extremal version of the metric (2.8), and look at small deviations away from it. In the following we will introduce these concepts for the CE warped black hole and define the concept of "near-extremality" from the thermodynamic perspective. The extremal black hole is defined as the solution of (2.8) for which \[\text{Extremality:}\quad r_{+}=r_{-}\equiv r_{0}. \tag{3.1}\] It is important to remark that the potentials (2.14) are only well defined in this limit if in addition \(\nu\neq 1\). This means that the extremal CE solution is not smoothly connected to the extremal BTZ black hole. 
For this reason, we stress that in the following equations we always assume \(\nu>1\). At extremality, the potentials (2.14) take the values \[\begin{split}&\left.T^{\text{CE}}\right|_{r_{\pm}=r_{0}}=0\,\\ &\left.\Omega^{\text{CE}}\right|_{r_{\pm}=r_{0}}=\frac{2}{\left(2 \nu-\sqrt{\nu^{2}+3}\right)r_{0}}\equiv\Omega^{\text{CE}}_{\text{ext}}\,\end{split} \tag{3.2}\] while the charges (2.11) become \[\begin{split} M^{\text{CE}}_{\text{ext}}&\equiv M^{ \text{CE}}\Big{|}_{r_{\pm}=r_{0}}=\frac{\nu^{2}+3}{12\nu\ell G_{3}}\frac{1}{ \Omega^{\text{CE}}_{\text{ext}}}\,\\ J^{\text{CE}}_{\text{ext}}&\equiv J^{\text{CE}} \Big{|}_{r_{\pm}=r_{0}}=\frac{\nu^{2}+3}{24\nu\ell G_{3}}\frac{1}{(\Omega^{ \text{CE}}_{\text{ext}})^{2}}\,\\ S^{\text{CE}}_{\text{ext}}&\equiv S^{\text{CE}} \Big{|}_{r_{\pm}=r_{0}}=\frac{\pi}{3G_{3}}\frac{1}{\Omega^{\text{CE}}_{\text{ ext}}}\.\end{split} \tag{3.3}\] Near extremality is a small deviation from extremality leading to a non-vanishing temperature, while keeping the angular momentum \(J^{\text{CE}}_{\text{ext}}\) fixed.8 This can be achieved by modifying (3.1) as Footnote 8: In many setups, such as Reissner-Nordstrom or Myers-Perry black holes, the prescription is to keep the conserved charge that controls the AdS\({}_{2}\) radius fixed, for reasons that will be more transparent in Sec. 5.2. In three dimensions this is not necessary, but it facilitates the analysis to keep one of the conserved charges fixed. \[\text{Near-extremality:}\quad r_{+}=r_{0}+\epsilon\,\mathfrak{d}\,\quad r_{-}=r_{0}- \epsilon\,\mathfrak{d}. \tag{3.4}\] Here \(\epsilon\ll 1\) is a small parameter that introduces the deviation from extremality; \(\mathfrak{d}\) is a parameter that will remain fixed as one takes \(\epsilon\to 0\), and it is chosen to keep \(J^{\text{CE}}_{\text{ext}}\) fixed at leading order in \(\epsilon\). The leading order response in \(\epsilon\) of the temperature is linear, and reads \[T^{\text{CE}}=\frac{\nu^{2}+3}{4\pi\ell}\Omega^{\text{CE}}_{\text{ext}}\, \mathfrak{d}\,\epsilon+\mathcal{O}(\epsilon^{2}). \tag{3.5}\] The mass, on the other hand, increases quadratically, \[\Delta E^{\text{CE}}=M^{\text{CE}}-M^{\text{CE}}_{\text{ext}}=\frac{(T^{\text {CE}})^{2}}{M^{\text{CE}}_{\text{gap}}}+\mathcal{O}(\epsilon^{3})\, \tag{3.6}\] where the commonly coined mass gap [72, 73, 74] is given by \[M^{\text{CE}}_{\text{gap}}\equiv\frac{6G_{3}}{\pi^{2}\ell}\,\frac{\nu(3+\nu^{ 2})}{(3+5\nu^{2})}\Omega^{\text{CE}}_{\text{ext}}=\frac{3}{\pi^{2}c}\sqrt{ \frac{-\mathsf{k}}{J^{\text{CE}}_{\text{ext}}}}. \tag{3.7}\] In the last equality we used (2.18) and (3.3) to cast the mass gap in terms of the central extensions that enter in the holographic description. Note that since \(\nu>1\) (\(\Omega^{\text{CE}}_{\text{ext}}>0\)) the mass gap is always positive.9 Footnote 9: Also recall that \(\mathsf{k}<0\) (2.18), so the mass gap is real. It then also follows that the entropy responds linearly in temperature as we deviate from extremality, with the slope inversely proportional to the mass gap: \[S^{\text{CE}}=S^{\text{CE}}_{\text{ext}}+2\frac{T^{\text{CE}}}{M^{\text{CE}}_{ \text{gap}}}+\mathcal{O}(\epsilon^{2}). \tag{3.8}\] This is the universal response of the entropy expected on general grounds: namely, that the Wald entropy complies with a first law of thermodynamics and that extremality only involves two coincident horizons. 
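To spell out this logic, note that at fixed angular momentum the first law (2.13) reduces to \(dM^{\rm CE}=T^{\rm CE}dS^{\rm CE}\). Inserting the linear entropy response (3.8) and integrating from zero temperature gives \[\Delta E^{\rm CE}=\int_{0}^{T^{\rm CE}}T\,\mathrm{d}S^{\rm CE}=\frac{2}{M^{\rm CE}_{\rm gap}}\int_{0}^{T^{\rm CE}}T\,\mathrm{d}T=\frac{(T^{\rm CE})^{2}}{M^{\rm CE}_{\rm gap}}\,\] which is precisely the quadratic growth of the mass in (3.6); nothing beyond the first law and two coincident horizons enters this short derivation.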
In our subsequent sections we will compare this analysis to the QE warped black hole, provide a derivation of the entropy via a holographic analysis, and match it to near-extremal limits of WCFTs. ### Decoupling limit In this portion we will report on the geometrical effects of the near-extremal limit. In this context a WBH behaves similarly to its AdS\({}_{3}\) counterparts: at extremality the near-horizon geometry develops an AdS\({}_{2}\) throat. Here we show this limiting procedure explicitly, and also incorporate the near-extremal contributions. This will quantify the notion of near-AdS\({}_{2}\) for the WBH in the canonical ensemble. As is common practice, we start by introducing a coordinate transformation tailored to zoom into the horizon of the black hole. First, in the context of the near-extremal limit (3.4), we introduce a new coordinate system \((\rho,\tau,\psi)\) which redefines the coordinates \((r,t,\theta)\) used in (2.8). The transformation reads \[\begin{split} r&=r_{0}+\epsilon\left(e^{\rho/\ell_ {2}}+\frac{\mathfrak{d}^{2}}{4}e^{-\rho/\ell_{2}}\right)\,\\ t&=2R_{0}\frac{\ell_{2}}{\ell}\frac{\tau}{ \epsilon}\,\\ \theta&=\psi+2\frac{\ell_{2}}{\ell}\frac{\tau}{ \epsilon}\,\end{split} \tag{3.9}\] where \(\epsilon\) and \(\mathfrak{d}\) are defined around (3.4). In this context, \(\epsilon\to 0\) implements extremality, and \(\epsilon\) is also the decoupling parameter that takes us to the near-horizon region, i.e., \(r\to r_{0}\); a finite value of \(\mathfrak{d}\) quantifies a deviation away from extremality. We have also introduced two constants in (3.9) which are defined as \[\ell_{2}^{2}\equiv\frac{\ell^{2}}{\nu^{2}+3}\,\quad R_{0}\equiv R(r_{0}) \Big{|}_{r_{\pm}=r_{0}}=\frac{r_{0}}{2}\left(2\nu-\sqrt{\nu^{2}+3}\right). \tag{3.10}\] As we will see momentarily, \(\ell_{2}\) is the AdS\({}_{2}\) radius. It is also worth remarking that \(\Omega_{\text{ext}}^{\text{CE}}=R_{0}^{-1}\), which is just a coincidence for the CE warped black hole. More significantly, \(R_{0}\) is the size of the extremal horizon, and controls the extremal Wald entropy in (3.3). The near-horizon region is defined by using (3.9) on (2.8) and taking the limit \(\epsilon\to 0\), while keeping all other parameters fixed. The resulting line element is \[\begin{split} ds_{\text{\tiny CE}}^{2}=&\,\mathrm{d} \rho^{2}-e^{2\rho/\ell_{2}}\left(1-\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{ 2}}\right)^{2}\mathrm{d}\tau^{2}\\ &+R_{0}^{2}\left(\mathrm{d}\psi+\frac{2\nu}{R_{0}}\frac{\ell_{2} }{\ell}e^{\rho/\ell_{2}}\left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}} \right)\mathrm{d}\tau\right)^{2}+\mathcal{O}(\epsilon)\.\end{split} \tag{3.11}\] As expected the result is finite, resulting in a non-degenerate metric, referred to as _self-dual warped AdS\({}_{3}\) space_[75, 8]. This is the warped version of the near-horizon geometry of extremal BTZ black holes (self-dual AdS\({}_{3}\) space [76]), and a constant polar angle section of the NHEK geometry [12, 13]. The first line reflects that the near-horizon geometry contains an AdS\({}_{2}\) factor: for \(\mathfrak{d}=0\), it is AdS\({}_{2}\) in Poincare coordinates, while for \(\mathfrak{d}\neq 0\) the metric is locally AdS\({}_{2}\).10 The latter is usually coined "near-AdS\({}_{2}\)". The second line reflects that the total spacetime is a fibration of a circle over AdS\({}_{2}\). The resulting local symmetry of the near-horizon region is therefore \(sl(2,\mathbb{R})\times u(1)\). 
Footnote 10: More specifically it is a Rindler (thermal) patch of AdS\({}_{2}\), where \(\mathfrak{d}\) controls the acceleration of the observer. This geometry is also at times referred to as an ”AdS\({}_{2}\) black hole.” As we further explore the holographic properties of this black hole, it will also be important to quantify how the solution responds to first order away from extremality. With some foresight into the subsequent sections, we will parametrize the first-order response in \(\epsilon\) as \[ds_{\text{\tiny CE}}^{2}=\left(\bar{g}_{\mu\nu}+\epsilon\,h_{\mu\nu}\right) \mathrm{d}x^{\mu}\mathrm{d}x^{\nu}+\left(R_{0}^{2}+\epsilon\mathscr{Y}\right) \left(\mathrm{d}\psi+(\bar{A}_{\mu}+\epsilon\mathscr{A}_{\mu})\mathrm{d}x^{ \mu}\right)^{2}+\cdots\, \tag{3.12}\] that is, there is a response from the AdS\({}_{2}\) metric (\(h_{\mu\nu}\)), the size of the \(U(1)\) circle (\(\mathscr{Y}\)), and the fiber (\(\mathscr{A}_{\mu}\)). Here the variables with a bar are those in (3.11): \(\bar{g}_{\mu\nu}\) is the locally AdS\({}_{2}\) background, and \(\bar{A}_{\mu}\) is the background component of the fibration. It is straightforward to read off these responses by keeping the first correction in \(\epsilon\) of the coordinate transformation, which gives \[\begin{split}\mathscr{Y}&=2\nu R_{0}\,e^{\rho/\ell _{2}}\left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right)\,\\ \mathscr{A}&=-\frac{\ell_{2}}{2\ell R_{0}^{2}}(3+5 \nu^{2})\,e^{2\rho/\ell_{2}}\left(1+\frac{\mathfrak{d}^{4}}{16}e^{-4\rho/\ell _{2}}\right)\mathrm{d}\tau-\frac{\mathfrak{d}^{2}}{R_{0}r_{0}}\frac{\nu\ell_{ 2}}{\ell}\mathrm{d}\tau\,\\ h_{\tau\tau}&=\frac{2\nu}{R_{0}}e^{3\rho/\ell_{2}} \left(1-\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right)^{2}\left(1+\frac{ \mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right),\quad h_{\rho\rho}=h_{\tau\rho }=0\.\end{split} \tag{3.13}\] As in many other black hole backgrounds, the responses grow rapidly as one approaches the boundary of AdS\({}_{2}\) at \(\rho\to\infty\). This reflects that deviations away from extremality should be interpreted holographically as irrelevant deformations, and we will comment more on this in Sec. 5. ### Two-point function The behaviour of probes, and in particular their two-point functions, is a useful way to encode properties of a black hole. Here we will analyse a massive probe around the CE warped black hole, focusing on its near-extremal limit. The aim is to contrast the results against the analogous treatment for the BTZ black hole and the QE warped black hole; this comparison will be discussed in Sec. 6.1. Our derivations follow the analysis in, e.g., [71, 77]. We start by solving the Klein-Gordon equation of a scalar field with mass \(m\) in the CE black hole background, \[\nabla^{2}\Phi(t,r,\theta)=m^{2}\Phi(t,r,\theta)\, \tag{3.14}\] where \(\nabla^{2}\) is the Laplace-Beltrami operator for the metric (2.8). 
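For concreteness, recall the standard form of this operator for any metric \(g_{MN}\): \[\nabla^{2}\Phi=\frac{1}{\sqrt{-g}}\partial_{M}\left(\sqrt{-g}\,g^{MN}\partial_{N}\Phi\right)\,\] which is the expression used below when separating variables and extracting the radial wave equation.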
Using a separable ansatz for \(\Phi\), and further decomposing it into Fourier modes, we will write \[\Phi(t,r,\theta)=\sum_{k}\int\mathrm{d}\omega\,e^{-i\frac{\omega}{\ell}t+ik \theta}\Psi(r)\, \tag{3.15}\] for which the wave equation then reads \[\begin{split}\frac{\partial}{\partial r}\Big{(}(r-r_{+})(r-r_{- })&\frac{\partial}{\partial r}\Big{)}\Psi(r)\\ &+\frac{1}{\nu^{2}+3}\left(\frac{1}{N(r)^{2}}\left(\omega-N^{ \theta}(r)\ell k\right)^{2}-\frac{\ell^{2}k^{2}}{R(r)^{2}}\right)\Psi(r)= \frac{\ell^{2}m^{2}}{\nu^{2}+3}\Psi(r)\.\end{split} \tag{3.16}\] The functions \(N(r)^{2}\), \(R(r)^{2}\), and \(N^{\theta}(r)\) are defined in (2.8). Note that we have chosen to normalise time in (3.15) with respect to \(\ell\), which makes \(\omega\) dimensionless. Our main task in the following is to extract the two-point function by solving (3.16).11 We are mainly interested in the behaviour near extremality, which implies that we are exploring the low-temperature and low-frequency limit of the correlation function. In this context the quantity that is interesting to report on is the relation between the two-point function evaluated in the UV region (\(r\to\infty\)) and the one evaluated in the IR region (\(r\to r_{+}\)). That is, the relation between the two-point function evaluated near the WAdS\({}_{3}\) boundary and the one in the near-AdS\({}_{2}\) region in Sec. 3.2. Footnote 11: In our setups, a two-point function is equivalent to a greybody factor (up to an overall normalization). To implement the near-extremal limit we will introduce variables very similar to those in Sec. 3.2, with some small adjustments to avoid clutter. We define the dimensionless parameters \[x\equiv\frac{r-r_{+}}{r_{+}}\,\quad\tau_{H}\equiv\frac{r_{+}-r_{-}}{r_{+}}. \tag{3.17}\] Notice that near-extremality, as defined in (3.4), implies that \(\tau_{H}\sim\epsilon\ll 1\). In terms of these variables, (3.16) becomes \[\partial_{x}(x(x+\tau_{H})\partial_{x})\Psi(x)-\frac{4R_{\tau_{H}}^ {2}}{\tau_{H}r_{+}^{2}}\frac{\ell_{2}^{4}}{\ell^{4}}\left(\omega-\frac{k\ell}{ R_{\tau_{H}}}-\frac{\nu r_{+}}{R_{\tau_{H}}}\tau_{H}\omega\right)^{2}\frac{1}{(x+ \tau_{H})}\Psi(x)\\ +\frac{4R_{\tau_{H}}^{2}}{\tau_{H}r_{+}^{2}}\frac{\ell_{2}^{4}}{ \ell^{4}}\left(\omega-\frac{k\ell}{R_{\tau_{H}}}\right)^{2}\frac{1}{x}\Psi(x)- \ell_{2}^{2}m_{\rm CE}^{2}\Psi(x)=0. \tag{3.18}\] where \[R_{\tau_{H}}\equiv\frac{r_{+}}{2}\left(2\nu-\sqrt{(\nu^{2}+3)(1-\tau_{H})} \right)\, \tag{3.19}\] which is the non-extremal version of (3.10), and \[m_{\rm CE}^{2}=m^{2}+\frac{3\ell_{2}^{2}}{\ell^{4}}\left(1-\nu^{2}\right) \omega^{2}\, \tag{3.20}\] with \(\ell_{2}\) given by (3.10). \(m_{\rm CE}\) is an effective mass with a non-trivial frequency dependence--this is reminiscent of Kerr/CFT [77]. Note that in the BTZ limit, where \(\nu=1\), the frequency dependence drops out of (3.20). It is also interesting to note that \(m_{\rm CE}\) enters in (3.18) measured in units of the AdS\({}_{2}\) radius, and not AdS\({}_{3}\). To extract the two-point function, it is common to divide the wave equation into two zones, \[\begin{split}\textbf{Far region:}\qquad x\gg\tau_{H}\,\\ \textbf{Near region:}\qquad x\ll 1\,\end{split} \tag{3.21}\] as one takes \(\tau_{H}\to 0\). The **far zone** extends to the asymptotically warped AdS\({}_{3}\) portion of the geometry, far from the horizon of the black hole. The **near zone** covers the region close to the horizon and, near extremality, it corresponds to the near-AdS\({}_{2}\) portion of the geometry described in Sec. 3.2. 
In the near-extremal limit, these two regions overlap at \[\textbf{Matching region:}\qquad 1\gg x\gg\tau_{H}. \tag{3.22}\] As is common in this sort of analysis, one solves the wave equation separately in the far and near regions, and then glues the solutions in the matching region. This gives a connection between the correlation functions in the UV (far) and IR (near) regimes. For AdS\({}_{3}\) and WAdS\({}_{3}\), this matching procedure is very simple to implement. The reason is that the singularity structure in \(x\) of the Klein-Gordon operator on (W)AdS\({}_{3}\) and AdS\({}_{2}\) is exactly the same. The difference between the near region and the whole geometry is the behaviour of the coefficients governing the poles. That is, the radial wave equation in the far, near and matching regions has the general structure \[\partial_{x}(x(x+\tau_{H})\partial_{x})\Psi(x)-\frac{\mathsf{a}(\omega,k)}{(x+ \tau_{H})}\Psi(x)+\frac{\mathsf{b}(\omega,k)}{x}\Psi(x)-\ell_{2}^{2}m_{\rm CE}^ {2}\Psi(x)=0\, \tag{3.23}\] which is manifest in (3.18). The difference between the regions arises from the frequency dependence of \(\mathsf{a}(\omega,k)\) and \(\mathsf{b}(\omega,k)\); this reflects the details of an AdS\({}_{2}\) background versus (W)AdS\({}_{3}\), which we will discuss in detail below.12 The differential equation (3.23) can be solved exactly, and its solutions are governed by hypergeometric functions. Footnote 12: It is important to stress that this is due to a local \(sl(2,\mathbb{R})\) factor present in all of these spaces. In higher dimensions this is no longer true, and the matching procedure is more delicate. Within the generality of (3.23), we can report on the behaviour of the two-point function. In the far zone, where the variable \(x\) is large, the solutions to the wave equation (3.23) reduce to \[\Psi(x)=\psi_{1}(\omega,k)\,x^{\Delta_{\rm CE}-1}+\psi_{2}(\omega,k)\,x^{- \Delta_{\rm CE}}\, \tag{3.24}\] where \[\Delta_{\rm CE}\equiv\frac{1}{2}+\frac{1}{2}\sqrt{1+4\ell_{2}^{2}m_{\rm CE}^{ 2}}\, \tag{3.25}\] and \(\psi_{1,2}\) are independent of \(x\). Note that \(\Delta_{\rm CE}\) plays the role of a conformal dimension in AdS, although here it is frequency-dependent due to (3.20). We will impose in-going boundary conditions at the horizon, i.e., for \(x\ll 1\) we fix \[\Psi(x)=x^{-i\sqrt{\mathsf{b}/\tau_{H}}}(1+\cdots). \tag{3.26}\] This then implies that in the far region the terms in (3.24) are \[\begin{split}\psi_{1}(\omega,k)&=\tau_{H}^{1-\Delta _{\rm CE}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}}\frac{\Gamma\left(2\Delta_{\rm CE }-1\right)\Gamma\left(1-2i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}\right)}{\Gamma \left(\Delta_{\rm CE}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}-i\sqrt{\frac{ \mathsf{a}}{\tau_{H}}}\right)\Gamma\left(\Delta_{\rm CE}-i\sqrt{\frac{\mathsf{ b}}{\tau_{H}}}+i\sqrt{\frac{\mathsf{a}}{\tau_{H}}}\right)}\,\\ \psi_{2}(\omega,k)&=\tau_{H}^{\Delta_{\rm CE}-i\sqrt{ \frac{\mathsf{b}}{\tau_{H}}}}\frac{\Gamma\left(1-2\Delta_{\rm CE}\right) \Gamma\left(1-2i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}\right)}{\Gamma\left(1- \Delta_{\rm CE}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}-i\sqrt{\frac{\mathsf{a}}{ \tau_{H}}}\right)\Gamma\left(1-\Delta_{\rm CE}-i\sqrt{\frac{\mathsf{b}}{\tau_ {H}}}+i\sqrt{\frac{\mathsf{a}}{\tau_{H}}}\right)}\.\end{split} \tag{3.27}\] From this we can read off the two-point function to be \[G_{\rm CE}(\omega,k)=\frac{\psi_{2}(\omega,k)}{\psi_{1}(\omega,k)}. 
\tag{3.28}\] Up to an overall normalization, the dependence on gamma functions in (3.28) agrees with a WCFT retarded Green's function reported in [25]. Here we are selecting a simple normalization of the correlator, which we will keep consistent between the CE and QE WBH. It is worth remarking that this is not the standard normalization used for free fields in AdS\({}_{d+1}\), see for example [78, 79], nor the one used in the equivalent derivation in [71]. The next step is to report on the low-temperature and low-frequency behaviour of (3.28). Implementing the decoupling limit (3.9) on the frequency and momenta we find \[k_{\rm ir}=k\,\qquad\epsilon\,\omega_{\rm ir}=2\frac{\ell_{2}}{\ell^{2}}R_{0} \left(\omega-\frac{k\ell}{R_{0}}\right)\,, \tag{3.29}\] where \((\omega_{\rm ir},k_{\rm ir})\) are conjugate to \((\tau,\psi)\). Note that in the limit \(\epsilon\to 0\) one holds \(\omega_{\rm ir}\) and \(k_{\rm ir}\) fixed.13 The coefficients in (3.23) then become Footnote 13: It is useful again to compare with [71]. There the authors take \(R_{0}\gg 1\) and this suppresses the momentum dependence in (3.29). To keep the discussion more general, we will take \(R_{0}\) large but fixed. In this context, we are following [77], which is a near-superradiance limit. \[\begin{split}\mathsf{a}(\omega_{\rm ir},k_{\rm ir})& =\frac{\tau_{H}}{4\mathfrak{d}^{2}}\ell_{2}^{2}\left(\omega_{\rm ir }-2\nu\mathfrak{d}\frac{\ell_{2}}{R_{0}}k_{\rm ir}\right)^{2}\,\\ \mathsf{b}(\omega_{\rm ir},k_{\rm ir})&=\frac{ \tau_{H}}{4\mathfrak{d}^{2}}\ell_{2}^{2}\left(\omega_{\rm ir}+2\nu\mathfrak{d} \frac{\ell_{2}}{R_{0}}k_{\rm ir}\right)^{2}\,\\ \Delta_{\rm CE}&=\frac{1}{2}+\frac{1}{2}\sqrt{1+4 \ell_{2}^{2}m^{2}+12\frac{\ell_{2}^{4}}{R_{0}^{2}\ell^{2}}(1-\nu^{2})k_{\rm ir }^{2}}\,\end{split} \tag{3.30}\] where we are reporting on their leading behaviour in the limit \(\epsilon\to 0\). It is important to mention that these are exactly the coefficients one would obtain in (3.23) when the Klein-Gordon operator is evaluated on the near-horizon background (3.11); this is part of the matching procedure, which works easily in this geometry. With this, the two-point function in the near-AdS\({}_{2}\) regime is \[G_{\rm CE}(\omega_{\rm ir},k_{\rm ir})=\tau_{H}^{2\Delta_{\rm CE}-1}\frac{ \Gamma(1-2\Delta_{\rm CE})}{\Gamma(2\Delta_{\rm CE}-1)}\frac{\Gamma\left( \Delta_{\rm CE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\rm ir}\right)\Gamma\left( \Delta_{\rm CE}-i2\nu\frac{\ell_{2}^{2}}{R_{0}}k_{\rm ir}\right)}{\Gamma\left( 1-\Delta_{\rm CE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\rm ir}\right)\Gamma \left(1-\Delta_{\rm CE}-i2\nu\frac{\ell_{2}^{2}}{R_{0}}k_{\rm ir}\right)}. \tag{3.31}\] At \(k_{\rm ir}=0\), or alternatively \(R_{0}\gg 1\), this expression reduces to \[\begin{split} G_{\rm CE}(\omega_{\rm ir})&=\tau_{H} ^{2\Delta_{\rm CE}-1}\frac{\Gamma(1-2\Delta_{\rm CE})\Gamma\left(\Delta_{\rm CE }\right)}{\Gamma(2\Delta_{\rm CE}-1)\Gamma\left(1-\Delta_{\rm CE}\right)} \frac{\Gamma\left(\Delta_{\rm CE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\rm ir }\right)}{\Gamma\left(1-\Delta_{\rm CE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{ \rm ir}\right)}\\ &\sim\left(8\pi\frac{\ell_{2}^{2}}{\ell^{2}}\frac{R_{0}}{r_{0}} \frac{\ell}{\beta^{\rm CE}}\right)^{2\Delta_{\rm CE}-1}G_{\rm AdS_{2}}(\omega_{ \rm ir})\,\end{split} \tag{3.32}\] where now \(\Delta_{\rm CE}\) is independent of the momentum; \(R_{0}\) was defined in (3.10) and the temperature \(\beta^{\rm CE}=1/T^{\rm CE}\) and level are defined in (2.14) and (2.18) respectively. 
Since we have been zooming into extremality to derive (3.32), it only makes sense as long as \(\nu\neq 1\), as we also remark below (3.1). \(G_{\rm AdS_{2}}(\omega_{\rm ir})\) is the greybody factor one would obtain in thermal AdS\({}_{2}\). In the last line we are being cavalier about the normalization of \(G_{\rm AdS_{2}}\), but any ambiguity here is frequency independent. The relation (3.32) is in accordance with similar results obtained for BTZ in [71]. ## 4 Near-extremal warped black holes: quadratic ensemble In this section we turn to the warped black hole metric in the quadratic ensemble (QE), described in Sec. 2.2. The analysis mirrors the canonical ensemble (CE) in Sec. 3. We perform a similar near-horizon analysis. The contrast between CE and QE is deferred to Sec. 6. ### Thermodynamics In the following we will introduce the concepts of "extremality" and "near-extremality" for the QE black hole from a thermodynamic perspective. Our analysis follows the structure of the canonical ensemble--see Sec. 3.1. The extremal black hole is defined as the solution (2.23) for which \[\text{Extremality:}\quad \mathpzc{r}_{+}=\mathpzc{r}_{-}\equiv \mathpzc{r}_{0}. \tag{4.1}\] At extremality, the values of the potentials (2.30) are \[\begin{split} T^{{}_{\rm QE}}\Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}& =0\,\\ \Omega^{{}_{\rm QE}}\Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}&=-1\, \end{split} \tag{4.2}\] while the charges (2.27) become \[\begin{split} M^{{}_{\rm QE}}_{\rm ext}& \equiv M^{{}_{\rm QE}}\Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}=\frac{\mathpzc{r}_{0}^{2}}{6G_{3}L} \sqrt{1-2H^{2}}\,\\ J^{{}_{\rm QE}}_{\rm ext}&\equiv J^{{}_{\rm QE}} \Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}=-\frac{\mathpzc{r}_{0}^{2}}{6G_{3}L}\sqrt{1-2H^{2}}\,\\ S^{{}_{\rm QE}}_{\rm ext}&\equiv S^{{}_{\rm QE}} \Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}=\frac{\pi \mathpzc{r}_{0}}{3G_{3}}\sqrt{1-2H^{2}}\.\end{split} \tag{4.3}\] Near extremality is a small deviation from extremality leading to a non-vanishing temperature, while keeping the angular momentum \(J^{{}_{\rm QE}}_{\rm ext}\) fixed. This can be achieved by modifying (4.1) as \[\text{Near-extremality:}\quad \mathpzc{r}_{+}=\mathpzc{r}_{0}+\epsilon\,\mathfrak{d}\,\quad \mathpzc{r}_{-}=\mathpzc{r}_{0}- \epsilon\,\mathfrak{d}. \tag{4.4}\] Similar to the CE black hole, here \(\epsilon\ll 1\) is a small parameter that introduces the deviation from extremality; \(\mathfrak{d}\) is a parameter that will remain fixed as one takes \(\epsilon\to 0\) and is chosen to keep \(J_{\text{ext}}^{\text{\tiny QE}}\) fixed at leading order in \(\epsilon\). The leading order response in \(\epsilon\) of the temperature is linear, and reads \[T^{\text{\tiny QE}}=\frac{2}{L\pi}\,\epsilon\,\mathfrak{d}+\mathscr{O}( \epsilon^{2}). \tag{4.5}\] The mass, on the other hand, increases quadratically, \[\Delta E^{\text{\tiny QE}}=M^{\text{\tiny QE}}-M_{\text{ext}}^{\text{\tiny QE }}=\frac{(T^{\text{\tiny QE}})^{2}}{M_{\text{\tiny gap}}^{\text{\tiny QE}}}+ \mathscr{O}(\epsilon^{3})\, \tag{4.6}\] where the commonly coined mass gap is given by \[M_{\text{\tiny gap}}^{\text{\tiny QE}}\equiv\frac{6G_{3}\sqrt{1-2H^{2}}}{\pi ^{2}L(1-H^{2})}=\frac{12}{\pi^{2}c}. \tag{4.7}\] Here we have used (2.34) to relate the mass gap to the central charge associated to the asymptotic symmetry group in the quadratic ensemble. It also follows that the entropy responds linearly in temperature as we deviate from extremality, with the slope inversely proportional to the mass gap: \[S^{\text{\tiny QE}}=S^{\text{\tiny QE}}_{\text{ext}}+2\frac{T^{\text{\tiny QE }}}{M_{\text{\tiny gap}}^{\text{\tiny QE}}}+\mathscr{O}(\epsilon^{2}). 
\tag{4.8}\] ### Decoupling limit In this portion we will report on the geometrical effects of the near-extremal limit. Again, the analysis is very analogous to Sec. 3.2; therefore we will only present the main equations with minimal commentary. To zoom into the horizon in the near-extremal regime, we introduce a new coordinate system \((\rho,\tau,\psi)\) which redefines the coordinates \((\mathpzc{r},\mathpzc{t},\varphi)\) used in (2.23). The transformation reads \[\begin{split}\mathpzc{r}&=\mathpzc{r}_{0}+\epsilon \left(e^{\rho/\ell_{2}}+\frac{\mathfrak{d}^{2}}{4}e^{-\rho/\ell_{2}}\right)\,\\ \mathpzc{t}&=\ell_{2}\frac{\tau}{\epsilon}\,\\ \varphi&=\psi+\frac{\ell_{2}}{L}\frac{\tau}{\epsilon }\,\end{split} \tag{4.9}\] where \(\epsilon\) and \(\mathfrak{d}\) are defined around (4.4). Here the AdS\({}_{2}\) radius is also given by (3.10), and it is interesting to note that in the nomenclature of QE, we have \[\ell_{2}\equiv\frac{L}{2}\, \tag{4.10}\] where \(L\) is the effective AdS\({}_{3}\) radius, defined in (2.25). The near-horizon region is defined by using (4.9) on (2.23) and taking the limit \(\epsilon\to 0\), while keeping all other parameters fixed. The resulting line element is \[\begin{split} ds^{2}_{\text{\tiny{QE}}}=&\,\mathrm{d} \rho^{2}-e^{2\rho/\ell_{2}}\left(1-\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2 }}\right)^{2}\mathrm{d}\tau^{2}\\ &+R_{0}^{2}\left(\mathrm{d}\psi+\frac{1}{R_{0}}\sqrt{1-2H^{2}}\,e ^{\rho/\ell_{2}}\left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right) \mathrm{d}\tau\right)^{2}+\mathcal{O}(\epsilon)\,\end{split} \tag{4.11}\] where we have defined \[R_{0}\equiv R_{\text{\tiny QE}}(\mathpzc{r}_{0})\Big{|}_{\mathpzc{r}_{\pm}=\mathpzc{r}_{0}}= \mathpzc{r}_{0}\sqrt{1-2H^{2}}\, \tag{4.12}\] which is equivalent to (3.10) due to the ties in (2.41). This locally AdS\({}_{2}\) solution is exactly the same as (3.11). As we remarked for the CE black hole, it will also be important to quantify how the solution responds to first order away from extremality. We again parametrize the first-order response in \(\epsilon\) as \[ds^{2}_{\text{\tiny{QE}}}=(\bar{g}_{\mu\nu}+\epsilon\,h_{\mu\nu})\,\mathrm{d} x^{\mu}\mathrm{d}x^{\nu}+\left(R_{0}^{2}+\epsilon\mathcal{Y}\right)\left( \mathrm{d}\psi+(\bar{A}_{\mu}+\epsilon\mathcal{A}_{\mu})\mathrm{d}x^{\mu} \right)^{2}+\cdots. \tag{4.13}\] For the QE black hole the responses are given by \[\begin{split}\mathcal{Y}&=2R_{0}\sqrt{1-2H^{2}}\,e ^{\rho/\ell_{2}}\left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right) \,\\ \mathcal{A}&=\frac{1}{2R_{0}^{2}}(3-2H^{2})\,e^{2 \rho/\ell_{2}}\left(1+\frac{\mathfrak{d}^{4}}{16}e^{-4\rho/\ell_{2}}\right) \mathrm{d}\tau+\frac{1}{4R_{0}^{2}}(1-6H^{2})\,\mathrm{d}\tau\,\\ h_{\tau\tau}&=\frac{\sqrt{1-2H^{2}}}{R_{0}}e^{3\rho /\ell_{2}}\left(1-\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right)^{2} \left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right)\,\\ h_{\rho\rho}&=\frac{\sqrt{1-2H^{2}}}{R_{0}}e^{\rho /\ell_{2}}\left(1+\frac{\mathfrak{d}^{2}}{4}e^{-2\rho/\ell_{2}}\right)\,\quad h_{\tau\rho}=0\.\end{split} \tag{4.14}\] An obvious difference relative to (3.13) is that we are not preserving the radial gauge: \(h_{\rho\rho}\) is non-zero. Conceptually this is not a problem: the gauge can be restored by a redefinition of the radial coordinate. ### Two-point function Following the analysis in Sec. 3.3, in this section we discuss the Klein-Gordon equation of a massive scalar field when the background is given by the near-extremal QE black hole (2.23). 
More explicitly, we will solve \[\nabla^{2}\Phi(\mathpzc{t},\mathpzc{r},\varphi)=m^{2}\Phi(\mathpzc{t},\mathpzc{r},\varphi)\, \tag{4.15}\] and report on the behaviour at low energies. To solve this equation, we use a separable ansatz for \(\Phi\) to further decompose it into Fourier modes; we will write \[\Phi(\mathpzc{t},\mathpzc{r},\varphi)=\sum_{k}\int\mathrm{d}\omega\,e^{-i\frac{ \omega}{L}\mathpzc{t}+ik\varphi}\Psi(\mathpzc{r}). \tag{4.16}\] Here, we are using \(L=2\ell_{2}\) as the unit of time since it is the parameter that naturally enters in the quadratic ensemble. Notice that we are abusing the notation here to avoid clutter: \((\omega,k)\) in (4.16) are not equal to those used in (3.15), since the notion of time is different for each geometry--see (2.41). Using (4.16), we obtain the wave equation \[\begin{split}\frac{1}{\mathpzc{r}}\frac{\partial}{ \partial\mathpzc{r}}\Big{(}(\mathpzc{r}^{2}-\mathpzc{r}_{+}^{2 })(\mathpzc{r}^{2}-\mathpzc{r}_{-}^{2})&\frac{1}{ \mathpzc{r}}\frac{\partial}{\partial\mathpzc{r}}\Big{)}\Psi( \mathpzc{r})\\ &+\left(\frac{1}{N_{\text{\tiny QE}}(\mathpzc{r})^{2}}\,( \omega+N^{\varphi}(\mathpzc{r})Lk)^{2}-\frac{L^{2}k^{2}}{R_{\text{\tiny QE }}(\mathpzc{r})^{2}}\right)\Psi(\mathpzc{r})=L^{2}m^{2}\Psi( \mathpzc{r})\.\end{split} \tag{4.17}\] The functions \(N_{\text{\tiny QE}}(\mathpzc{r})^{2}\), \(R_{\text{\tiny QE}}(\mathpzc{r})^{2}\) and \(N^{\varphi}(\mathpzc{r})\) are defined in (2.24). To analyse the greybody factors, we will introduce variables very similar to those in Sec. 3.3. We define \[x\equiv\frac{\mathpzc{r}^{2}-\mathpzc{r}_{+}^{2}}{\mathpzc{r}_{ +}^{2}}\,\quad\tau_{H}=\frac{\mathpzc{r}_{+}^{2}-\mathpzc{r}_{-}^{2}}{ \mathpzc{r}_{+}^{2}}\, \tag{4.18}\] where again, to avoid clutter, the notation is abused relative to (3.17). Notice that for the QE black hole, we have \(\tau_{H}=\frac{2\pi L}{\mathpzc{r}_{+}}T^{\text{\tiny QE}}\), where \(T^{\text{\tiny QE}}\) is the temperature of the QE black hole. With these definitions, (4.17) becomes \[\begin{split}\frac{\partial}{\partial x}\Big{(}x(x+\tau_{H})& \frac{\partial}{\partial x}\Big{)}\Psi(x)\\ &+\frac{L^{2}}{4\tau_{H}\mathpzc{r}_{+}^{4}}\frac{( \mathpzc{r}_{+}\omega-\mathpzc{r}_{-}k)^{2}}{x}\Psi(x)-\frac{L^{2} }{4\tau_{H}\mathpzc{r}_{+}^{4}}\frac{(\mathpzc{r}_{-}\omega- \mathpzc{r}_{+}k)^{2}}{x+\tau_{H}}\Psi(x)=\frac{L^{2}}{4}m_{\text{\tiny QE }}^{2}\Psi(x)\,\end{split} \tag{4.19}\] with \[m_{\text{\tiny QE}}^{2}\equiv m^{2}+\frac{2H^{2}}{(1-2H^{2})}\frac{(\omega+k )^{2}}{(\mathpzc{r}_{+}+\mathpzc{r}_{-})^{2}}. \tag{4.20}\] There are a few features that are worth highlighting. First, the left-hand side of (4.19) is the wave equation of BTZ, which appears in (2.26). In that context the effects of the warping all appear on the right-hand side as a distortion of the mass of the probe. Note that this shift in the mass vanishes when \(H^{2}=0\), i.e., in the limiting BTZ case. Another key feature is that (4.19) has the same structure as (3.23), and hence it is straightforward to report on the greybody factors. The steps between (3.23)-(3.28) are exactly the same, and hence we have \[G_{\text{\tiny QE}}(\omega,k)=\frac{\psi_{2}(\omega,k)}{\psi_{1}(\omega,k)}. 
\tag{4.21}\] with \[\begin{split}\psi_{1}(\omega,k)&=\tau_{H}^{1-\Delta_{ \text{\tiny QE}}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}}\frac{\Gamma\left(2\Delta_{ \text{\tiny QE}}-1\right)\Gamma\left(1-2i\sqrt{\frac{\mathsf{b}}{\tau_{H}}} \right)}{\Gamma\left(\Delta_{\text{\tiny QE}}-i\sqrt{\frac{\mathsf{b}}{\tau_{H }}}-i\sqrt{\frac{\mathsf{a}}{\tau_{H}}}\right)\Gamma\left(\Delta_{\text{\tiny QE }}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}+i\sqrt{\frac{\mathsf{a}}{\tau_{H}}} \right)}\,\\ \psi_{2}(\omega,k)&=\tau_{H}^{\Delta_{\text{\tiny QE }}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}}\frac{\Gamma\left(1-2\Delta_{\text{ \tiny QE}}\right)\Gamma\left(1-2i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}\right)}{ \Gamma\left(1-\Delta_{\text{\tiny QE}}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}-i \sqrt{\frac{\mathsf{a}}{\tau_{H}}}\right)\Gamma\left(1-\Delta_{\text{\tiny QE }}-i\sqrt{\frac{\mathsf{b}}{\tau_{H}}}+i\sqrt{\frac{\mathsf{a}}{\tau_{H}}} \right)}\,\end{split} \tag{4.22}\] and \(\mathsf{a}(\omega,k)\) and \(\mathsf{b}(\omega,k)\) are the coefficients of \((x+\tau_{H})^{-1}\) and \(x^{-1}\) in (4.19), respectively. We have also introduced \[\Delta_{\text{\tiny QE}}\equiv\frac{1}{2}+\frac{1}{2}\sqrt{1+L^{2}m_{\text{ \tiny QE}}^{2}}\, \tag{4.23}\] which is a frequency and momentum dependent "conformal dimension." Next, we can take the near-extremal limit. From (4.9), we have that \[k=k_{\text{ir}}\,\quad\epsilon\omega_{\text{ir}}=\frac{1}{2}\left(\omega-k \right)\, \tag{4.24}\] in combination with (4.4). We will take the limit \(\epsilon\to 0\) while keeping \(\omega_{\text{ir}}\) and \(k_{\text{ir}}\) fixed. In this limit we have \[\begin{split}\mathsf{a}(\omega_{\text{ir}},k_{\text{ir}})& =\frac{\tau_{H}}{16\mathfrak{d}^{2}}L^{2}\left(\omega_{\text{ir}}- \frac{\mathfrak{d}}{\mathpzc{r}_{0}}k_{\text{ir}}\right)^{2}\,\\ \mathsf{b}(\omega_{\text{ir}},k_{\text{ir}})&=\frac{ \tau_{H}}{16\mathfrak{d}^{2}}L^{2}\left(\omega_{\text{ir}}+\frac{\mathfrak{d}}{ \mathpzc{r}_{0}}k_{\text{ir}}\right)^{2}\,\\ \Delta_{\text{\tiny QE}}&=\frac{1}{2}+\frac{1}{2} \sqrt{1+L^{2}m^{2}+\frac{2H^{2}}{(1-2H^{2})}\frac{L^{2}}{\mathpzc{r}_{0}^{2}}k_{\text {ir}}^{2}}\,\end{split} \tag{4.25}\] It is important to notice that \(\mathsf{a}/\tau_{H}\), \(\mathsf{b}/\tau_{H}\) and \(\Delta_{\text{\tiny QE}}\) here exactly agree with those in the canonical ensemble in (3.30); a useful identity to check this is \[\mathpzc{r}_{0}^{2}=\frac{\nu^{2}+3}{4\nu^{2}}R_{0}^{2}= \frac{\ell^{2}}{\ell_{2}^{2}}\frac{R_{0}^{2}}{4\nu^{2}}\, \tag{4.26}\] which arises from (2.41). This is also what we expect, since the near-horizon geometry in the canonical ensemble (3.11) is the same as the one in the quadratic ensemble (4.11). Hence our definitions of \(\omega_{\text{ir}}\) and \(k_{\text{ir}}\) are the same in both cases. Finally, we report on the greybody factor when \(k_{\rm ir}=0\), which reads \[\begin{split} G_{\rm QE}(\omega_{\rm ir})&=\tau_{H}^{2 \Delta_{\rm QE}-1}\frac{\Gamma(1-2\Delta_{\rm QE})\Gamma\left(\Delta_{\rm QE} \right)}{\Gamma(2\Delta_{\rm QE}-1)\Gamma\left(1-\Delta_{\rm QE}\right)}\frac{ \Gamma\left(\Delta_{\rm QE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\rm ir}\right)}{ \Gamma\left(1-\Delta_{\rm QE}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\rm ir}\right) }\\ &\sim\left(\frac{2\pi L}{\mathpzc{r}_{0}}\frac{1}{\beta^{\rm QE}}\right)^{2 \Delta_{\rm QE}-1}G_{\rm AdS_{2}}(\omega_{\rm ir})\.\end{split} \tag{4.27}\] This follows in a straightforward way from the derivations in the canonical ensemble around (3.32). 
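In particular, the overall power of \(\tau_{H}\) is traded for the temperature using the relation \(\tau_{H}=\frac{2\pi L}{\mathpzc{r}_{+}}T^{\text{\tiny QE}}\) quoted below (4.18): at leading order near extremality \[\tau_{H}\;\to\;\frac{2\pi L}{\mathpzc{r}_{0}}\,\frac{1}{\beta^{\text{\tiny QE}}}\,\] which is the origin of the prefactor in the second line of (4.27).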
The inverse temperature \(\beta^{\rm QE}\equiv 1/T^{\rm QE}\) in (4.27) is defined in (2.30).

## 5 A two-dimensional perspective of warped black holes

Until now we have explored near-extremal properties of WBHs starting from the non-extremal solutions. That is, we have captured the near zero temperature behaviour by taking appropriate limits of the finite temperature black hole. In this section we will set the stage to reverse this logic: we want to capture the near-extremal behaviour by deforming away from the extremal, zero temperature black hole. A systematic way to proceed is to view these black holes from a two-dimensional perspective. More explicitly, we will perform a dimensional reduction along a compact direction. The way we will decompose our three-dimensional spacetime is as follows, \[ds_{3}^{2}=g_{MN}{\rm d}x^{M}{\rm d}x^{N}=g_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}+e^{-2\phi}\left({\rm d}z+A_{\mu}{\rm d}x^{\mu}\right)^{2}\, \tag{5.1}\] where the Greek indices run along the two-dimensional directions, \(\mu,\nu=0,1\), and \(z\) is a compact direction with \(z\sim z+2\pi\). We will be trading the three-dimensional metric \(g_{MN}\) for the two-dimensional variables: a two-dimensional metric \(g_{\mu\nu}\), a gauge field \(A_{\mu}\) and a dilaton field \(\phi\). The working assumption is that all the variables are independent of \(z\), which is a truncation of the three-dimensional theory, but it will suffice to describe the near-extremal system. The effects of this dimensional reduction on the three-dimensional action (2.1) are known [80, 81], and we will follow the conventions in [70]. The resulting two-dimensional theory is \[I_{\rm 2D}=I_{\rm EMD}+I_{\rm rCS}. \tag{5.2}\] \(I_{\rm EMD}\) is a two-dimensional Einstein-Maxwell-Dilaton theory whose couplings are dictated by the dimensional reduction of the Einstein-Hilbert term in (2.1); it reads \[I_{\mbox{\tiny EMD}}=\frac{1}{8G_{3}}\int\mathrm{d}^{2}x\,\sqrt{-g}\,e^{-\phi}\left(\mathscr{R}+\frac{2}{\ell^{2}}-\frac{1}{4}e^{-2\phi}\,F_{\mu\nu}F^{\mu\nu}\right). \tag{5.3}\] In this expression, \(\mathscr{R}\) is the two-dimensional Ricci scalar associated to the metric \(g_{\mu\nu}\), and the field strength is given by \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). \(I_{\mbox{\tiny rCS}}\), the 'reduced Chern-Simons' term, contains the information from the dynamics of the gravitational Chern-Simons term in (2.1), and it is given by \[I_{\mbox{\tiny rCS}}=\frac{1}{32G_{3}\mu}\int\mathrm{d}^{2}x\,e^{-2\phi}\epsilon^{\mu\nu}\left(F_{\mu\nu}\mathscr{R}+F_{\mu\rho}F^{\rho\sigma}F_{\sigma\nu}\,e^{-2\phi}-2F_{\mu\nu}\nabla^{2}\phi\right). \tag{5.4}\] Here \(\epsilon^{\mu\nu}\) is the epsilon symbol, where \(\epsilon^{01}=1\), and \(\nabla_{\mu}\) is the covariant derivative with respect to the two-dimensional metric \(g_{\mu\nu}\).
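As a small consistency check of the ansatz (5.1), note that it implies \(\sqrt{-g_{3}}=e^{-\phi}\sqrt{-g_{2}}\), which is the origin of the \(e^{-\phi}\) prefactor in (5.3). The sketch below verifies the underlying determinant identity symbolically; all symbols are generic placeholders and the check is purely algebraic, independent of the equations of motion.

```python
# Symbolic check that the ansatz (5.1) gives det(g_3) = exp(-2*phi) * det(g_2),
# i.e. sqrt(-g_3) = exp(-phi) sqrt(-g_2). All entries are generic placeholders.
import sympy as sp

g00, g01, g11, A0, A1, phi = sp.symbols('g00 g01 g11 A0 A1 phi', real=True)
g2 = sp.Matrix([[g00, g01], [g01, g11]])     # two-dimensional metric g_{mu nu}
A = sp.Matrix([A0, A1])                      # Kaluza-Klein gauge field A_mu
c = sp.exp(-2 * phi)                         # e^{-2 phi}

g3 = sp.zeros(3, 3)                          # 3d metric in coordinates (x^0, x^1, z)
g3[:2, :2] = g2 + c * A * A.T
g3[:2, 2] = c * A
g3[2, :2] = c * A.T
g3[2, 2] = c

print(sp.simplify(g3.det() - c * g2.det()))  # -> 0
```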
The equations of motion one obtains from (5.2) are given by \[\epsilon^{\alpha\beta}\partial_{\beta}\left(e^{-3\phi}f+\frac{1}{2\mu}\,e^{-2\phi}\left(\mathscr{R}+3\,e^{-2\phi}f^{2}-2\nabla^{2}\phi\right)\right)=0\, \tag{5.5}\] \[e^{-\phi}\left(\mathscr{R}+\frac{2}{\ell^{2}}+\frac{3}{2}\,e^{-2\phi}f^{2}\right)+\frac{1}{\mu}\,e^{-2\phi}f\left(\mathscr{R}+2\,e^{-2\phi}f^{2}-2\nabla^{2}\phi\right)+\frac{1}{\mu}\nabla^{2}\left(e^{-2\phi}f\right)=0\,\] \[g_{\alpha\beta}\left(\nabla^{2}e^{-\phi}-\frac{1}{\ell^{2}}\,e^{-\phi}+\frac{1}{4}\,e^{-3\phi}f^{2}\right)-\nabla_{\alpha}\nabla_{\beta}e^{-\phi}\] \[\quad+\frac{1}{2\mu}\Big{(}(\nabla_{\alpha}e^{-2\phi}f)\nabla_{\beta}\phi+(\nabla_{\beta}e^{-2\phi}f)\nabla_{\alpha}\phi-\nabla_{\alpha}\nabla_{\beta}(e^{-2\phi}f)\Big{)}\] \[\quad+\frac{1}{2\mu}g_{\alpha\beta}\Big{(}\frac{1}{2}\,e^{-2\phi}f\mathscr{R}-e^{-2\phi}f\nabla^{2}\phi-\nabla_{\mu}(e^{-2\phi}f)\nabla^{\mu}\phi+\nabla^{2}(e^{-2\phi}f)+e^{-4\phi}f^{3}\Big{)}=0\,\] where we have introduced the auxiliary scalar \[f\equiv\frac{1}{2\sqrt{-g}}\,\epsilon^{\alpha\beta}F_{\alpha\beta}. \tag{5.6}\] As stressed in [70, 80, 81], the action (5.2) is a consistent truncation of the three-dimensional theory. That is, all solutions to the equations of motion (5.5), when uplifted via (5.1), are solutions to (2.3). The solutions that will serve as our base in the subsequent analysis are those that have a _constant dilaton_ background. As shown in [70], the equations of motion (5.5) admit two branches of solutions when we set \(\phi(x)=\phi_{0}\) constant. The first branch is characterised by being independent of the TMG coupling \(\mu\); that is, it is determined by the equations that arise from the EMD action, and hence it also automatically satisfies (5.5). The Ricci scalar and auxiliary scalar \(f\) (5.6) are \[\mbox{\bf EMD Branch}:\ \ \ \ \mathscr{R}_{0}=-\frac{8}{\ell^{2}}\,\qquad f_{0}^{2}=\frac{4}{\ell^{2}}e^{2\phi_{0}}. \tag{5.7}\] The second branch is a solution that relies on a balance between the EMD and rCS contributions to (5.5), and hence is intrinsic to the TMG dynamics. The corresponding Ricci scalar and auxiliary scalar read \[\mbox{\bf TMG Branch}:\ \ \ \ \ \mathscr{R}_{0}=-\frac{6}{\ell^{2}}-\frac{2\mu^{2}}{9}\,\qquad f_{0}=-\frac{2\mu}{3}e^{\phi_{0}}. \tag{5.8}\] For both branches the subscript "0" simply denotes the background values when we set the dilaton to be a constant. For both of these branches the Ricci scalar is constant and negative, indicating that the two-dimensional metric is locally AdS\({}_{2}\), as expected. From here we will identify the AdS\({}_{2}\) radius via the relation \[\mathscr{R}_{0}=-\frac{2}{\ell_{2}^{2}}. \tag{5.9}\] The EMD branch is the appropriate solution to describe the near-horizon geometry of the extremal BTZ black hole, as is explained in [70]. The TMG branch is the appropriate solution to describe the near-horizon geometry of extremal WBHs, both in the canonical and quadratic ensemble. We will show this explicitly below, and at this stage we just remark that the radius of the AdS\({}_{2}\) space in (5.8) agrees with (3.10), as it should.

### Near-AdS\({}_{2}\): Linear response

We are interested in small fluctuations about our AdS\({}_{2}\) background solution (5.8).
We will parametrize these fluctuations as follows, \[\begin{split} e^{-2\phi}&=e^{-2\phi_{0}}+\mathscr{Y}\,\\ f&=f_{0}+\mathscr{F}\,\\ g_{\alpha\beta}&=\overline{g}_{\alpha\beta}+h_{\alpha\beta}\.\end{split} \tag{5.10}\] Here the fields \((\phi_{0},f_{0},\overline{g}_{\alpha\beta})\) will correspond to the TMG branch in (5.8); in particular \(\overline{g}_{\alpha\beta}\) is a locally AdS\({}_{2}\) metric whose curvature is given by \(\mathscr{R}_{0}\). In the following we will describe the dynamics of the fluctuations \((\mathscr{Y},\mathscr{F},h_{\alpha\beta})\), whose support is on the two-dimensional coordinates \(x^{\mu}\), at the linearized level. We will describe the equations of motion and the effective action that capture this leading-order response. Expanding the equations of motion (5.5) around (5.10), the linearized equations of motion read \[e^{2\phi_{0}}\bigg{(}\overline{\nabla}^{2}+\frac{4\mu^{2}}{9}-\frac{6}{\ell^{2}}\bigg{)}\mathscr{Y}-2\mu e^{-\phi_{0}}\mathscr{F}+\delta\mathscr{R} =0\, \tag{5.11}\] \[e^{2\phi_{0}}\bigg{(}\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}+\overline{g}_{\alpha\beta}\Big{\{}\frac{5\mu^{2}}{9}-\frac{3}{\ell^{2}}\Big{\}}\bigg{)}\mathscr{Y}+\frac{3}{\mu}e^{-\phi_{0}}\bigg{(}\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}-\overline{g}_{\alpha\beta}\Big{\{}\overline{\nabla}^{2}+\frac{5\mu^{2}}{9}-\frac{3}{\ell^{2}}\Big{\}}\bigg{)}\mathscr{F}\] \[+\overline{g}_{\alpha\beta}\,\delta\mathscr{R} =0\,\] where \(\overline{\nabla}\) stands for the covariant derivative with respect to the background metric \(\overline{g}_{\alpha\beta}\). The fluctuation of the Ricci scalar, \(\delta\mathscr{R}\), which contains the terms depending on \(h_{\alpha\beta}\), is \[\delta\mathscr{R}=\overline{\nabla}^{\alpha}\overline{\nabla}^{\beta}h_{\alpha\beta}-\overline{\nabla}^{2}h^{\alpha}_{\ \alpha}+\frac{1}{\ell_{2}^{2}}h^{\alpha}_{\ \alpha}. \tag{5.12}\] The equations in (5.11) couple our three fluctuations, but it is possible to decouple the system systematically. First, we can use the first two equations in (5.11) to solve for \(\delta\mathscr{R}\) and replace it in the third equation. This gives the following equation \[\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}\Phi(x)-\overline{g}_{\alpha\beta}\overline{\nabla}^{2}\Phi(x)+\frac{1}{\ell_{2}^{2}}\overline{g}_{\alpha\beta}\Phi(x)=0\, \tag{5.13}\] where \[\Phi(x)\equiv 3\mathscr{F}(x)+e^{3\phi_{0}}\mu\mathscr{Y}(x). \tag{5.14}\] This is the characteristic equation for Jackiw-Teitelboim (JT) gravity [39, 40]. For this reason we will refer to \(\Phi(x)\) as the "dilaton." However, in sharp contrast to other instances of JT gravity, \(\Phi\) does not parametrize the size of the black hole horizon (\(\mathscr{Y}\) plays that role). With this, we can rewrite (5.11) in terms of \((\Phi(x),\mathscr{F},h_{\mu\nu})\); the first equation is (5.13) and the remaining two are \[\begin{split}\bigg{(}\overline{\nabla}^{2}-\frac{1}{\ell_{2}^{2}}+\frac{4\mu^{2}}{9}\bigg{)}\mathscr{F}-\frac{1}{3}\bigg{(}\frac{1}{\ell_{2}^{2}}+\frac{2}{9}\mu^{2}\bigg{)}\Phi&=0\,\\ \delta\mathscr{R}+\frac{e^{-\phi_{0}}}{\mu}\bigg{(}\left(\frac{8\mu^{2}}{3}-3\ell_{2}^{-2}\right)\mathscr{F}+\left(-\frac{4\mu^{2}}{9}+\ell_{2}^{-2}\right)\Phi\bigg{)}&=0\.\end{split} \tag{5.15}\] It is worth analysing the physical interpretation of these fluctuations. From (5.13), we see that the conformal dimension of \(\Phi\) is \(\Delta_{\Phi}=2\).
The solution to the first equation in (5.15) contains a homogeneous and an inhomogeneous part, i.e., \[\mathscr{F}(x)=\mathscr{F}_{\text{hom}}(x)+\frac{1}{3}\left(\frac{9+2\ell_{2}^{2}\mu^{2}}{9+4\ell_{2}^{2}\mu^{2}}\right)\Phi(x)\, \tag{5.16}\] with the homogeneous part satisfying \[\left(\overline{\nabla}^{2}-\frac{1}{\ell_{2}^{2}}+\frac{4\mu^{2}}{9}\right)\mathscr{F}_{\text{hom}}(x)=0. \tag{5.17}\] \(\mathscr{F}_{\text{hom}}\) is an independent degree of freedom, and it can be traced to the extra degree of freedom due to the appearance of a massive graviton that is characteristic of TMG. It is useful to further interpret this field in the usual AdS/CFT dictionary. From (5.17) we can relate the mass of the field to its conformal dimension; this gives \[\Delta_{\mathscr{F}}(\Delta_{\mathscr{F}}-1)=1-\frac{4}{9}\mu^{2}\ell_{2}^{2}=\frac{3(1-\nu^{2})}{3+\nu^{2}}\, \tag{5.18}\] where \(\nu=\frac{\mu\ell}{3}\) as before, and we used (3.10). The solutions are \[\Delta_{\mathscr{F}}^{\pm}=\frac{1}{2}\bigg{(}1\pm\sqrt{\frac{15-11\nu^{2}}{3+\nu^{2}}}\bigg{)}. \tag{5.19}\] As we saw in Sec. 2, our WBH solutions have the restriction that \(\nu^{2}\geq 1\), making the mass squared negative. We also have the Breitenlohner-Freedman (BF) bound [82]: this restricts \(\nu^{2}\leq\frac{15}{11}\) such that \(\Delta_{\mathscr{F}}\geq 0\). Therefore, we have a linearly stable mode when \[1\leq\nu^{2}\leq\frac{15}{11}\ :\qquad\frac{1}{2}\leq\Delta_{\mathscr{F}}^{+}\leq 1\,\qquad 0\leq\Delta_{\mathscr{F}}^{-}\leq\frac{1}{2}. \tag{5.20}\] Altogether, this makes \(\mathscr{F}_{\text{hom}}(x)\) a relevant perturbation around the AdS\({}_{2}\) background (marginal when \(\nu^{2}=1\)); a quick numerical check of this window is sketched below. This mode, and its non-trivial bounds, were also found in [41]; an interesting contrast is that here we detected it from an analysis of the IR (AdS\({}_{2}\)) background rather than from the fluctuations around Warped AdS\({}_{3}\). Finally, it is worth reporting on the effective action that captures the linear response. The equations of motion obtained from \[\begin{split} I_{\text{eff}}=&\frac{e^{-4\phi_{0}}}{48G_{3}}\int\mathrm{d}^{2}x\sqrt{-g}\,\Phi\left(R+\frac{2}{\ell_{2}^{2}}\right)\\ &-\frac{9e^{-3\phi_{0}}}{16\mu G_{3}}\int\mathrm{d}^{2}x\sqrt{-g}\left(\overline{\nabla}_{\mu}\mathscr{F}\,\overline{\nabla}^{\mu}\mathscr{F}+\frac{1}{\ell_{2}^{2}}\Delta_{\mathscr{F}}(\Delta_{\mathscr{F}}-1)\mathscr{F}^{2}\right)\\ &+\frac{e^{-3\phi_{0}}}{48\mu G_{3}}\int\mathrm{d}^{2}x\sqrt{-g}\left(\frac{1}{3}\left(\mu^{2}+\frac{9}{\ell_{2}^{2}}\right)\Phi^{2}-4\left(\mu^{2}-\frac{3}{\ell_{2}^{2}}\right)\mathscr{F}\Phi+15\overline{\nabla}_{\mu}\mathscr{F}\,\overline{\nabla}^{\mu}\Phi\right)\,\end{split} \tag{5.21}\] exactly match (5.13) and (5.15) at linear order in the fields. There is an overall factor in \(I_{\text{eff}}\) that we fix such that the action here matches the normalization in (5.2) at the linear level. The first line of (5.21) is the renowned JT action, and for this reason several components of our analysis will agree with the universal properties advocated in [36, 37]; the second line contains the kinetic and mass terms for \(\mathscr{F}\) (which captures the relevant/marginal operator); and the third line of (5.21) captures the non-trivial interactions among the fields. For the purposes of capturing dynamics in the near-AdS\({}_{2}\) region, the effective action (5.21) is much simpler to manipulate than (5.3)-(5.4), since the latter is a higher-derivative theory and the former is a two-derivative action.
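As a quick numerical check of the window (5.20), one can scan \(1\leq\nu^{2}\leq 15/11\) and evaluate (5.19) directly; the sketch below does this and confirms the quoted ranges for \(\Delta_{\mathscr{F}}^{\pm}\). This is purely an illustration of (5.18)-(5.20) and introduces no new input.

```python
# Numerical scan of Delta_F^{+/-} from (5.19) over the window 1 <= nu^2 <= 15/11 of (5.20).
import numpy as np

nu2 = np.linspace(1.0, 15.0 / 11.0, 201)
disc = np.clip((15.0 - 11.0 * nu2) / (3.0 + nu2), 0.0, None)  # clip guards against round-off at the endpoint
root = np.sqrt(disc)
delta_plus, delta_minus = 0.5 * (1.0 + root), 0.5 * (1.0 - root)

assert np.all((delta_plus >= 0.5) & (delta_plus <= 1.0 + 1e-12))
assert np.all((delta_minus >= -1e-12) & (delta_minus <= 0.5))
print(delta_plus[0], delta_minus[0])    # nu^2 = 1:     1.0 and 0.0 (marginal mode)
print(delta_plus[-1], delta_minus[-1])  # nu^2 = 15/11: both approach 1/2 (BF bound saturated)
```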
In Sec. 5.2, we will use \(I_{\rm eff}\) to discuss holographic renormalization and thermal properties in the near-AdS\({}_{2}\) background. #### 5.1.1 Solutions In this last portion we will construct explicit solutions to the linear equations (5.13)-(5.15); this follows very closely [70, 83], which we refer to for further details. It will be convenient to work in a radial gauge, where we introduce coordinates \(x^{\mu}=(\rho,\tau)\) and we set \[ds^{2}=g_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}=\gamma_{\tau\tau}{\rm d}\tau^{2}+ {\rm d}\rho^{2}\,\quad A_{\rho}=0. \tag{5.22}\] In this gauge, the background AdS\({}_{2}\) metric and the gauge field are \[\bar{g}_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}=\bar{\gamma}_{\tau\tau}{\rm d} \tau^{2}+{\rm d}\rho^{2}\,\qquad A^{0}=A^{0}_{\tau}{\rm d}\tau\, \tag{5.23}\] with \[\begin{split}\bar{\gamma}_{\tau\tau}&=-\left( \alpha(\tau)e^{\rho/\ell_{2}}+\beta(\tau)e^{-\rho/\ell_{2}}\right)^{2}\,\\ A^{0}_{\tau}&=\chi(\tau)-f_{0}\ell_{2}\bigg{(} \alpha(\tau)e^{\rho/\ell_{2}}-\beta(\tau)e^{-\rho/\ell_{2}}\bigg{)}\.\end{split} \tag{5.24}\] Here \(\alpha(\tau)\), \(\beta(\tau)\) and \(\chi(\tau)\) are arbitrary functions. The constant \(f_{0}\) and the AdS\({}_{2}\) radius, \(\ell_{2}\), are given by (5.8). With this choice of gauge and background solution, the solution to the JT equation (5.13) reads \[\Phi(x)=\lambda(\tau)\,e^{\rho/\ell_{2}}+\sigma(\tau)\,e^{-\rho/\ell_{2}}\, \tag{5.25}\] where \[\begin{split}\sigma(\tau)&=-\frac{\ell_{2}^{2}}{4 \lambda}\bigg{(}\frac{(\partial_{\tau}\lambda)^{2}}{\alpha^{2}}+c_{0}\bigg{)} \,\\ \beta(\tau)&=-\frac{\ell_{2}^{2}}{4}\frac{\alpha}{ \partial_{\tau}\lambda}\partial_{\tau}\bigg{(}\frac{1}{\lambda}\bigg{(}\frac{ (\partial_{\tau}\lambda)^{2}}{\alpha^{2}}+c_{0}\bigg{)}\bigg{)}=\frac{\alpha} {\partial_{\tau}\lambda}\partial_{\tau}\sigma\.\end{split} \tag{5.26}\] Here \(c_{0}\) is an arbitrary constant. Using the standard AdS/CFT terminology, from (5.25) we interpret \(\lambda(\tau)\) as the source and \(\sigma(\tau)\) as the vacuum expectation value. In (5.26) we have chosen to solve for the vacuum expectation values \((\beta,\sigma)\) in terms of the sources \((\alpha,\lambda)\). Solving the two equations in (5.15) is straightforward, and we start with the first equation. As described in (5.16), the field \(\mathscr{F}\) has two components. The inhomogeneous solution is \[\mathscr{F}_{\rm in-hom}(x)=\frac{\nu^{2}+1}{5\nu^{2}+3}\Phi(x)\, \tag{5.27}\] with \(\Phi(x)\) given by (5.25). The homogeneous solution has the standard behaviours of fields in AdS, whose radial behaviour near the boundary is \[\mathscr{F}_{\rm hom}(x)=e^{-\Delta_{\mathscr{F}}\rho/\ell_{2}}(f_{1}(\tau)+ \cdots)+e^{(\Delta_{\mathscr{F}}-1)\rho/\ell_{2}}(f_{2}(\tau)+\cdots)\, \tag{5.28}\] with \(\Delta_{\mathscr{F}}\) defined in (5.18). Since this mode is relevant or marginal, it depends on its quantization conditions if \(f_{1}\), or \(f_{2}\), is the source, or a vacuum expectation value. The second equation in (5.15) determines the metric perturbation. Recall that we are working in the radial gauge (5.22), and hence the only metric perturbation in the game is \(h_{\tau\tau}\); the resulting equation is therefore \[\frac{1}{\bar{\gamma}_{\tau\tau}}\partial_{\rho}\Big{(}\bar{\gamma}_{\tau\tau }\partial_{\rho}\left(\bar{\gamma}^{\tau\tau}h_{\tau\tau}\right)\Big{)}=\frac{ e^{-\phi_{0}}}{\nu\ell}\left(\left(7\nu^{2}-3\right)\mathscr{F}+\left(1-\nu^{2} \right)\Phi\right). 
\tag{5.29}\] The solution for \(h_{\tau\tau}\) also splits into a homogeneous and an inhomogeneous part. The homogeneous solution is the same as the background solution and can be absorbed in \(\bar{\gamma}_{\tau\tau}\). The inhomogeneous part is fixed by the on-shell values we have already determined for \(\mathscr{F}\) and \(\Phi\). For concreteness, let us take \(\mathscr{F}=\mathscr{F}_{\rm in-hom}(x)\), i.e., turn off the homogeneous solution in (5.28). With this, the inhomogeneous solution to the metric perturbation reads \[h_{\tau\tau}=\frac{2\nu\ell}{3}\frac{1}{5\nu^{2}+3}e^{-\phi_{0}}\bigg{(}\bar{\gamma}_{\tau\tau}\Phi-2\ell_{2}^{2}\sqrt{-\bar{\gamma}}\partial_{\tau}\left(\frac{\partial_{\tau}\lambda}{\alpha}\right)\bigg{)}. \tag{5.30}\] Comparison with warped black holes. In this last portion we compare the solutions described here with the black hole background: the near-AdS\({}_{2}\) background in the canonical ensemble of Sec. 3.2 and in the quadratic ensemble of Sec. 4.2. Since they are stationary black holes, they will be matched with static (\(\tau\)-independent) solutions described in the two-dimensional language used in this section. The background AdS\({}_{2}\) solutions for both black holes are exactly the same, and in the language used in this section they correspond to \[e^{-\phi_{0}}=R_{0}\,\quad\alpha(\tau)=1\,\quad\beta(\tau)=-\frac{\mathfrak{d}^{2}}{4}\, \tag{5.31}\] where we used (3.11) and (4.11) for the canonical and quadratic solution, respectively. From here we see that \(\beta\) is tied to the near-extremal parameter \(\mathfrak{d}\). One small subtlety between the canonical and quadratic ensemble comes from the specific values of the sources in the JT field, \(\Phi(x)\). For the canonical black hole we have \[\lambda_{\text{\tiny CE}}(\tau)=\frac{3(3+5\nu^{2})}{\ell R_{0}^{2}}\epsilon\, \tag{5.32}\] which is obtained by reconstructing \(\Phi\) from (5.14) and the on-shell values in (3.13).14 Footnote 14: Recall that \(\epsilon\) is the decoupling parameter used to obtain the near-horizon geometry. In this context, it controls the smallness of \(\Phi(x)\), which we will use in Sec. 5.2. In contrast, the quadratic black hole has \[\lambda_{\text{\tiny QE}}(\tau)=\frac{12(1-H^{2})}{\ell_{2}R_{0}^{2}}\epsilon\, \tag{5.33}\] which again uses (5.14) and the black hole values in (4.14). To relate (5.33) to (5.32) we need to take into account that the decoupling parameter \(\epsilon\) is not the same for the canonical and quadratic ensembles. By relating (3.9) and (4.9) via (2.41), one finds \[\epsilon_{\text{\tiny CE}}=2\frac{\ell_{2}}{\ell}\epsilon_{\text{\tiny QE}}\quad\Rightarrow\quad\lambda_{\text{\tiny QE}}(\tau)=\lambda_{\text{\tiny CE}}(\tau). \tag{5.34}\] The subleading component of the JT field is simple to read off, and for both cases we have \[\sigma_{\text{\tiny CE,QE}}(\tau)=\frac{\mathfrak{d}^{2}}{4}\lambda_{\text{\tiny CE,QE}}(\tau). \tag{5.35}\] Finally, for the massive vector field we find \[\mathscr{F}_{\text{\tiny CE,QE}}=\frac{\nu^{2}+1}{5\nu^{2}+3}\Phi_{\text{\tiny CE,QE}}\, \tag{5.36}\] which shows that, for both black hole backgrounds, the fields on-shell only have the inhomogeneous solution to (5.15). The remaining fields listed in (3.13) and (4.14), i.e., \(h_{\mu\nu}\) and \(\mathscr{A}_{\mu}\), will follow from the values listed here, in accordance with the dynamics described in this section.
### Boundary analysis

In this final portion, we return to thermodynamic aspects of the warped black holes, but now from the perspective of near-AdS\({}_{2}\). We will perform some of the basic computations to read off the entropy of near-AdS\({}_{2}\) via a boundary analysis of the system. In a nutshell, we will construct an effective boundary action via the traditional tools of holographic renormalization. From there, we will identify the Schwarzian sector of the theory and report on its contribution to the entropy. The derivations here again follow [70, 83] very closely, and conceptually there is no deviation from those references. For that reason we will keep the presentation brief and to the point. A holographic analysis around the near-AdS\({}_{2}\) background requires some care. We are interested in renormalizing the theory when the source of \(\Phi(x)\), \(\lambda(\tau)\), is turned on; this means we are doing conformal perturbation theory in the presence of irrelevant couplings. In more practical terms for our purposes, we have to specify the allowed divergences and the regime of validity of the procedure. Following [83], at a specified cutoff \(\rho=\rho_{c}\to\infty\) we will have \[\lambda(\tau)\,e^{\rho_{c}/\ell_{2}}\ll 1\,\qquad e^{-2\phi_{0}}\gg\mathscr{Y}. \tag{5.37}\] The setup of the variational problem will be standard. Our bulk action is \(I_{\rm eff}\) in (5.21), and for this action we construct a functional that is well-defined for Dirichlet boundary conditions on the field. With this we will see that the responses of the functional under the variations \[\delta\gamma_{\tau\tau}=-2\alpha(\tau)e^{2\rho_{c}/\ell_{2}}\delta\alpha\,\qquad\delta\Phi=e^{\rho_{c}/\ell_{2}}\delta\lambda\, \tag{5.38}\] are finite and integrable. Note that we will only turn on the sources for the metric and JT field, \(\alpha\) and \(\lambda\) respectively; for simplicity we are setting \(\mathscr{F}_{\rm hom}=0\), which suffices to discuss semi-classical aspects of the thermodynamics of the system that connects to JT gravity.15 Footnote 15: However, it should be stressed that it is of interest to investigate in more detail the effects of the massive degree of freedom \(\mathscr{F}_{\rm hom}\). In particular, we expect this mode to lead to instabilities similar to those advocated recently in [84] for higher-dimensional extremal AdS black holes. The functional that renders a well-defined variational problem for our system is \[I_{\rm ren}=I_{\rm eff}+I_{\rm GH}+I_{\rm ct}. \tag{5.39}\] The bulk term is given by \(I_{\rm eff}\) in (5.21). The second term is the usual Gibbons-Hawking term, which in our context reads \[I_{\rm GH}=2\mu e^{\phi_{0}}\int{\rm d}\tau\sqrt{-\gamma}\,\Phi\,K. \tag{5.40}\] Here, \(K=\partial_{\rho}\log\sqrt{-\gamma}\) is the extrinsic curvature. The third term in \(I_{\rm ren}\) consists of local boundary counterterms, whose role is to remove divergences in the action (and its variation). The counterterms for our setup are \[I_{\rm ct}=-\frac{e^{-2\phi_{0}}}{24G_{3}\mu\ell_{2}}\int{\rm d}\tau\;\sqrt{-\gamma}\,\Phi+\frac{e^{-3\phi_{0}}\ell_{2}}{48G_{3}(9+4\mu^{2}\ell_{2}^{2})}\int{\rm d}\tau\;\sqrt{-\gamma}\,\Phi^{2}. \tag{5.41}\] Although the second term is quadratic in \(\Phi\), and hence subleading according to (5.37), it is needed to render finite the variation of the action with respect to (5.38). We can now easily verify that the response of \(I_{\rm ren}\) is finite and integrable.
First, we compute the one-point functions dual to our two sources in (5.38); this gives \[\hat{\Pi}_{\alpha} =\lim_{\rho_{c}\to\infty}\frac{\delta}{\delta\alpha}(I_{\text{eff}}+I_{\text{GH}}+I_{\text{ct}})=-\frac{e^{-2\phi_{0}}}{12G_{3}\ell_{2}}\sigma(\tau)\, \tag{5.42}\] \[\hat{\Pi}_{\lambda} =\lim_{\rho_{c}\to\infty}\frac{\delta}{\delta\lambda}(I_{\text{eff}}+I_{\text{GH}}+I_{\text{ct}})=-\frac{e^{-2\phi_{0}}}{12G_{3}\ell_{2}}\beta(\tau)\,\] which is clearly finite. Thus, the variation of the renormalised effective action is \[\delta I_{\text{ren}} =\int\mathrm{d}\tau(\hat{\Pi}_{\alpha}\delta\alpha+\hat{\Pi}_{\lambda}\delta\lambda) \tag{5.43}\] \[=\frac{\ell_{2}e^{-2\phi_{0}}}{48G_{3}\mu}\int\mathrm{d}\tau\left[\frac{q(\tau)}{\lambda(\tau)}\delta\alpha+\frac{\alpha(\tau)}{\partial_{\tau}\lambda(\tau)}\partial_{\tau}\left(\frac{q(\tau)}{\lambda(\tau)}\right)\delta\lambda\right]\,\] where we used (5.26) and defined \[q(\tau)\equiv\left(\frac{(\partial_{\tau}\lambda)^{2}}{\alpha^{2}}+c_{0}\right)\,. \tag{5.44}\] This expression is integrable over the phase space \((\alpha,\lambda)\). Performing this integration, we get \[I_{\text{ren}}=\frac{\ell_{2}}{48G_{3}\mu}e^{-2\phi_{0}}\int\mathrm{d}\tau\,\frac{\alpha(\tau)}{\lambda(\tau)}\left(c_{0}-\left(\frac{\partial_{\tau}\lambda(\tau)}{\alpha(\tau)}\right)^{2}\right)\,. \tag{5.45}\] Hence we have obtained a finite and well-defined on-shell action for the system at hand. Schwarzian effective action. It is useful to recast (5.45) in a form where the Schwarzian sector is manifest. To do so, we first make the reparametrization mode manifest as follows. Take \(\alpha=1\) and \(\beta=0\), so that \[ds^{2}=\mathrm{d}\rho^{2}-e^{2\rho/\ell_{2}}\mathrm{d}\tau^{2}. \tag{5.46}\] This is what we coin an "empty" AdS\({}_{2}\) background. Next, consider the following diffeomorphism \[\tau \to f(\tau)+\frac{\ell_{2}^{2}}{2}\frac{f^{\prime\prime}(\tau)}{e^{2\rho/\ell_{2}}-\frac{\ell_{2}^{2}}{4}\frac{f^{\prime\prime}(\tau)^{2}}{f^{\prime}(\tau)^{2}}}\] \[e^{\rho/\ell_{2}} \to\frac{e^{-\rho/\ell_{2}}}{f^{\prime}(\tau)}\left(e^{2\rho/\ell_{2}}-\frac{\ell_{2}^{2}}{4}\frac{f^{\prime\prime}(\tau)^{2}}{f^{\prime}(\tau)^{2}}\right) \tag{5.47}\] where \(f(\tau)\) is an arbitrary function of time representing boundary time reparametrizations. Under these diffeomorphisms, the line element becomes \[ds^{2}=\mathrm{d}\rho^{2}-\left(e^{\rho/\ell_{2}}+\frac{\ell_{2}}{2}\{f(\tau),\tau\}e^{-\rho/\ell_{2}}\right)^{2}\mathrm{d}\tau^{2} \tag{5.48}\] where \(\{f(\tau),\tau\}=\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{\prime}-\frac{1}{2}\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}\) is the Schwarzian derivative. We see that these diffeomorphisms preserve the radial gauge while ensuring that the asymptotic form of the metric is the same as that of empty AdS\({}_{2}\) in (5.46). So, these are the asymptotic symmetries of AdS\({}_{2}\). Comparing with (5.23), we have \[\alpha(\tau)=1\,\qquad\beta(\tau)=\frac{\ell_{2}^{2}}{2}\{f(\tau),\tau\}. \tag{5.49}\] It is now also simple to re-examine (5.45) in view of (5.49). Substituting for \(c_{0}\) in terms of \(\beta\) in (5.45) by using (5.26), we get \[I_{\text{ren}}=\frac{\ell_{2}}{24G_{3}\mu}e^{-2\phi_{0}}\int\text{d}\tau\left[\lambda(\tau)\{f(\tau),\tau\}-\frac{{\lambda^{\prime}}^{2}(\tau)}{\lambda(\tau)}\right]\,. \tag{5.50}\] In this expression we used (5.49); we have also ignored total derivatives, since the time direction will shortly be made periodic (in Euclidean signature).
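It is straightforward to verify the transformation of the metric symbolically. The sketch below applies the diffeomorphism (5.47) to the empty AdS\({}_{2}\) metric (5.46), with \(\ell_{2}=1\) and the exponential reparametrization \(f(\tau)=e^{a\tau}\) (the same profile that appears in footnote 16 below), and confirms that the pulled-back line element takes the form (5.48). This is only a check of the coordinate transformation and assumes nothing beyond the formulas already displayed.

```python
# Check of (5.46)-(5.48): apply the diffeomorphism (5.47) to ds^2 = d rho^2 - e^{2 rho} d tau^2
# (with ell_2 = 1) for f(tau) = exp(a*tau) and verify the resulting metric components.
import sympy as sp

tau, rho, a = sp.symbols('tau rho a', positive=True)
f = sp.exp(a * tau)
fp, fpp = sp.diff(f, tau), sp.diff(f, tau, 2)

D = sp.exp(2 * rho) - sp.Rational(1, 4) * fpp**2 / fp**2
T = f + sp.Rational(1, 2) * fpp / D          # new boundary time, first line of (5.47)
eP = sp.exp(-rho) / fp * D                   # e^{P}, second line of (5.47)

dP_drho, dP_dtau = sp.diff(eP, rho) / eP, sp.diff(eP, tau) / eP
g_rr = dP_drho**2 - eP**2 * sp.diff(T, rho)**2
g_rt = dP_drho * dP_dtau - eP**2 * sp.diff(T, rho) * sp.diff(T, tau)
g_tt = dP_dtau**2 - eP**2 * sp.diff(T, tau)**2

schwarzian = sp.diff(fpp / fp, tau) - sp.Rational(1, 2) * (fpp / fp)**2        # = -a**2/2 here
target_tt = -(sp.exp(rho) + sp.Rational(1, 2) * schwarzian * sp.exp(-rho))**2  # (5.48) with ell_2 = 1

print(sp.simplify(g_rr - 1), sp.simplify(g_rt), sp.simplify(g_tt - target_tt))  # -> 0 0 0
```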
The result (5.50) is the well-known Schwarzian effective action that is characteristic of JT gravity [37]. Near-extremal entropy. Finally, we turn to extracting some thermodynamic information from (5.50). For this, we take a static background where all functions are independent of \(\tau\). In particular we take16 \[\lambda(\tau)=\lambda_{0}\,\qquad\beta(\tau)=\beta_{0}. \tag{5.51}\] Footnote 16: In terms of the time reparametrization mode, we have \(f(\tau)=e^{f_{0}\tau}\), and hence \(\beta_{0}=-\frac{\ell_{2}^{2}}{4}f_{0}^{2}\). For this configuration, the background AdS\({}_{2}\) solution has a horizon at \(e^{2\rho_{h}/\ell_{2}}=-\beta_{0}\). The temperature associated with this horizon is \[T_{\text{2d}}=\frac{1}{2\pi}\partial_{\rho}\sqrt{-\bar{\gamma}_{\tau\tau}}\Big{|}_{\rho=\rho_{h}}=\frac{1}{\pi\ell_{2}}\sqrt{|\beta_{0}|}. \tag{5.52}\] To extract the entropy, we take a Euclidean approach, for which we evaluate the renormalized action (5.50) in Euclidean signature. By Wick rotating time, \(\tau=i\tau_{E}\) with \(\tau_{E}\sim\tau_{E}+T_{\text{2d}}^{-1}\), the action (5.50) reads \[I_{\text{ren}}^{E}=\frac{\ell_{2}}{48G_{3}\mu}e^{-2\phi_{0}}\int_{0}^{\frac{1}{T_{\text{2d}}}}\text{d}\tau_{E}\,\lambda_{0}\,(2\pi T_{\text{2d}})^{2}=\frac{\pi^{2}\ell_{2}}{12G_{3}\mu}e^{-2\phi_{0}}\,\lambda_{0}\,T_{\text{2d}}\, \tag{5.53}\] where we used (5.51) and (5.52). We take the usual relation between the on-shell action and entropy dictated by thermodynamics, which gives \[S_{\text{2d}}=(1+T_{\text{2d}}\partial_{T_{\text{2d}}})I_{\text{ren}}^{E}=\frac{\pi^{2}\ell_{2}}{6G_{3}\mu}e^{-2\phi_{0}}\,\lambda_{0}\,T_{\text{2d}}. \tag{5.54}\] The entropy we have derived here, \(S_{\rm 2d}\), is the entropy of the near-AdS\({}_{2}\) background. That is, the entropy as a deviation away from the fixed IR point, which is controlled by \(\lambda_{0}\). It excludes the zero temperature residual entropy, since \(I_{\rm eff}\) and \(I_{\rm ren}\) do not capture that contribution. In our next and final section we will make comparisons with the warped black hole backgrounds in the canonical and quadratic ensemble.

## 6 Comparing perspectives and ensembles

Having done independent analyses in the three previous sections, we now turn to our main task of comparing and contrasting our findings. We will be able to take three different perspectives: in the three-dimensional arena, we will contrast the responses of the WBHs in their two different ensembles; from the two-dimensional dual, where the lamppost is a warped CFT, we will contrast their thermodynamic response; and from the IR perspective of near-AdS\({}_{2}\) dynamics we will disentangle how these come together or apart.

### Comparing ensembles

In this first portion, we will scrutinise and contrast the analysis done in Sec. 3 and Sec. 4. This contrast will not involve a holographic component yet, just how the black hole responds in the near-extremal limit from the bulk perspective. To this end, we will be emphasising similarities and differences between the canonical and quadratic ensemble. Mass gap. The response we found for both ensembles is generic: upon moving away from extremality by slightly increasing the temperature, the mass and entropy of the black hole increase quadratically and linearly in the temperature, respectively.
More explicitly, we have \[M^{\bullet} =M^{\bullet}_{\rm ext}+\frac{(T^{\bullet})^{2}}{M^{\bullet}_{\rm gap}}+\mathcal{O}(\epsilon^{3})\,\] \[S^{\bullet} =S^{\bullet}_{\rm ext}+2\frac{T^{\bullet}}{M^{\bullet}_{\rm gap}}+\mathcal{O}(\epsilon^{2})\, \tag{6.1}\] where the \(\bullet\) indicates either CE or QE, and \[M^{\rm CE}_{\rm gap}\equiv\frac{6G_{3}}{\pi^{2}\ell}\,\frac{\nu(3+\nu^{2})}{(3+5\nu^{2})}\Omega^{\rm CE}_{\rm ext}\,\qquad M^{\rm QE}_{\rm gap}\equiv\frac{6G_{3}\sqrt{1-2H^{2}}}{\pi^{2}L(1-H^{2})}=\frac{2}{\Omega^{\rm CE}_{\rm ext}}\times M^{\rm CE}_{\rm gap}. \tag{6.2}\] For the second equality of the quadratic ensemble mass gap we used the relation (2.25) between \(H\) and \(\nu\). From (2.43), very near to extremality we also find \[T^{\text{\tiny{QE}}}=2\frac{T^{\text{CE}}}{\Omega_{\text{ext}}^{\text{CE}}}+\mathcal{O}(\epsilon^{2}). \tag{6.3}\] Therefore, in agreement with the ties discussed at the end of Sec. 2.2, the entropies are reporting the same answer: \(S^{\text{\tiny{QE}}}=S^{\text{CE}}\) in (6.1). A very interesting difference between the ensembles comes in the parameter (scale) that controls the thermodynamic response. Note that the mass gap for the canonical ensemble depends on the angular potential at extremality (3.2); for the quadratic ensemble this potential is unity at extremality according to (4.2). This is significant since the expansions in (6.1) assume large extremal entropy, and therefore from (3.3) we have \[S^{\text{\tiny{CE}}}_{\text{ext}}\gg 1\quad\Rightarrow\quad G_{3}\Omega_{\text{ext}}^{\text{\tiny{CE}}}\ll 1. \tag{6.4}\] That is, the angular potential is small in Planck units. This brings some tension to (6.3): the two temperatures differ by a large factor, placing the two ensembles in different regimes of near-extremality. The canonical ensemble is cooler than the quadratic ensemble. In the limit \(\nu\to 1\) (\(H^{2}\to 0\)) the mass gap of the canonical ensemble diverges, due to \(\Omega_{\text{ext}}^{\text{\tiny{CE}}}\). However, in this limit the mass gap for the QE remains finite. Another interesting limit is \(\nu\to 0\) (\(H^{2}=1/2\)); here we find that for both ensembles \(M_{\text{gap}}^{\bullet}=0\). This is not surprising since when \(\nu\to 0\) (\(H^{2}=1/2\)), we get pure Chern-Simons theory and WBHs are no longer valid solutions. Two-point function. Next, we compare the two-point functions (3.32) and (4.27). We have \[\begin{split} G_{\text{\tiny{CE}}}(\omega_{\text{ir}})&=\left(8\pi\frac{\ell_{2}^{2}}{\ell^{2}}\frac{R_{0}}{r_{0}}\frac{\ell}{\beta^{\text{\tiny{CE}}}}\,\right)^{2\Delta_{\text{CE}}-1}G_{\text{\tiny{AdS}}_{2}}(\omega_{\text{ir}})\,\\ G_{\text{\tiny{QE}}}(\omega_{\text{ir}})&=\left(\frac{2\pi L}{\mathbf{r}_{0}}\frac{1}{\beta^{\text{\tiny{QE}}}}\right)^{2\Delta_{\text{QE}}-1}G_{\text{\tiny{AdS}}_{2}}(\omega_{\text{ir}})\,\end{split} \tag{6.5}\] where, for concreteness, we are defining the AdS\({}_{2}\) two point function as \[G_{\text{\tiny{AdS}}_{2}}(\omega_{\text{ir}})=\frac{\Gamma(1-2\Delta_{\bullet})\Gamma\left(\Delta_{\bullet}\right)}{\Gamma(2\Delta_{\bullet}-1)\Gamma\left(1-\Delta_{\bullet}\right)}\frac{\Gamma\left(\Delta_{\bullet}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\text{ir}}\right)}{\Gamma\left(1-\Delta_{\bullet}-i\frac{\ell_{2}}{\mathfrak{d}}\omega_{\text{ir}}\right)}, \tag{6.6}\] and recall that \(\Delta_{\text{\tiny{CE}}}=\Delta_{\text{\tiny{QE}}}\) in the near-extremal limit. The contrast between the two expressions in (6.5) is similar to our thermodynamic response.
All the scales appearing in \(G_{\text{\tiny CE}}(\omega_{\text{ir}})\) are roughly order one since \(\ell_{2}\sim\ell\) and \(R_{0}\sim r_{0}\). However, in \(G_{\text{\tiny QE}}(\omega_{\text{ir}})\) we have that \(\mathpzc{r}_{0}\gg L\); recall that \(\mathpzc{r}_{0}\) controls the extremal entropy in (4.3), which should be large. If we also take into account the relation (6.3), we see that \(G_{\text{\tiny QE}}(\omega_{\text{ir}})\sim G_{\text{\tiny CE}}(\omega_{\text{ir}})\). In other words, for \(G_{\text{\tiny QE}}(\omega_{\text{ir}})\) to be non-negligible it is natural to scale \(T^{\text{\tiny QE}}\) with \(\mathpzc{r}_{0}\).

### Comparing perspectives

In this final portion we take on the task of comparing the two-dimensional perspective of Sec. 5, and its own holographic interpretation, against the three-dimensional perspective, and its holographic interpretation in terms of a WCFT. To start, let us reconcile the semi-classical entropy (6.1) with the two-dimensional counterpart obtained in Sec. 5.2. From (5.54), we obtained that the entropy in near-AdS\({}_{2}\) is given by \[S_{\text{2d}}=\frac{\pi^{2}\ell_{2}}{6G_{3}\mu}e^{-2\phi_{0}}\,\lambda_{0}\,T_{\text{2d}}. \tag{6.7}\] Deriving this expression relies on an on-shell analysis of the effective action (5.21); after renormalizing it, one finds that the entropy comes from the boundary contribution of the Schwarzian action (5.50). In the following we will write this entropy in terms of the WBH parameters via the dictionary we decoded in (5.31)-(5.33). First, it is instructive to relate the ensemble temperatures with \(T_{\text{2d}}\) in (5.52); we find \[e^{-2\phi_{0}}\lambda_{0}T_{\text{2d}}=\frac{12G_{3}\mu}{\pi^{2}\ell_{2}}\frac{T^{\bullet}}{M_{\text{\tiny gap}}^{\bullet}}\, \tag{6.8}\] which holds for both the canonical and quadratic ensemble. In checking this relation one additionally uses (3.5) and (3.7) for the canonical ensemble, and (4.5) and (4.7) for the quadratic ensemble. With this, it is simple to see that \[S_{\text{2d}}=2\frac{T^{\bullet}}{M_{\text{\tiny gap}}^{\bullet}}\, \tag{6.9}\] as expected from the universal near-extremal behaviour of black holes and the expectation that near-AdS\({}_{2}\) holography captures this correction correctly. This is another confirmation that the two-dimensional effective action (5.21) correctly captures the near-extremal regime of warped black holes. At this stage it is also important to remark that our effective theory in AdS\({}_{2}\) also captures the leading quantum correction to (6.9); this follows from the fact that in the fixed angular momentum ensemble the effective action is a Schwarzian term and the results in [37, 85] apply. The quantum entropy is therefore \[S_{\text{2d}}=2\frac{T^{\bullet}}{M_{\text{\tiny gap}}^{\bullet}}+\frac{3}{2}\log T^{\bullet}+\cdots\, \tag{6.10}\] where the dots are further corrections in \(T^{\bullet}\). The interesting perspective is to contrast our results on the gravitational side with those from an expected holographic dual. As we reviewed in Sec. 2, there is evidence that WBHs should be interpreted in terms of a warped CFT, either in the canonical or quadratic ensemble, depending on the coordinates used. In particular, we showed that the Wald entropy of non-extremal black holes agrees with the high-temperature behaviour of the partition function of the WCFT; see (2.20), (2.37) and the discussion within. However, in the present context, we are exploring a near-extremal regime, which takes us to low temperatures.
In [28], we derived the near-extremal behaviour of a WCFT, both in the canonical and quadratic ensemble. The key results we obtained are as follows. Adapting to the notation used here, for the canonical ensemble, the near-extremal limit of the WCFT partition function at fixed angular momentum \(J\) is \[Z_{J}^{\rm CE}(\beta)=e^{S_{0}-\beta E_{0}}Z_{\text{w-schw.}}(\tilde{\beta})\, \tag{6.11}\] where \[Z_{\text{w-schw.}}\left(\tilde{\beta}\right)=\left(\frac{\pi}{\tilde{\beta}}\right)^{3/2}\exp\left(\frac{\pi^{2}}{\tilde{\beta}}\right) \tag{6.12}\] is the thermal partition function of the warped Schwarzian sector in a WCFT, and \[\tilde{\beta}=\frac{3}{c}\sqrt{-\frac{J}{\mathsf{k}}}\,\beta. \tag{6.13}\] In (6.11) we also have the contributions from the extremal states, where we have \[\begin{split} E_{0}&=-\sqrt{-\mathsf{k}J}+\dots\,\\ S_{0}&=4\pi iP_{0}^{\text{vac}}\sqrt{-\frac{J}{\mathsf{k}}}+\dots\.\end{split} \tag{6.14}\] The expressions (6.11)-(6.14) are valid in the large \(c\) limit; consistency of these derivations also requires that \(\beta\sim c^{\alpha}\) and \(J\sim c^{2(\alpha-1)}\), with \(1<\alpha\leq 3/2\). The dots in (6.14) are subleading corrections in \(J\) and \(c\). From (6.11)-(6.12) we can read off the leading low-temperature behaviour to be \[S_{\text{near-wcf}}^{\text{CE}}(\beta)=(1-\beta\partial_{\beta})\ln Z_{\text{w-schw.}}=2\frac{\pi^{2}c}{3}\sqrt{-\frac{\mathsf{k}}{J}}\,T+\frac{3}{2}\log T+\cdots. \tag{6.15}\] In this expression we are ignoring temperature independent contributions, since they can be viewed as subleading corrections to \(S_{0}\). The comparison with the gravitational side is excellent. First, the independent parameters here are \(\beta\) and \(J\), which we naturally match to the gravitational counterparts in Sec. 3.1: \(\beta=\beta^{\rm CE}\) and \(J=J_{\rm ext}^{\rm CE}\). With this, comparing (6.14) to the equivalent expressions in (3.3), we see that \(E_{0}=M^{\rm CE}_{\rm ext}\) and \(S_{0}=S^{\rm CE}_{\rm ext}\) to leading order in the large \(c\) limit. The leading temperature response matches: the first term in (6.15), linear in temperature, agrees with (6.9) via (3.7). And the logarithmic correction in (6.15) is exactly what we expect from the quantum corrections of the effective action (5.50) and (6.10). All in all, we find perfect agreement between the near-extremal limit of the CE black hole, the near-AdS\({}_{2}\) effective description, and the WCFT partition function in the canonical ensemble. Next, we take the perspective from the WCFT in the quadratic ensemble. The analysis in [28] reports that \[Z^{\rm QE}_{J}(\beta)=e^{S_{0}-\beta E_{0}}Z_{\text{w-schw.}}(\tilde{\beta})\, \tag{6.16}\] is the low temperature partition function at fixed \(J\) in the quadratic ensemble of a WCFT. Here we have \[Z_{\text{w-schw.}}(\tilde{\beta})=\left(\frac{\pi}{\tilde{\beta}}\right)^{2}\exp\left(\frac{\pi^{2}}{\tilde{\beta}}\right)\,\qquad\tilde{\beta}=\frac{12}{c}\beta. \tag{6.17}\] It is crucial to stress that despite some similarities with (6.12), there is a different power-law in \(\beta\).
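To make the role of this power law explicit before reading off the corresponding entropy below, one can evaluate \(S=(1-\beta\partial_{\beta})\ln Z\) directly from the functional forms (6.12) and (6.17). The sketch below does this symbolically; it takes only those two expressions as input (with the sign inside the square root of (6.13) absorbed into a positive placeholder \(J/\mathsf{k}\)) and exhibits the \(\tfrac{3}{2}\log T\) versus \(2\log T\) corrections.

```python
# Low-temperature entropy S = (1 - beta d/dbeta) ln Z from the warped Schwarzian
# partition functions (6.12) and (6.17). Symbols are placeholders; the sign inside
# the square root of (6.13) is absorbed into J/k > 0 for simplicity.
import sympy as sp

beta, c, J, k = sp.symbols('beta c J k', positive=True)

def entropy(power, bt):
    lnZ = sp.log((sp.pi / bt) ** power * sp.exp(sp.pi ** 2 / bt))
    S = lnZ - beta * sp.diff(lnZ, beta)
    return sp.expand_log(sp.expand(S), force=True)

S_ce = entropy(sp.Rational(3, 2), 3 * sp.sqrt(J / k) * beta / c)  # canonical ensemble, (6.12)-(6.13)
S_qe = entropy(2, 12 * beta / c)                                  # quadratic ensemble, (6.17)

print(S_ce.coeff(sp.log(beta)))  # -> -3/2, i.e. a (3/2) log T correction, cf. (6.15)
print(S_qe.coeff(sp.log(beta)))  # -> -2,   i.e. a 2 log T correction instead
print(S_qe.coeff(beta, -1))      # -> pi**2*c/6, the term linear in temperature
```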
In (6.16), the extremal energy and entropy are \[E_{0} =J+\ldots\, \tag{6.18}\] \[S_{0} =2\pi\sqrt{-\langle\mathscr{P}_{0}\rangle_{\rm vac}J}+\ldots\.\] Again the dots are subleading corrections in the large \(c\) limit, and these expressions are valid when \(\beta\sim c^{-\alpha}\) and \(J\sim c^{2\alpha}\) for \(\alpha>0\). With this, the low-temperature behaviour of the entropy is \[S^{\text{QE}}_{\text{near-wcf}}(\beta)=(1-\beta\partial_{\beta})\ln Z_{\text{w-schw.}}=\frac{\pi^{2}c}{6}\,T+2\log T+\cdots\,. \tag{6.19}\] The comparison with the gravitational side is, however, problematic. Provided we identify \(\beta=\beta^{\rm QE}\) and \(J=-J^{\rm QE}_{\rm ext}\), we will find some agreement. More specifically, from (6.18) and (4.3), it is straightforward to check that \(E_{0}=M^{\rm QE}_{\rm ext}\) and \(S_{0}=S^{\rm QE}_{\rm ext}\). It is also simple to check that the linear temperature dependence of the entropy in (6.19) completely agrees with (6.9) via (4.7). However, the logarithmic correction in (6.19) does not match that of the Schwarzian effective action in (5.50) and (6.10), and this is a problem. Basically, the near-AdS\({}_{2}\) effective theory tells us that the logarithmic corrections in the fixed \((\beta,J)\) ensemble at low temperature should be \(3/2\log T\), and the same correction in the quadratic ensemble of the WCFT does not reproduce this. Although classically the quadratic ensemble seems like a valid choice to set up a holographic dictionary, we are encountering an inconsistency since it is not accounting correctly for the near-extremal entropy. We take this as evidence that at the quantum level the quadratic ensemble does not provide a consistent description of warped black holes.

## 7 Conclusions

We have described several aspects of the near-extremal limit of warped black holes in TMG. Our aim was to contrast any differences or similarities between the canonical and quadratic ensemble. In this context, Sec. 3 and Sec. 4 show compatible results at the classical level, once (2.41) is taken into account. One of our main results is the careful and detailed construction of the near-AdS\({}_{2}\) IR effective field theory description of the warped black holes, which contains the JT sector as expected. This is the same theory for both the quadratic and canonical ensemble black hole. The appeal of this theory is that, in addition to accounting correctly for classical aspects, it also accounts for the quantum corrections to the black hole entropy which depend on \(\log T\). These corrections can be contrasted with the field theoretic analysis done for WCFT in [28]: only the canonical ensemble of the WCFT reproduces this answer. We find this a useful diagnostic to discriminate between the plethora of ensembles, and asymptotic symmetry groups, that have appeared in the context of three-dimensional black holes. It is also interesting to comment on how the WAdS/CFT\({}_{2}\) proposal stands against this test. In a fixed \(J\) and \(T\) ensemble we would find agreement of the near-extremal partition function of a CFT\({}_{2}\) with (6.10): both a WCFT and a CFT\({}_{2}\) report the same answer. The key test here is to analyse the grand canonical ensemble at fixed \(\Omega\) and \(T\): here is where a WCFT and a CFT\({}_{2}\) give a different \(\log T\) correction, which is explained in [28].
Along the lines of [58], it would be interesting to carefully work out the boundary conditions in the near-AdS\({}_{2}\) region and disentangle what the near-extremal partition function at fixed \(\Omega\) and \(T\) is. In this ensemble we expect that the analysis of [63] might be relevant. Finally, we would like to remark on the unstable mode we have in the near-AdS\({}_{2}\) region. This is described around (5.16)-(5.20). Our interest here was to highlight the effects of the JT sector on the thermodynamics near extremality, and hence this operator was turned off. It would be interesting to determine whether other theories that contain WBHs as solutions also have this unstable mode. It is quite possible that this mode is due to instabilities and pathologies of TMG, and absent in other contexts. When the mode is stable in TMG, it corresponds to a relevant operator in the dual theory; it would be interesting to understand what a WCFT could predict about the fate of the system in the presence of a relevant deformation. As argued in [37], there could be instances where it dominates over the JT sector, but the precise effect, and the scale at which it enters, remains to be investigated.

## Acknowledgements

We thank Dio Anninos, Luis Apolo and Monica Guica for useful comments and discussions. AA is a Research Fellow of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium). AA is partially supported by IISN - Belgium (convention 4.4503.15) and by the Delta ITP consortium, a program of the NWO that is funded by the Dutch Ministry of Education, Culture and Science (OCW). The work of AC has been partially supported by STFC consolidated grant ST/T000694/1. BM is supported in part by the Simons Foundation Grant No. 385602 and the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number SAPIN/00047-2020. SD is a Senior Research Associate of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium). SD was supported in part by IISN - Belgium (convention 4.4503.15) and benefited from the support of the Solvay Family. SD acknowledges support of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium) through the CDR project C 60/5 - CDR/OL "Horizon holography: black holes and field theories" (2020-2022), and the PDR/OL C62/5 project "Black hole horizons: away from conformality" (2022-2025).
2310.03787
Unveiling the hidden universe with JWST: The contribution of dust-obscured galaxies to the stellar mass function at $z\sim3-8$
With the advent of JWST, we can probe the rest-frame optical emission of galaxies at $z>3$ with high sensitivity and spatial resolution, making it possible to accurately characterise red, optically-faint galaxies and thus move towards a more complete census of the galaxy population at high redshifts. To this end, we present a sample of 148 massive, dusty galaxies from the JWST/CEERS survey, colour-selected using solely JWST bands. With deep JWST/NIRCam data from 1.15$\mu$m to 4.44$\mu$m and ancillary HST/ACS and WFC3 data, we determine the physical properties of our sample using spectral energy distribution fitting with BAGPIPES. We demonstrate that our selection method efficiently identifies massive ($\mathrm{\langle \log M_\star/M_\odot \rangle \sim 10}$) and dusty ($\mathrm{\langle A_V\rangle \sim 2.7\ mag}$) sources, with a majority at $z>3$ and predominantly lying on the galaxy main-sequence. The main results of this work are the stellar mass functions (SMF) of red, optically-faint galaxies from redshifts between $3<z<8$: these galaxies make up a significant relative fraction of the pre-JWST total SMF at $3<z<4$ and $4<z<6$, and dominate the high-mass end of the pre-JWST SMF at $6<z<8$, suggesting that our census of the galaxy population needs amendment at these epochs. While larger areas need to be surveyed in the future, our results suggest already that the integrated stellar mass density at $\mathrm{\log M_\star/M_\odot\geq9.25}$ may have been underestimated in pre-JWST studies by up to $\sim$15-20\% at $z\sim3-6$, and up to $\sim$45\% at $z\sim6-8$, indicating the rapid onset of obscured stellar mass assembly in the early universe.
R. Gottumukkala, L. Barrufet, P. A. Oesch, A. Weibel, N. Allen, B. Alcalde Pampliega, E. J. Nelson, C. C. Williams, G. Brammer, Y. Fudamoto, V. González, K. E. Heintz, G. Illingworth, D. Magee, R. P. Naidu, M. Shuntov, M. Stefanon, S. Toft, F. Valentino, M. Xiao
2023-10-05T18:00:01Z
http://arxiv.org/abs/2310.03787v2
Unveiling the hidden universe with JWST: The contribution of dust-obscured galaxies to the stellar mass function at z \(\sim\) 3 - 8 ###### Abstract The emergence of massive, optically-faint galaxies in infrared observations has revealed that our view of the high-redshift Universe was previously incomplete. With the advent of JWST, we can for the first time probe the rest-frame optical emission of galaxies at \(z>3\) with high sensitivity and spatial resolution, thus moving towards a more complete census of the galaxy population at high redshifts. To this end, we present a sample of 148 massive, dusty galaxies from the JWST/CEERS survey, colour-selected using solely JWST bands. With deep JWST/NIRCam data from 1.15\(\mu\)m to 4.44\(\mu\)m and ancillary HST/ACS and WFC3 data, we determine the physical properties of our sample using spectral energy distribution fitting with BAGPIPES. We demonstrate that our selection method efficiently identifies massive (\(\langle\log{\rm M_{\star}}/{\rm M_{\odot}}\rangle\sim 10\)) and dusty (\(\langle{\rm A_{V}}\rangle\sim 2.7\) mag) sources, with a majority at \(z>3\) and predominantly lying on the galaxy main-sequence. The main results of this work are the stellar mass functions (SMF) of red, optically-faint galaxies from redshifts between \(3<z<8\): these galaxies make up a significant fraction of the pre-JWST total SMF at \(3<z<4\), and dominate the high-mass end of the pre-JWST SMF at \(4<z<6\) and \(6<z<8\), suggesting that our census of the galaxy population needs amendment at these epochs. While larger areas need to be surveyed in the future, our results suggest already that the integrated stellar mass density at \(\log{\rm M_{\star}}/{\rm M_{\odot}}>9.25\) may have been underestimated by \(\sim\)20-25% at \(z\sim 3-6\), and \(\sim\)110% at \(z\sim 6-8\). keywords: galaxies: high-redshift - galaxies: evolution - infrared: galaxies - methods: observational - techniques: photometric ## 1 Introduction For decades, observational astronomers have been on a quest to determine how the galaxy population evolves through cosmic time. The Hubble Space Telescope (HST) has pioneered the study of this question: HST has observed high-redshift galaxies, primarily through their rest-frame ultraviolet (UV) emission. These high-redshift galaxies, usually referred to as 'normal' or Lyman-break galaxies (LBGs), have been studied extensively from z \(\sim\) 3 to z \(\sim\) 11, tending to have moderate star formation rates (SFR) and stellar masses, and are thought to make up the bulk of the galaxy population (e.g., Labbe et al., 2013; Schaerer et al., 2013; Bouwens et al., 2015; Finkelstein et al., 2015; Oesch et al., 2016; Faisst et al., 2020). These mostly dust un-obscured galaxies are also thought to dominate the cosmic star formation rate density (SFRD) at z \(>\) 4, while at lower redshifts the Universe was dominated by obscured star-formation (e.g., Madau and Dickinson, 2014; Zavala et al., 2021). While 'normal', un-obscured galaxies have been well-studied, our census of the galaxy population remains incomplete at z \(>\) 3 as rest-frame UV selections systematically miss massive, obscured sources (e.g., Alcalde Pampliega et al., 2019; Wang et al., 2019). 
Over the last decade, a significant population of optically undetected galaxies with relatively bright infrared (IR) or sub-millimetre (sub-mm) emission has been discovered in Spitzer/IRAC data, some of them with ALMA detections (e.g., Huang et al., 2011; Caputi et al., 2015; Stefanon et al., 2015; Wang et al., 2016; Franco et al., 2018; Alcalde Pampliega et al., 2019; Wang et al., 2019; Yamaguchi et al., 2019; Williams et al., 2019; Sun et al., 2021; Smail et al., 2021; Manning et al., 2022; Xiao et al., 2023). They typically have very red spectral energy distributions (SEDs) and remain undetected even in deep HST \(H\)-band observations - hence their name: HST-dark galaxies. Their SEDs are not well-constrained, with only a few photometric detections and a lack of spectroscopic redshifts, which result in very large uncertainties on their photometric redshifts, stellar masses, and SFRs (e.g., Caputi et al., 2012; Stefanon et al., 2015; Williams et al., 2019; Alcalde Pampliega et al., 2019). The physical properties of these galaxies were largely unconstrained until the arrival of the James Webb Space Telescope (JWST, Gardner et al., 2023). JWST has revolutionised the field of optically-faint galaxies, providing for the first time reliable physical parameters (e.g., Barrufet et al., 2023; Nelson et al., 2022; Perez-Gonzalez et al., 2023; Rodighiero et al., 2023; Labbe et al., 2023; Gomez-Guijarro et al., 2023). With its unprecedented sensitivity and resolution in the near-IR, JWST probes the rest-frame optical emission of galaxies at \(z\geq 3\), allowing one to identify the Balmer break, a good redshift and mass indicator. Additionally, the SEDs of massive galaxies are typically highly dust-attenuated with characteristic red slopes in the rest-frame optical. With its extensive photometric coverage from 1-5 \(\mu\)m, JWST's Near-Infrared Camera (NIRCam; Rieke et al., 2023) is the ideal instrument to identify sources based on these features. The early JWST era has seen the puzzling emergence of two additional populations of galaxies. The first is a population of massive sources (\(>10^{10}\) M\({}_{\odot}\)) at \(z>7\), less than 700 Myr after the Big Bang (e.g., Labbe et al., 2023). With the currently accepted theory of hierarchical structure formation within \(\Lambda\)CDM cosmology, it is challenging to explain how galaxies could accumulate this much mass through mergers or accretion alone (Boylan-Kolchin, 2023; Menci et al., 2022), while it might still be possible to reconcile such observations with theory (Mason et al., 2023; Dekel et al., 2023). One possibility is that these sources are actually active galactic nuclei (AGN), with one Labbe et al. (2023) source being spectroscopically confirmed to be an AGN with broad emission lines (Kocevski et al., 2023). A deeper investigation into massive galaxies in the early Universe is needed in order to determine their abundance and place constraints on mass build-up. The second emergent population consists of massive quiescent galaxies at high redshifts, now spectroscopically confirmed up to \(z=4.658\) (Carnall et al., 2023). Relatively little physical insight has been provided by simulations thus far to explain the emergence of quiescent galaxies at z\(>\)3, with simulations struggling to predict observed number densities (Valentino et al., 2023; Gould et al., 2023).
While it is highly likely that sub-millimetre galaxies (SMGs) evolved into massive quiescent galaxies at \(z\sim 2\)(Toft et al., 2014), their number densities are also insufficient to explain the presence of quiescent galaxies at \(z\sim 3-4\)(Valentino et al., 2020, 2023). Hence, an important step towards understanding the emergence of quenched galaxies is to look for previously-missed massive, dusty galaxies in the early Universe and determine their stellar masses and abundances. For the study of galaxy abundances, the stellar mass function (SMF) is an extremely useful statistical tool to quantify the evolution of the galaxy population as a function of stellar mass across cosmic history. Determining the SMF at various epochs in the history of the Universe allows us to track early galaxy build-up. Several studies have so far constrained high-\(z\) SMFs with ground- and space-based multi-wavelength observations (e.g., Davidzon et al., 2017; Stefanon et al., 2015, 2017, 2021; McLeod et al., 2021; Sannini et al., 2021; Weaver et al., 2022), with the shape of the total SMF being found to be accurately described by the empirically motivated Schechter (1976) function. Given that JWST is primed to find massive, dust-obscured sources that have previously been missed in the galaxy census, this raises the question of whether or not the total SMF at high-\(z\) epochs requires modification. The central question we want to address with this work is _how do massive, dusty galaxies selected with JWST affect the high-mass end of the galaxy stellar mass function in the early Universe_? In this study, we use data from the Cosmic Evolution Early Release Science (CEERS) survey (Finkelstein et al., 2022, 2022), a JWST Cycle 1 community survey in the CANDELS/EGS field. CEERS is aimed at discovering the first galaxies and observing galaxy assembly at \(z>3\). Given its deep photometric coverage with JWST/NIRCam from 1.15\(\mu\)m - 4.44\(\mu\)m, CEERS is the ideal survey to look for red, IR-bright galaxies. This paper is structured as follows: In Section 2, we discuss the photometric data used from HST and JWST and the production of the HST-JWST merged photometric catalogue. We introduce our colour selection using photometry solely from JWST. Furthermore, we describe how we create an AGN-cleaned sample of purely star-forming galaxies. In Section 3, we explain the SED fitting performed using the Python tool BAGPIPES (Carnall et al., 2018). In Section 4, we discuss the physical properties of our sample, and situate our galaxies on the galaxy main-sequence (4.3). In Section 5, we discuss the methodology used to compute the SMFs (5.1) and present the SMFs of massive, dusty galaxies at \(3<z<4\), \(4<z<6\) and \(6<z<8\) (5.2). Finally, we discuss our sample in the context of other JWST studies in Section 6, and we summarise and conclude our study in Section 7. For this work, we assume a flat \(\Lambda\)CDM cosmological model with \(\mathrm{H_{0}}=(67.8\pm 0.9)\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.308\pm 0.012\) as found by the Planck Collaboration et al. (2016). All magnitudes are quoted in the AB magnitude system (Oke and Gunn, 1983). Throughout this paper, we use a Kroupa (2001) initial mass function (IMF). If required for comparison, we scale mass values used in the literature from Salpeter (1955) or Chabrier (2003) to Kroupa (2001) using scale factors quoted in Madau and Dickinson (2014). 
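These conventions enter directly in the quantitative results that follow; for instance, the comoving volumes used for the stellar mass functions in Section 5 depend on the adopted cosmology. As a minimal, purely illustrative sketch (the paper does not specify the numerical tools used for this step), the assumed cosmology can be set up as follows, here applied to an illustrative survey area of \(\sim\)100 arcmin\({}^{2}\):

```python
# Minimal sketch of the adopted cosmology (H0 = 67.8 km/s/Mpc, Om = 0.308) and of a
# typical downstream use: the comoving volume of a redshift slice over a given area.
# The 100 arcmin^2 survey area below is illustrative.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.8 * u.km / u.s / u.Mpc, Om0=0.308)

area = (100.0 * u.arcmin**2).to(u.sr)                   # survey area in steradians
sky_fraction = (area / (4 * np.pi * u.sr)).decompose()  # fraction of the full sky
volume = (cosmo.comoving_volume(4.0) - cosmo.comoving_volume(3.0)) * sky_fraction
print(volume.to(u.Mpc**3))                              # comoving volume of the 3 < z < 4 slice
```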
## 2 Observations and Sample Selection ### Imaging data We use data from the Cosmic Evolution Early Release Science (CEERS) programme, one of JWST's first early-release science surveys in Cycle 1, with data collected in June and December 2022 (Finkelstein et al., 2022, 2022). CEERS comprises 10 NIRCam pointings covering \(\sim\)100 arcmin\({}^{2}\) in the Extended Groth Strip (EGS) field, a CANDELS legacy field containing a wealth of ancillary HST multi-wavelength data. The NIRCam data covers a range of wavelengths from 1.15\(\mu\)m to 4.44\(\mu\)m in the following filters: F115W, F150W, F200W, F277W, F356W, F410M, and F444W (where W and M indicate a wide or medium band filter). Ancillary HST data from the ACS imager is available at wavelengths between 435nm to 814nm (in 3 filters: F435W, F606W, and F814W) and from the WFC3 imager at wavelengths between 1.05\(\mu\)m to 1.60\(\mu\)m (in 4 filters: F105W, F125W, F140W and F160W) (Koekemoer et al., 2011; Grogin et al., 2011; Stefanon et al., 2017). For this work, we use the v5 images reduced with the grizli pipeline and made publicly available by G. Brammer1, following the same steps as outlined in Valentino et al. (2023). The images include all available data over these fields taken with HST and JWST. The imaging depths as measured in circular apertures with a radius of 0.16''are listed in Table 1. They vary between 28.6 mag to 29.2 mag in the JWST wide filters and are \(\sim\) 28.3 mag in the shortest wavelength ACS imaging. Footnote 1: [https://daum-cph.github.io/dja/](https://daum-cph.github.io/dja/) ### Production of the HST-JWST photometric catalogue We use the JWST and ancillary HST images to create photometric catalogues, taking into account the wavelength-dependent point-spread function (PSF). In the following, we briefly describe how the PSF-matched photometric catalogue used in this work was produced (see Weibel et al. in prep. for details). We match the fluxes in all HST+JWST filters to the PSF resolution in the reddest JWST/NIRCam filter, F444W. For the NIRCam and WFC3 filters, we use the PSFs provided by G. Brammer for use with the CEERS grizli mosaics (Brammer, 2018)2. Footnote 2: [https://github.com/gbrammer/grizli-psf-library](https://github.com/gbrammer/grizli-psf-library) For the ACS filters, we derive effective PSFs from the science images by first identifying bright, but unsaturated stars without bright neighbouring sources or flagged pixels, from a preliminary SourceExtractor (Bertin & Arnouts, 1996). Then, we use the method EPSFBuilder from the python package photutils (Bradley et al., 2022) which is based on the model developed by Anderson & King (2000) to obtain the final effective PSFs. We compute matching kernels from each ACS and NIRCam PSF to the NIRCam/F444W PSF using the software package pypher (Boucaud et al., 2016) and convolve each flux and root mean square (rms) image with the respective kernel to match the PSF resolution in F444W. We follow a different procedure for the WFC3 filters because their PSFs are broader than the NIRCam/F444W PSF. First, we compute matching kernels from all of them and from the F444W PSF to the WFC3/F160W PSF, in the same way as described above, and produce PSF-matched flux and rms images accordingly. 
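The matching kernels above are computed with pypher; as an illustrative alternative sketch only (not the pipeline actually used), a similar Fourier-ratio kernel can be constructed in Python with photutils. The file names below are placeholders, and the PSFs are assumed to be on the same pixel grid and normalised to unit sum.

```python
from astropy.io import fits
from astropy.convolution import convolve_fft
from photutils.psf.matching import create_matching_kernel, CosineBellWindow

# Placeholder file names; real PSFs and science images come from the survey mosaics.
psf_f150w = fits.getdata("psf_f150w.fits")   # source PSF (sharper)
psf_f444w = fits.getdata("psf_f444w.fits")   # target PSF (broader)
sci_f150w = fits.getdata("sci_f150w.fits")

# Fourier-ratio matching kernel; the window suppresses high-frequency noise.
kernel = create_matching_kernel(psf_f150w, psf_f444w,
                                window=CosineBellWindow(alpha=0.35))

# Convolve the F150W science image to the F444W resolution.
sci_f150w_matched = convolve_fft(sci_f150w, kernel, allow_huge=True)
```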
Then, we run SourceExtractor in dual mode, using an inverse-variance weighted stack of the unaltered F277W+F356W+F444W images as the detection image and measuring fluxes in circular apertures with a radius of 0.16''on the original images, the images that were PSF-matched to F444W as well as the images that were PSF-matched to F160W. For the final catalogue, we use the flux measurements on the original image in F444W and those on the images PSF-matched to F444W for all other filters, except the WFC3 data. For the latter, we correct the fluxes measured on the original images to match the colour between the respective filter and F444W as measured on the images PSF-matched to F160W. We scale all fluxes to the flux measured in Kron-like apertures by SourceExtractor in F444W, obtained using the default Kron parameters 2.5 and 3.5. To account for residual flux outside the Kron aperture, we measure the fraction of the energy enclosed by a circular aperture with a radius of \(\sqrt{a\,b}\) kron_radius, where \(a\), \(b\) and kron_radius characterise the Kron-ellipse, on the theoretical F444W PSF obtained from webbpsf, and divide all fluxes by that fraction. Finally, we correct all fluxes for Milky Way foreground extinction using the extinction model from Fitzpatrick & Massa (2007) through the python package extinction. To get a more realistic estimate of the rms uncertainty of our flux measurements that accounts for correlated noise, we put down circular apertures with a radius of 0.16''in 5000 random positions on the "signal-to-noise" image (i.e., the flux image divided by the rms image). We multiply the uncertainties on all fluxes, measured from the rms map respectively, by the scatter measured among those apertures. This leads to a scaling of the flux uncertainties by \(\sim\)5 - \(\sim\)35% depending on the filter - the largest correction being applied to F115W and the smallest to F444W. To identify and flag stars we used a flux ratio criterion similar to Weaver et al. (2022). We also flag objects as artefacts that are too small to be real sources (typically left-over bad pixels). The full CEERS catalogue contains over 93,000 sources. Out of these, we remove 930 sources that are either identified as stars or flagged as artefacts based on the above criteria. ### Selection of red, optically-dark/faint sources at \(z\)\(>\)\(3\) Over the last decade, numerous studies of H-dropouts and red galaxies have been conducted, with dropout and colour selections shown to be effective methods for selecting high-redshift sources. Typically, these studies combine HST+Spitzer data to select massive and dusty star-forming galaxies (e.g., Huang et al., 2011; Alcalde Pampliega et al., 2019; Wang et al., 2019; Sun et al., 2021). Several unique colour cuts have been used over the last decade using HST/WFC3 bands in the optical and Spitzer/IRAC and (recently) JWST/NIRCam bands in the near-IR (e.g., Huang et al., 2011; Caputi et al., 2012; Wang et al., 2016; Alcalde Pampliega et al., 2019; Wang et al., 2019; Sun et al., 2021; Barrrufet et al., 2023; Labbe et al., 2023; Nelson et al., 2022; Rodighiero et al., 2023; Perez-Gonzalez et al., 2023; Xiao et al., 2023). 
\begin{table} \begin{tabular}{c c c} Telescope/Instrument & Filter & 5\(\sigma\) depth [AB mag] \\ \hline \hline \multirow{4}{*}{HST/ACS} & F435W & 28.27 \\ & F606W & 28.36 \\ & F814W & 28.19 \\ \hline \multirow{4}{*}{HST/WFC3} & F105W & 27.96 \\ & F125W & 27.74 \\ & F140W & 26.99 \\ & F160W & 27.81 \\ \hline \multirow{4}{*}{JWST/NIRCam} & F115W & 28.63 \\ & F150W & 28.65 \\ \cline{1-1} & F200W & 28.93 \\ \cline{1-1} & F277W & 29.17 \\ \cline{1-1} & F356W & 29.17 \\ \cline{1-1} & F410M & 28.41 \\ \cline{1-1} & F444W & 28.81 \\ \end{tabular} \end{table} Table 1: 5\(\sigma\) depths in HST/ACS, HST/WFC3 and JWST/NIRCam filters. Depths are quoted in AB magnitudes.

Here, we build on these and make a broad selection of red galaxies using solely JWST/NIRCam bands in order to fully exploit the increased sensitivity and resolution of JWST. By designing and implementing a colour selection capable of identifying the effects of the Balmer break and reddened stellar continuum emission in a galaxy's photometry, we expect to select massive and dusty galaxies at high redshifts. For that, we use the Python tool BAGPIPES to investigate the evolution of colour with redshift. We generate galaxy spectra, from which we extract the photometry and compute modelled colours. We use a delayed-\(\tau\) star-formation history, ages of 1 Gyr, an \(e\)-folding time of 3 Gyr, a mass of \(10^{10}\) M\({}_{\odot}\) and a metallicity of 0.5 \(Z_{\odot}\). We model galaxies at redshifts between \(z=(1.,6.)\) in steps of \(\Delta z=0.1\) and at discrete dust attenuation values of A\({}_{\rm V}=[2.,3.,4.]\) mag using a Calzetti dust model (Calzetti et al., 2000) to produce the SED tracks of massive, dusty galaxies shown by the coloured lines in Figure 1. We also model a single SED track of a \(10^{11}\) M\({}_{\odot}\) massive galaxy with A\({}_{\rm V}=4\) mag. As the Balmer break gets redshifted beyond 1.5\(\mu\)m at \(z\gtrsim 3\), we design a colour cut that requires galaxies to be faint in F150W in comparison to longer wavelength bands. Perez-Gonzalez et al. (2023) show with a JWST-selected sample that HST-faint sources extend to higher masses than HST-dark sources. We therefore move beyond the strict HST-dark classification (as defined in Perez-Gonzalez et al., 2023) by including HST-faint sources in our selection, so as not to miss the most massive and bright galaxies. We also use the F444W band to obtain the broadest redshift range possible (as the highest redshift sources will have their Balmer break closer to F444W). Given our choice of the F150W and F444W bands, we identify the F150W - F444W colour at which we expect to select galaxies that are (i) high redshift (\(z\gtrsim 3\)), (ii) massive (\(\log{\rm M_{\star}}/{\rm M_{\odot}}\sim 10\)), and (iii) dusty (A\({}_{\rm V}\gtrsim 2\) mag). In addition, from the SED tracks shown in Figure 1, we estimate the F150W magnitude at which we can rid the sample of low-\(z\) contaminants (\(z\lesssim 2\)) while retaining the most massive and dusty galaxies in our sample. Using the SED modelling described above, we determine a selection that is optimised to identify galaxies with A\({}_{\rm V}\gtrsim 2\) mag and \(\log({\rm M_{\star}}/{\rm M_{\odot}})\sim 10\) at \(z\gtrsim 3\), described in Equations 1 and 2: \[F150W-F444W >2.1, \tag{1}\] \[F150W >25\,{\rm mag}. \tag{2}\] Additionally, as the prominent feature of our galaxies is their redness, they must have significant emission in the long wavelength bands.
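As a concrete illustration, the cuts of Equations 1 and 2 amount to a simple boolean selection on the catalogue magnitudes; the arrays below are synthetic placeholders, and the additional long-wavelength S/N requirements are described next.

```python
import numpy as np

# Toy stand-in for the merged HST-JWST catalogue (column contents are hypothetical).
rng = np.random.default_rng(0)
f150w = rng.uniform(23.0, 29.0, 1000)           # AB mag
f444w = f150w - rng.uniform(-0.5, 3.0, 1000)    # AB mag

red = (f150w - f444w) > 2.1     # Equation 1: red F150W - F444W colour
faint_sw = f150w > 25.0         # Equation 2: faint at 1.5 micron
selected = red & faint_sw       # 5-sigma S/N cuts in F277W/F356W/F444W follow

print(f"{selected.sum()} red candidates out of {f150w.size} sources")
```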
To ensure reliable detections, we require SNR \(>5\) in all three wide filters in the long wavelength channels: F277W, F356W, and F444W. Altogether, this colour selection is more flexible than in previous studies (i.e. Barrufet et al., 2023); we later remove the \(z<3\) contaminants after evaluating their physical properties (see Section 4.1). We find 179 galaxies that satisfy the F150W-F444W>2.1 and F150W>25mag criteria out of the \(>\)90,000 sources in our catalogue (see Figure 1). ### Identifying and removing obscured AGN In recent literature, there has been mounting evidence from JWST of a population of intermediate redshift obscured AGN that displays very red colours in the NIR (Labbe et al., 2023; Matthee et al., 2023; Barro et al., 2023; Greene et al., 2023). These so-called little red dots (LRDs) have characteristically blue rest-UV colours which possibly arise from star-forming regions, and red rest-optical colours that arise from the hot, dusty torus of the AGN (Labbe et al., 2023; Greene et al., 2023). These sources are potential contaminants in selections of red, star-forming galaxies, and it is important to address their presence in our sample. Based on the colour and compactness criteria outlined in Labbe et al. (2023) and Greene et al. (2023), we identify a parent sample of 29 potential AGN candidates. In order to further identify point-like sources, we perform a two-component PSF+Sersic fit in the F444W filter using the Galfit3(Haussler et al., 2013; Vika et al., 2015) software, identifying sources where the flux associated with the PSF component exceeds the flux associated with the Sersic component (Labbe et al., 2023). We identify 20 sources that satisfy these criteria. Footnote 3: [https://www.nottingham.ac.uk/astronomy/megamorph/](https://www.nottingham.ac.uk/astronomy/megamorph/) We remove these 20 sources from our sample during analysis (Section 4 onwards), thus considering a purely star-forming sample of galaxies. Figure A1 in Appendix A shows the postage stamps and SED of source 6583, identified as one of the 20 AGN candidates in our sample selection. Figure A2 shows the effect of AGN on the SMF, showing that in particular the SMF at \(6<z<8\) is significantly overestimated by including AGN. ## 3 SED fitting with Bagpipes to determine the physical properties of galaxies To calculate the physical properties of our sample, we use the Python tool Bayesian Analysis of Galaxies for Physical Inference and Pa Figure 1: Colour-magnitude diagram of F150W-F444W vs. F444W showing our selection method. The grey scatter points show all the sources in the CEERS catalogue while the orange scatter points are our selected galaxies. The coloured lines are SED tracks for various dust attenuation values (A\({}_{\rm V}\)s indicated in boxes), with coloured numbers indicating the redshift – solid lines correspond to \(10^{10}\) M\({}_{\odot}\) and the dashed line corresponds to \(10^{11}\) M\({}_{\odot}\). The grey arrows show upper limits for the sources with F150W lower than 2\(\sigma\) (median errors are shown by the cross in the upper right of the figure). The colour criterion F150W-F444W\(>\)2.1 (black dashed line) in principle identifies \(z>3\) sources, while the magnitude cut F150W\(>\)25 mag (dark red dashed line) is designed to rid the sample of low-\(z\) contaminants while retaining the most massive galaxies in the sample (\(\sim 10^{11}\) M\({}_{\odot}\)). 
Also shown for reference is the F150W = 26 mag cut that is a proxy for identifying HST-dark sources (Perez-González et al., 2023). We select 179 red galaxies with these criteria that theoretically restrict our sample to massive, dusty, high-redshift galaxies. rameter EEstimation (BAGPIPES, Carnall et al., 2018)4. BAGPIPES is capable of modelling galaxies with various star formation histories (such as delayed-\(\tau\), exponential, constant, bursts, etc.) and dust models (Cardelli et al., 1989; Calzetti et al., 2000; Charlot & Fall, 2000, etc.), using Stellar Population Synthesis (SPS; Bruzual & Charlot, 2003)) models. We choose to use a delayed-\(\tau\) SFH, which has been shown as an effective SFH to model the bulk of the stellar population, and accurately recover stellar masses (Ciesla et al., 2017). Furthermore, this SFH has been successfully used in previous studies of HST-dark galaxies and massive galaxies (Wang et al., 2016, 2019; Alcalde Pamliega et al., 2019; Perez-Gonzalez et al., 2023; Barrufet et al., 2023). Footnote 4: [https://bagpipes.readthedocs.io/en/latest/](https://bagpipes.readthedocs.io/en/latest/) We perform SED-fitting within a broad parameter space, allowing the code to explore the following ranges: redshifts between \(z=(0,10)\), a delayed-\(\tau\) SF history with \(\tau=(0.1,9)\) Gyr, masses in the range \(\log\rm M_{\bullet}/M_{\odot}=(6,13)\), metallicities between \(\rm Z=(0.2,1.2)\,Z_{\odot}\), a Calzetti dust model with \(\rm A_{V}=(0.2,4)\) mag, nebular emission with an ionisation parameter of \(\log\rm U=-2\), and a velocity dispersion of 300. The models chosen have been successfully used for similar types of galaxies, being able to fit red SEDs (Wang et al., 2016, 2019; Barrufet et al., 2023). The broad parameter space in each model allows us to explore this enigmatic galaxy population and unveil their physical properties in more detail, in particular their stellar masses. To test the suitability of our chosen \(\rm A_{V}\) range, we allow \(\rm A_{V}\) to vary from (0, 6) mag, finding that some galaxies were fit to very dusty (\(\rm A_{V}>4\) mag) solutions at low redshifts (\(z<0.75\)). Galaxies with similar properties have been reported in Caputi et al. (2012) and more recently in Bisigello et al. (2023), where BAGPIPES is used. We Figure 2: Postage stamps and SED fits of four selected galaxies from our sample of red galaxies. The stamps boxed in blue are from the previous HST/ACS and WFC3 data, and the stamps boxed in red are new from JWST/NIRCam (each stamp is \(4\times 4\) arcsec\({}^{2}\)). There is a variety in the morphological properties of our sample, ranging from spatially extended sources to compact ones. The lower panels display the SED fits: the maroon points represent the photometry and the downward arrows represent the flux upper limits. The orange lines are the SED fits from BAGPIPES and the probability density function of redshift is inlaid in the lower right part of the graphs respectively. The physical properties of these galaxies are quoted on the graphs. They are massive (\(\log\rm M_{\bullet}>9.5\)) and dusty (\(\rm A_{V}\sim 1.5-4\) mag) with redshifts ranging from \(z\sim 3-8\). compare the redshifts from BAGPIPES with redshifts derived from the Easy and Accurate Zphot from Yale (EAZY) software (Brammer et al., 2008). We find that the photo-\(z\)'s of the \(\mathrm{Av}>4\) mag sources are not in good agreement with EAZY, where EAZY typically finds higher-\(z\) solutions with lower \(\mathrm{Av}_{\mathrm{I}}\). 
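As a concrete illustration of the fitting set-up described in this section, a minimal BAGPIPES configuration might look as follows. The loader function, filter list and object ID are placeholders, the age prior is an assumption (it is not stated explicitly in the text), and the remaining ranges transcribe the values above.

```python
import numpy as np
import bagpipes as pipes

def load_phot(object_id):
    # Placeholder loader: returns an (N_bands, 2) array of fluxes and errors.
    return np.loadtxt(f"phot_{object_id}.txt")   # hypothetical per-object file

filt_list = ["filters/jwst_f115w.dat", "filters/jwst_f444w.dat"]   # placeholder filter curves

delayed = {"age": (0.01, 13.0),        # Gyr; assumed prior range
           "tau": (0.1, 9.0),          # Gyr
           "massformed": (6.0, 13.0),  # log10(M*/Msun)
           "metallicity": (0.2, 1.2)}  # Zsun

fit_instructions = {"delayed": delayed,
                    "dust": {"type": "Calzetti", "Av": (0.2, 4.0)},   # mag
                    "nebular": {"logU": -2.0},
                    "redshift": (0.0, 10.0),
                    "veldisp": 300.0}                                  # km/s

galaxy = pipes.galaxy("12345", load_phot, filt_list=filt_list, spectrum_exists=False)
fit = pipes.fit(galaxy, fit_instructions)
fit.fit(verbose=False)
```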
In addition, upon visual inspection of the postage stamps, we find that several of these sources are very compact, completely dropping out of the shorter wavelength filters and thus being more likely to lie at \(z>2\) than at \(z<0.75\). The inclusion of MIRI data could potentially rule out the low-\(z\) solutions. However, this is only available over a very small portion of the field currently. We refer to Alcalde Pampliega et al. in prep. for a more detailed analysis including longer wavelength data. Additionally, given that our aim is to derive accurate stellar masses in order to calculate the stellar mass function, we test whether the derived stellar masses change significantly if we use the EAZY photometric redshifts as an input to the BAGPIPES SED fitting. We find that with EAZY-\(z\) as an input, \((\log\mathrm{M}_{\star}/\mathrm{M}_{\odot})=10.18^{+0.40}_{-0.50}\) and with the BAGPIPES-\(z\), \((\log\mathrm{M}_{\star}/\mathrm{M}_{\odot})=10.15^{+0.43}_{-0.50}\). Both derived stellar masses follow a tight 1:1 relation with an average scatter of 0.2 dex, suggesting that the final stellar mass functions will not be strongly affected by our choice of input redshift. We use the BAGPIPES-\(z\) in all SED fittings. Examples of some SED fits are shown in Figure 2, which showcases the variety in galaxy morphology and physical properties. Most of our sources have very red slopes indicating high dust attenuation. We find diversity in morphology: some sources are spatially extended, while others are extremely compact (see Figure 2). We performed a visual inspection of SEDs and postage stamps for all sources while considering their derived physical properties. We remove 11 sources from our sample due to either clearly overestimated photometric redshifts and masses (spatially extended sources that are likely at lower redshift) or sources with deblending issues. Our final sample thus contains 148 galaxies. ## 4 Physical properties of red, optically-faint galaxies JWST's outstanding sensitivity and resolution in the near-IR allow us to determine photometric redshifts and physical parameters (such as stellar masses, star formation rates, etc.) with unprecedented accuracy. This will allow us to place tighter constraints on the stellar mass build-up in the early Universe. In this section, we present the photometric redshifts and physical characteristics of our galaxies as determined with BAGPIPES at redshifts of \(3<z<8\) (see Table 2 for a list of the derived physical parameters of our full sample; AGN candidates are denoted as such but removed from the following analysis). ### Photometric redshifts We determine photometric redshifts for our sample of red galaxies using BAGPIPES (see Section 3). The redshift distribution is shown in Figure 3. \(\sim\)60% of the sample lies at \(z\gtrsim 3\) and \(\sim\) 90% at \(z\gtrsim 2\), with an average redshift of \(z_{\mathrm{mean}}=3.46\). This shows that our colour selection efficiently identifies galaxies at high redshifts. The redshift is mostly in agreement with EAZY redshifts using standard templates. Given that our colour cut is designed to select red galaxies at \(z>3\), we expect our sample to be highly incomplete below this redshift. Additionally, the magnitude cut of F150W \(>25\) mag is designed to rid our sample of low-\(z\) contaminants, further suggesting sample incompleteness at \(z<3\). 
Therefore, we study the physical properties of galaxies at all redshifts in our sample but calculate SMFs only for those galaxies at \(z>3\) (shaded region in Figure 3). We draw the reader's attention to a caveat of this selection technique, namely the two local peaks seen in the redshift distribution at \(z\sim 5.5\) and \(z\sim 7.5\) in Figure 3. The F444W detection is likely driven by the H\(\alpha\)+[NII] lines at \(z\sim 4.9-6.6\), and the [OIII]+H\(\beta\) lines at \(z=6.9-9.0\) (see, e.g., Oesch et al. 2023). The samples at these redshifts are thus qualitatively different from the bulk sample because their 'redness' comes from emission lines rather than the continuum. However, we note that our selection requires 5\(\sigma\) detections in the long-wavelength filters (F277W, F356W and F444W), thus ensuring that the continuum is relatively bright over an extended wavelength range and not just in F444W.

Figure 3: Photometric redshift distribution for 148 red galaxies, determined with the SED fitting tool BAGPIPES. The average redshift is \(z_{\rm mean}=3.46\), with the 16\({}^{\rm th}\), 50\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles being 2.11, 3.13 and 4.65. \(\sim\)60% of the sample lies at \(z>3\). Our colour selection is therefore effective at selecting high-\(z\) candidates.

Figure 4: [Top left to bottom right] Histograms of stellar masses, star formation rates, specific star formation rates, and dust attenuations of our sample of 148 red galaxies. The bold black dashed line indicates the 50\({}^{\rm th}\) percentile while the thin dashed lines indicate the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles. Our sample is massive (\(\langle\log{\rm M}_{\star}/{\rm M}_{\odot}\rangle=10.15^{+0.43}_{-0.50}\)) and dusty (\(\langle{\rm A_{V}}\rangle=2.71^{+0.88}_{-0.91}\) mag), with moderate SFRs of \(\langle\log{\rm SFR}/{\rm M}_{\odot}{\rm yr}^{-1}\rangle=1.64^{+0.43}_{-0.68}\) (on average below 50 M\({}_{\odot}\) yr\({}^{-1}\)) and specific SFRs of \(\langle{\rm sSFR}/{\rm Gyr}^{-1}\rangle=2.66^{+1.72}_{-1.42}\).

### Physical properties of red galaxies

One of JWST's most important improvements in the NIR is its increased photometric coverage at 1-5 \(\mu\)m in comparison with its predecessor, Spitzer. This allows JWST to better probe the Balmer break and thus derive more accurate photometric redshifts than previously possible. With more accurate photo-\(z\)s, we can additionally derive more reliable SED-based estimates of the stellar masses of galaxies and their star-formation rates. We present the distributions of the physical properties of our sample of red, optically-faint galaxies in Figure 4. We find these galaxies to be massive, with a median stellar mass of \(\langle\log{\rm M_{\star}}/{\rm M_{\odot}}\rangle=10.15^{+0.43}_{-0.50}\). They also have high dust attenuations of \(\langle{\rm A_{V}}\rangle=2.71^{+0.88}_{-0.91}\) mag. Additionally, we find our galaxies to have moderate star-formation rates, with \(\langle\log{\rm SFR}/{\rm M_{\odot}}{\rm yr}^{-1}\rangle=1.64^{+0.43}_{-0.68}\) and \(\langle{\rm sSFR}/{\rm Gyr}^{-1}\rangle=2.66^{+4.72}_{-1.42}\). As expected, we find optically-faint galaxies to be relatively massive and dusty star-forming systems. To further illustrate the dusty nature of our galaxies we place them on the widely used UVJ diagram. We classify the star-forming vs. quiescent regions on the UVJ diagram following Williams et al.
(2009), and further split the star-forming region into dusty and unobscured zones following the classification in Spitler et al. (2014). Figure 5 shows the UVJ classification of our galaxies and of full CEERS sample. Rest-frame colours for our red galaxies are determined by the best-fit SEDs from BAGPIPES, while for the full CEERS sample they are determined with EAZY due to less expensive computational time. Except for one galaxy laying in the quiescent region of the diagram, the sample lies in the star-forming region, with \(\sim 75\%\) of the sample laying particularly in the dusty region. Thus the UVJ classification further indicates the dust-obscured nature of our sample. ### Red galaxies on the galaxy main-sequence To place our galaxies within the context of galaxy evolution, we explore their position on the galaxy main-sequence. Figure 6 shows a plot of SFR vs. \({\rm M_{\star}}\) for our sample, comparing them to the star-forming main-sequence (MS) of galaxies at \(z\)= 2, 4, and 6 (from Speagle et al. 2014). As shown, our galaxies lie on the star-forming MS, indicative of the 'normal' nature of their ongoing star-formation. The three galaxies lying significantly below the SF main-sequence are candidate quiescent galaxies at \(z<3\). They form less than 2% of our sample. We compare our galaxy sample at \(3<z<5\) to two studies of interest from the literature: Wang et al. (2019), that studied ALMA-detected HST-dark galaxies using HST and Spitzer, and the more recent Barrufet et al. (2023) study that studied HST-dark galaxies with HST and JWST. The comparison between samples is shown in Figure 7. The high-mass end of our sample overlaps with the Wang et al. (2019) sample as our colour selection is inclusive of the Wang et al. (2019) selection criteria. Additionally, the Spitzer/IRAC sensitivity is considerably lower than JWST/NIRCam in the same range, thus resulting in detecting only the brightest and most massive galaxies. We also select lower-mass galaxies than Wang et al. (2019) as JWST can detect galaxies that are fainter in F444W, and our colour selection is less extreme than that used in Wang et al. (2019). The low-mass end of our sample overlaps with the range covered Figure 5: UV vs. V-J colours of our sample (coloured by redshift), and the CEERS sample (grey scatter points). Uncertainties are given by the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles of the posterior distribution from SED-fitting with BAGPIPES. The galaxy classifications indicated by the black dashed lines are adopted from Williams et al. (2009) and Spitler et al. (2014). All our galaxies (except one) lie in the star-forming regions of the diagram, with \(\sim 75\%\) of the sample lying in the dusty star-forming region. There is a clear tendency for less dusty sources to be at higher redshift. Figure 6: Star formation rate vs. stellar mass for our sample of red, optically-faint galaxies, coloured by photo-\(z\) (circular scatter points). Uncertainties are given by the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentile of the posterior distribution from SED-fitting with BAGPIPES. The galaxy main-sequence lines shown at \(z=2,~{}4,~{}\)and 6 are from Speagle et al. (2014) (solid coloured lines with scatter). The majority of our sample lies on the MS at redshifts of \(z<6\), suggesting that they are normal star-forming galaxies with moderate SFRs. The three sources lying significantly below the MS are candidate quiescent galaxies. by HST-dark galaxies from Barrufet et al. (2023). 
This study specifically looked at H-dropouts (\(H>27\) mag), with JWST/NIRCam's sensitivity permitting detections of lower-mass systems. However, this magnitude cut also limits the detection of brighter, higher-mass sources. By using a less restrictive magnitude cut at 1.5 \(\mu\)m (F150W \(>25\)mag) our selection criteria ensure we find higher-mass galaxies than in Barrufet et al. (2023) while still including the lower-mass HST-dark galaxies in this study. In Figure 7, we show that our sample of red, optically-faint sources lie on the galaxy main-sequence, similar to HST-dark galaxies (Barrufet et al., 2023). The comparison with these select studies from the literature shows that the mass-range spanned by our sample overlaps with both pre-JWST and JWST-selected HST-dark/faint galaxies. Stellar mass functions of red galaxies: finding the missing sources that dominate the high-mass end In this section, we present the SMFs of red, optically-faint galaxies at redshifts of \(3<z<8\). We describe the method used to derive the SMFs and their uncertainties. The SMFs are then presented, discussed, and compared to studies in the literature. We note that the sample statistics quoted in the previous section were for the full sample of 148 red galaxies. For the galaxies in the redshift range \(3<z<8\), the average stellar masses and dust attenuation values are \(\langle\log{\rm M_{\star}/M_{\odot}}\rangle=10.17^{+0.41}_{-0.56}\) and \(\langle{\rm A_{V}}\rangle=2.30^{+1.22}_{-0.56}\) mag, setting the stage for the exploration of the SMFs of obscured massive galaxies at these epochs. ### Determining SMFs We use the step-wise method to calculate the SMFs of our sample (Bouwens et al., 2008; Santini et al., 2021). The SMFs are approximated by binning the mass distribution, calculating the number of galaxies within each mass bin and dividing this number by the differential comoving volume of the survey. The mass resolution is judiciously chosen to have reasonable statistics within individual mass bins and to have an appropriate mass resolution in order to determine the shape of the SMF. Given that we use the bands F444W, F356W, F277W, and F150W in our selection, we determine the area overlapped by all four filters in the CEERS survey to be 83.3 arcmin\({}^{2}\). We accordingly calculate the differential comoving volume within the considered redshift bins respectively. The final MFs are calculated as shown in Equation 3, where \(\Phi_{i,j}\) is the estimated number density per \(\Delta\log M\) in a redshift bin '\(t\)' and mass bin '\(j\)'. \(N_{j}\) is the number of galaxies in the \(j\)'th mass bin, \(dV_{\rm i,\ comoving}\) is the differential comoving volume determined within the \(i\)'th redshift bin and \(f_{\star}\) is a multiplicative factor derived from a completeness simulation used to account for missing sources in our detection catalogues (described in 5.1.1). \[\Phi_{i,j}=N_{j}\ /\ ({\rm d}V_{i,\rm comoving}\cdot\Delta\log{\rm M}\cdot f _{\star}). \tag{3}\] #### 5.1.1 Completeness We measure the source detection completeness by running a simple simulation using our custom version of the publicly available software GALCAIR2(Carrasco et al., 2018; Leethochawalit et al., 2022). We first select a representative 1.5\({}^{\prime}\)\(\times\)1.5\({}^{\prime}\)cutout approximately in the middle of the CEERS image with average depth and no contamination by bright stars. 
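As an illustration of Equation 3, the number densities can be evaluated with astropy as in the sketch below; the survey area and cosmology follow the text, while the counts, redshift bin and bin width are placeholders.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)        # Planck Collaboration et al. (2016)
area = (83.3 * u.arcmin**2).to(u.steradian)      # overlap of F150W, F277W, F356W, F444W
sky_fraction = (area / (4 * np.pi * u.steradian)).decompose()

def phi_stepwise(n_gal, z_lo, z_hi, dlogm, f_completeness=0.87):
    """Stepwise number density per dex (Equation 3) in one redshift/mass bin."""
    v_survey = (cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)) * sky_fraction
    return n_gal / (v_survey.to(u.Mpc**3).value * dlogm * f_completeness)

# Placeholder counts in 0.5 dex wide mass bins for a 3 < z < 4 slice:
print(phi_stepwise(n_gal=np.array([3, 11, 13, 2]), z_lo=3.0, z_hi=4.0, dlogm=0.5))
```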
Using GALCAIR2, we inject artificial sources, spanning a range of input UV magnitudes from -24.4 to -16.2 in 35 bins at a fixed redshift of \(z=6\) into the cutout. We inject 500 sources per bin in batches of maximally 100 sources at a time to avoid overcrowding and run SourceExtractor with the same settings as outlined in Section 2.2. The injected galaxies follow a Gaussian distribution in the logarithm of the effective radius, centred at 0.8 \(kpc\) and with a scatter of 0.17 dex and they have Sersic light profiles with 50% of the galaxies having a Sersic index of 1.5, and 25% having indices of 1 and 2 respectively. We further assume a flat SED (i.e., a fixed UV-slope of \(\beta=-2\)), since we only wish to estimate the completeness as a function of apparent magnitude. We repeat this experiment 10 times, therefore injecting 175,000 sources in total. To obtain the completeness of our sample, we first measure the fraction of recovered galaxies as a function of the input magnitude. Then, for each bin in apparent output magnitude, we determine the completeness as the weighted mean of the completeness values found in each input magnitude bin, weighted by the number of sources from that bin that were observed in the given output magnitude bin. Then, we additionally determine the fraction of detected sources in each apparent magnitude bin that have a measured SNR \(>5\) in all of F277W, F356W and F444W (cf. Section 2.3) and multiply that fraction with the detection completeness obtained in the previous step. Since all the observed galaxies considered in this paper have AB-magnitudes \(\lesssim 27\) in F44W, they are in a regime where the completeness is high and approximately constant as a function of the apparent magnitude (e.g., in F444W). From our analysis, we derive a mean completeness factor of \(f_{\star}=0.87\) by which we scale all our mass functions (see Equation 3). To determine the mass limit above which we are 80% mass complete, we project our mass distribution onto the SNR limit of Figure 7: SFR vs. M\({}_{\star}\) for the subset of our sample of red, optically-faint galaxies at \(3<z<5\) (circular scatter points). We show the galaxy main-sequence at \(z=4\). from Speagle et al. (2014) (solid line with scatter). We compare our sample to HST-dark galaxies from Wang et al. (2019) with \(z_{\rm median}=4\) (empty diamonds), and a sample used from Barrufet et al. (2023) at \(3<z<5\) (red squares). Our sample overlaps with the Barrufet et al. (2023) sample at the lower-mass end and with the Wang et al. (2019) sample at the high-mass end, showing that our study covers the mass-range spanned by HST-dark/faint galaxies in both the pre-JWST and JWST era. our selection (see, e.g., Pozzetti et al. 2010). Given that we select sources that are detected with a \(5\sigma\) certainty in F444W, F356W and F277W, the SNR limit of our selection is \(\rm SNR_{\rm lim}=5\sqrt{3}\). We calculate the joint SNR for all sources in our sample as \(\rm SNR^{2}_{joint}=SNR^{2}_{F277W}+SNR^{2}_{F356W}+SNR^{2}_{F444W}\). Assuming that stellar mass values linearly scale with source brightness, we find the hypothetical mass that each source would have if detected at \(\rm SNR_{\rm lim}\): \(\rm log~{}M_{\rm hypothetical}=log~{}M_{\star}-log(SNR_{joint}/SNR_{lim})\). The 80\({}^{\rm th}\) percentile of the \(\rm M_{\rm hypothetical}\) distribution provides the limit above which the sample is 80% mass complete, given the specific mass-to-light ratios and SEDs in our sample. 
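A minimal sketch of this mass-completeness calculation is given below, with synthetic placeholder values standing in for the measured stellar masses and signal-to-noise ratios.

```python
import numpy as np

def mass_complete_limit(log_mstar, snr_f277w, snr_f356w, snr_f444w,
                        snr_lim=5.0 * np.sqrt(3.0)):
    """80% mass-completeness limit from projecting masses onto the selection S/N limit."""
    snr_joint = np.sqrt(snr_f277w**2 + snr_f356w**2 + snr_f444w**2)
    log_m_hypothetical = log_mstar - np.log10(snr_joint / snr_lim)
    return np.percentile(log_m_hypothetical, 80.0)

# Placeholder values for illustration only:
rng = np.random.default_rng(1)
log_m = rng.normal(10.1, 0.5, 60)
snrs = [rng.uniform(5.0, 80.0, 60) for _ in range(3)]
print(mass_complete_limit(log_m, *snrs))
```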
We determine that the 80% mass complete limits are \(\rm log~{}M_{\star}/M_{\odot}=9.15\) at \(3<z<4\), \(\rm log~{}M_{\star}/M_{\odot}=9.07\) at \(4<z<6\) and \(\rm log~{}M_{\star}/M_{\odot}=9.21\) at \(6<z<8\). Therefore, in general, we find that our sample is 80% mass complete above \(\rm log~{}M_{\star}/M_{\odot}\sim 9.25\) in all redshift bins, and we therefore plot the SMFs above this conservative limit. We lose a negligible number of sources by limiting the sample in this manner (1 source each in the redshift bins \(3<z<4\) and \(6<z<8\)). To account for the completeness of our sample given the flux density limits of the survey, we consider the widely used \(V/V_{\rm max}\) correction, used to test uniformity in the spatial distribution of sources (Schmidt 1968; see also Weaver et al. 2022a for a detailed explanation), which particularly affects faint sources. This method considers the maximum redshift, \(z_{\rm max}\), at which a source within a redshift bin \(\rm z_{low}<z<z_{\rm high}\) would still be observable before falling below the detection limit. Each source is then associated with a maximum observable differential comoving volume, \(V_{\rm max}\), associated with \(z_{\rm max}\), and the actual differential comoving volume it is detected in, \(V\), associated with \(z_{\rm high}\). If \(z_{\rm max}<z_{\rm high}\), the source is given a weight of \(V/V_{\rm max}\), and if \(z_{\rm max}>z_{\rm high}\), \(V/V_{\rm max}=1\) (as the source would have been detected throughout the bin in any case, and therefore does not need to be given a higher weighting). Like the step-wise method used to calculate the SMF, the \(V/V_{\rm max}\) correction is non-parametric. It assumes no functional form for the SMF, but it does assume a uniform spatial distribution of galaxies. However, Weaver et al. (2022a) show that this is problematic only at \(z<1\), thus not affecting our study. We apply the \(V/V_{\rm max}\) correction to our sources, finding that, given the redshift bins we choose, no galaxies in our sample require this correction. This is expected, as our galaxies are red by definition and on average massive and therefore bright in F444W. The \(V/V_{\rm max}\) correction mostly affects faint galaxies that are liable to be detected close to the noise threshold.

#### 5.1.2 Sources of uncertainty

We estimate the uncertainty of the SMFs by considering the Poisson noise \(\sigma_{\rm N}\), the uncertainty due to cosmic variance \(\sigma_{\rm cv}\), and the systematic uncertainty \(\sigma_{\rm sys}\) due to SED-fitting. Given that the calculation of the SMF is fundamentally a discrete counting process, the distribution of galaxies within a particular redshift and mass bin must follow Poissonian statistics. We calculate the uncertainty \(\sigma_{\rm N}\) using frequentist central confidence intervals\({}^{5}\) (see Maxwell 2011, for details). Footnote 5: [https://docs.astropy.org/en/stable/api/astropy.stats.poisson_conf_interval.html](https://docs.astropy.org/en/stable/api/astropy.stats.poisson_conf_interval.html) An added factor of uncertainty arises from cosmic variance, the field-to-field variation in galaxy number counts due to large-scale structure. It becomes an important source of uncertainty in narrow and deep surveys (Somerville et al. 2004), and is routinely included in uncertainty estimates of the stellar mass function (Davidzon et al. 2017; McLeod et al. 2021; Weaver et al. 2022a).
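The Poisson term \(\sigma_{\rm N}\) can be obtained with the astropy routine linked in the footnote above; a minimal sketch with placeholder counts:

```python
import numpy as np
from astropy.stats import poisson_conf_interval

n_gal = np.array([3, 11, 13, 2])   # placeholder galaxy counts per mass bin

# Frequentist central confidence interval on the counts (1-sigma equivalent).
lo, hi = poisson_conf_interval(n_gal, interval="frequentist-confidence")
sigma_n_lo, sigma_n_hi = n_gal - lo, hi - n_gal

# These count uncertainties scale to Phi through the same 1/(dV * dlogM * f) factor
# and are later combined in quadrature with sigma_cv and sigma_sys (Equation 4 below).
print(sigma_n_lo, sigma_n_hi)
```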
To estimate the cosmic variance \(\sigma_{\rm cv}\), we use the CosmicVarianceCalculator v1.036 (Trenti and Stiavelli 2008), eval at the respective number density of our sample. We find relative cosmic variances for our sample to lie between 20%-30%, with the cosmic variance increasing with stellar mass. Footnote 6: [https://www.ph.unimelb.edu.au/~mrtrenti/cvc/CosmicVariance.html](https://www.ph.unimelb.edu.au/~mrtrenti/cvc/CosmicVariance.html) Uncertainties on redshifts and stellar masses can give rise to systematic offsets \(\sigma_{\rm sys}\) due to SED-fitting. In order to estimate \(\sigma_{\rm sys}\), we generate 1000 independent realizations of the SMF by sampling from the posterior distributions of physical properties derived with BAGPIPES and calculate the variance of the number densities from these realisations. This provides the SMF and uncertainty \(\sigma_{\rm sys}\) on the SMF due to the uncertainties on redshifts and masses. The final uncertainty \(\sigma_{\rm tot}\) of the SMF is the quadrature addition of the Poisson uncertainty, cosmic variance, and the systematic uncertainty, calculated via Equation 4: \[\sigma_{\rm tot}^{2}=\sigma_{\rm N}^{2}+\sigma_{\rm cv}^{2}+\sigma_{\rm sys}^{2}. \tag{4}\] In the absence of detections, upper limits are calculated as the right confidence interval of the Poisson distribution. ### SMFs at \(3<z<8\) Figure 8 and Tables 2 and 3 present the SMFs of our sample in three redshift ranges: \(3<z<4\), \(4<z<6\) and \(6<z<8\), calculated using the method outlined in the previous section. In order to determine the previously _missed_ fraction of the SMF, we compare our dust obscured SMFs to the pre-JWST total SMFs from Weaver et al. (2022a), McLeod et al. (2021) and Stefanon et al. (2021), derived from ground- and space-based observations. In particular, in deriving quantitative estimates of the missed fraction of the SMF, we compare our SMFs to the Schechter function fits from the Weaver et al. (2022a) study (see Section 6.2 for a discussion on Schechter fits vs. measured values of the SMF in Weaver et al. 2022a). We also compare our SMFs to model dust-obscured SMFs from Long et al. (2022), which are derived from semi-empirical simulations of dusty star-forming galaxies (DSFGs). The left panel of Figure 8 shows our SMF at \(3<z<4\) in comparison with Weaver et al. (2022a) (at \(z\sim 3.0-3.5\)) and McLeod et al. (2021) (at \(z\sim 3.25\)). At all masses, the SMF of our sample lies below the pre-JWST SMF from Weaver et al. (2022a) (based on COSMOS2020 observations; see Weaver et al. 2022b) and McLeod et al. (2021) (based on HST and ground-based observations). It deviates the most at the low-mass end but comes closest to the pre-JWST study at \(\rm log~{}M_{\star}/M_{\odot}\sim 10.5\), constituting close to 40% of the pre-JWST SMF from Weaver et al. (2022a) at this mass - dusty galaxies therefore significantly contribute to the SMF at the high-mass end, and this suggests that a sizeable fraction of galaxies at the high-mass end have been missing from our galaxy census in this epoch. Further, above \(\rm log~{}M_{\star}/M_{\odot}\sim 9.5\), our SMF is significantly higher than the model SMF from Long et al. (2022) (at \(z\sim 3.0-3.5\)), with the difference being most pronounced at \(\rm log~{}M_{\star}/M_{\odot}\sim 10.5\). Therefore, we could be seeing an emergent population of main-sequence dusty galaxies that are distinct from the widely studied DSFG population that are typically more strongly star-forming (and which the Long et al. 
(2022) simulation is based on). These results indicate that a significant population of obscured galaxies are prevalent at this redshift range. \begin{table} \begin{tabular}{c c c} \(\log{\rm M_{\star}}/{\rm M_{\odot}}\) & \(\Phi\) [\(10^{-5}\) Mpc\({}^{-3}\)dex\({}^{-1}\)] \\ & \(3<z<4\) & \(4<z<6\) \\ \hline \hline 9.5 & \(4.10^{+5.06}_{-1.24}\) & \(2.39^{+1.15}_{-1.86}\) \\ 10.0 & \(14.74^{+5.15}_{-5.34}\) & \(3.35^{+2.15}_{-1.49}\) \\ 10.5 & \(17.21^{+6.60}_{-4.03}\) & \(1.91^{+8.84}_{-1.12}\) \\ 11.0 & \(3.28^{+2.83}_{-2.46}\) & \(0.48^{+2.12}_{-0.42}\) \\ 11.5 & \(<\)1.72 & \(<\)1.00 \\ \end{tabular} \end{table} Table 2: Stellar mass function values of massive and dusty galaxies from \(3<z<4\) and \(4<z<6\), as shown graphically in the first two panels of Figure 8. Uncertainties are the quadrature addition of Poissonian noise, cosmic variance, and systematic uncertainties. \begin{table} \begin{tabular}{c c} \(\log{\rm M_{\star}}/{\rm M_{\odot}}\) & \(\Phi\) [\(10^{-5}\) Mpc\({}^{-3}\)dex\({}^{-1}\)] \\ & \(6<z<8\) \\ \hline \hline 9.625 & \(1.19^{+1.26}_{-0.81}\) \\ 10.375 & \(2.38^{+1.55}_{-1.19}\) \\ 11.0 & \(<\)1.10 \\ \end{tabular} \end{table} Table 3: Stellar mass function values of massive, dusty galaxies from \(6<z<8\), as shown graphically in the third panel of Figure 8. Uncertainties are the quadrature addition of Poissonian noise, cosmic variance, and systematic uncertainties. Figure 8: SMFs of our sample of massive and dusty galaxies in three redshift ranges: \(3<z<4,4<z<6\) and \(6<z<8\). Uncertainties shown are derived from Poisson statistics, cosmic variance and systematic uncertainties. Upper limits (downward arrows) are derived from the upper bounds of the Poisson uncertainty. We compare our results to the SMFs of the pre-JWST total galaxy population from Weaver et al. (2022a) (black dashed line), derived from COSMOS2020 observations (Weaver et al., 2022b) and to the model SMFs from Long et al. (2022) (solid grey line) derived from semi-empirical simulations for dusty star-forming galaxies. The SMF at \(3<z<4\) is additionally compared to McLeod et al. (2021) (black dotted line), derived from ground-based observations, and the \(4<z<6\) and \(6<z<8\) SMFs are compared to Stefanon et al. (2021) at \(z=5\) and \(z=7\) (black dotted line), derived from first and Spitzer imaging. At \(3<z<4\), the SMF of our sample constitutes \(\sim 50\%\) of the pre-JWST SMF from Weaver et al. (2022a) at \(\log{\rm M_{\star}}/{\rm M_{\odot}}=10.5\); at \(4<z<6\), the obscured SMF becomes comparable to the pre-JWST SMF Weaver et al. (2022a) at \(\log{\rm M_{\star}}/{\rm M_{\odot}}=11.4\). At \(6<z<8\), the obscured SMF exceeds the pre-JWST SMF Weaver et al. (2022a) at \(\log{\rm M_{\star}}/{\rm M_{\odot}}=10.375\). At both \(3<z<4\) and \(4<z<6\), our SMFs dominate the dusty model SMF predicted by Long et al. (2022) at \(\log{\rm M_{\star}}/{\rm M_{\odot}}>9.5\). ance, making it challenging to comment on SMF properties without a larger sample. ### Integrated stellar mass density The cosmic stellar mass density (SMD) is an efficient measure of stellar mass assembly. The total SMD is tightly coupled with the cosmic star-formation rate history, and thus could provide insights into early galaxy build-up such as previous epochs of star-formation and the stellar IMF of early stellar populations (Dickinson et al., 2003). 
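The stellar mass densities quoted for our sample below follow from integrating the stepwise SMFs above; as a minimal illustration (treating each bin in Table 2 as 0.5 dex wide), the \(3<z<4\) value is approximately recovered by a rectangle-rule sum:

```python
import numpy as np

# Table 2, 3 < z < 4: bin centres in log(M*/Msun) and Phi in 1e-5 Mpc^-3 dex^-1.
log_m_bins = np.array([9.5, 10.0, 10.5, 11.0])
phi = np.array([4.10, 14.74, 17.21, 3.28]) * 1e-5
dlogm = 0.5   # assumed bin width in dex

# SMD = sum over bins of Phi * M * dlogM.
smd = np.sum(phi * 10.0 ** log_m_bins * dlogm)
print(f"SMD(3<z<4) ~ {smd / 1e5:.1f} x 10^5 Msun Mpc^-3")   # ~51.6, matching the text
```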
Multiple works have observationally tracked the evolution of the SMD (Stark et al., 2009; Gonzalez et al., 2010; Davidzon et al., 2017; McLeod et al., 2021; Weaver et al., 2022), reaching up to \(z\sim 8-10\) (e.g., Weaver et al., 2022; Stefanon et al., 2021). The observationally determined SMD, however, can be substantially affected if a significant population of high-mass galaxies has been missing in previous observations. This work in part aims to determine the fraction by which pre-JWST studies have underestimated the SMD. We integrate the measured SMFs presented in Section 5.2 in order to get an estimate of the SMD for our galaxy sample. We find that the SMD in units of \(\left[10^{5}\,\mathrm{M_{\odot}Mpc^{-3}}\right]\) is \(51.6^{+20.0}_{-14.3}\) at \(3<z<4\), \(7.5^{+6.8}_{-2.9}\) at \(4<z<6\) and \(4.6^{+2.8}_{-2.1}\) at \(6<z<8\). The large uncertainty estimates reflect the uncertainty in the stellar mass functions, where we are limited by sample size, especially at the high-mass end. In order to determine the missing SMD fraction in pre-JWST studies at the high-mass end, we compare our results with the Weaver et al. (2022) study. We integrate the Schechter function fits of the Weaver et al. (2022) mass functions (shown in Figure 8) at \(\log\mathrm{M_{\star}}/\mathrm{M_{\odot}}>9.25\) so as to perform a mass-consistent comparison with our sample. We find that at \(\log\mathrm{M_{\star}}/\mathrm{M_{\odot}}>9.25\), our sample constitutes \(24^{+9.7}_{-7.6}\)% of the Weaver SMD at \(3<z<4\) and \(22^{+20.2}_{-8}\)% at \(4<z<6\). At \(6<z<8\), we find a missed fraction of \(110^{+66}_{-51}\)%, effectively doubling the SMD at this epoch. Therefore, our results indicate that the SMD could have been significantly underestimated in pre-JWST studies; this being said, it is important to note that these results are sensitive to the assumed Schechter fits of the pre-JWST measurements from Weaver et al. (2022). In future studies, it will be imperative to include dust-obscured galaxies at the high-mass end in order to accurately trace stellar mass build-up in the early Universe.

## 6 Discussion

In this section, we discuss the results of our work in the context of similar studies conducted with JWST on dusty galaxies. We additionally discuss the abundance of massive galaxies that is suggested by our dust-obscured SMFs, and place this in the context of past work and future studies on galaxy censuses.

### Comparison of sample to recent literature in CEERS

JWST's pilot year has seen the output of a great amount of science, with several papers and teams already providing novel insights into obscured galaxies at \(z>3\) (e.g., Barrufet et al., 2023; Nelson et al., 2022; Perez-Gonzalez et al., 2023; Rodighiero et al., 2023; Labbe et al., 2023; Akins et al., 2023). Additionally, it has been shown that very dusty galaxies can sometimes contaminate extremely high redshift selections (e.g., Naidu et al., 2022; Zavala et al., 2023; Arrabal Haro et al., 2023). Here, we discuss our sample in comparison with some select studies in the CEERS field: Barrufet et al. (2023), Perez-Gonzalez et al. (2023), Labbe et al. (2023), and Naidu et al. (2022). Barrufet et al. (2023) studied HST-dark galaxies in the CEERS field, identifying massive, obscured galaxies at \(z>3\) and into the Epoch of Reionisation. Of the 30 HST-dark sources in their study, we identify 12 in our sample, likely due to the different colour selection. Our SMF results support the findings of Barrufet et al.
(2023) that suggest that a significant fraction of massive, obscured sources were previously missing from our galaxy census at \(z>3\). Perez-Gonzalez et al. (2023) studied HST-dark and -faint galaxies in the first four NIRCam pointings of the CEERS field, using a selection based on F150W-F356W colours. We find \(\sim\)half of their sources in our sample (65 out of 138 of their galaxies). Comparing their total sample to our study, we find similar redshift ranges (\(\langle z\rangle=3.68^{+1.60}_{-1.00}\) in their study, \(\langle z\rangle=3.46^{+2.04}_{-1.35}\) in ours) and stellar masses (\(\langle\log\mathrm{M_{\star}}/\mathrm{M_{\odot}}\rangle=10.20^{+0.46}_{-0.73}\) in their study, \(\langle\log\mathrm{M_{\star}}/\mathrm{M_{\odot}}\rangle=10.15^{+0.43}_{-0.50}\) in ours). We note, however, that our redshift distribution has a longer high-end tail, where we find more sources at \(z\gtrsim 6\) than the Perez-Gonzalez et al. (2023) study. This is most likely because we use the longer wavelength F444W filter in our colour selection, where we are possibly picking up the [OIII] line at \(z\sim 7\). Using a selection based on blue rest-UV and red rest-optical colours, Labbe et al. (2023) found six massive galaxies (\(\mathrm{M_{\star}}/\mathrm{M_{\odot}}>10^{10}\)) at \(7.4<z<9.1\). We identify two of their sources in our sample (IDs 48444 and 67066). We most likely do not select the remaining four sources in Labbe et al. (2023) due to their blue rest-UV colour selection. Additionally, one of the Labbe et al. (2023) sources originally identified as a massive galaxy at \(z=8.13\) has now been spectroscopically determined to be a likely AGN candidate at \(z=5.64\) (Kocevski et al., 2023); we do not find this source in our sample. Naidu et al. (2022) proposed a luminous candidate \(z\approx 17\) or \(z\approx 5\) galaxy, dubbed "Schrodinger's Galaxy", now confirmed to be an obscured source at \(z=4.912\pm 0.001\) (Arrabal Haro et al., 2023). We find this galaxy in our sample (ID 81918) at \(z=4.79^{+0.05}_{-0.08}\) with a dust attenuation of \(\mathrm{A_{V}}=1.74^{+0.11}_{-0.17}\) mag. Such studies show that there is increasing evidence for a population of massive, obscured galaxies at high redshifts, close to and into the Epoch of Reionisation (see also Fudamoto et al., 2021).

### Abundance of massive galaxies at high-mass end of SMFs

The SMFs of JWST-detected dust-obscured galaxies in our study point toward an abundance of galaxies at the massive end of the pre-JWST SMF, possibly leading to an excess of the galaxy population with respect to the pre-JWST determined SMF. This excess with respect to the Schechter fit at the massive end of the SMFs at \(z\sim 3-5\) was shown in Weaver et al. (2022) with a sample of \(2\mu\)m-selected sources from the COSMOS2020 dataset, with hints of this population being star-forming galaxies with significant dust content. Our results qualitatively reinforce the conclusion that dust-obscured galaxies contribute significantly to the high-mass end of the SMF. In particular, our study suggests that this abundance is due to dust-obscured galaxies with largely main-sequence star-forming properties. In future galaxy censuses, it will be necessary to explore how the SMF measurements at the high-mass end compare with the Schechter formalism used to describe galaxy evolution.
## 7 Summary and conclusion In this work, we used data from the JWST/CEERS survey (Finkelstein et al., 2022, 2022) in the CANDELS/EGS field to identify red, optically-faint galaxies at high redshifts in order to determine the obscured stellar mass function at various epochs in the first two billion years of the Universe's history. Some key results are summarised in the following: * Using a colour criterion designed to select red, optically-faint galaxies, we show that we efficiently select massive and dusty galaxies (\((\log{\rm M_{\bullet}/M_{\odot}})=10.15^{+0.43}_{-0.50}\) and \((\rm A_{V})=2.71^{+0.88}_{-0.91}\) mag) with a majority lying at \(z>3\) (see Figures 3 and 4). * Our sample contains predominantly star-forming galaxies, largely lying on the star-forming main-sequence at \(z\lesssim 6\). They therefore represent a "normal" population of galaxies, without extreme starburst properties (see Figures 5 and 6). Our sample overlaps with the Wang et al. (2019) sample at the high-mass end and the Barrufet et al. (2023) sample at the low-mass end, showing that our sample of red galaxies has similar star-forming properties as HST-dark galaxies (see Figure 7). * Our analysis of the obscured galaxy SMF (see Figure 8) shows that in the pre-JWST era, we have missed a significant fraction of galaxies, particularly at the high-mass end of the SMF and at redshifts of \(z>3\). The SMF of red, optically-faint galaxies make up close to 40% of the previously-measured SMF (from the Schechter fits in Weaver et al. 2022) at a mass of \(\log{\rm M_{\bullet}/M_{\odot}}\sim 10.5\) in the \(3<z<4\) epoch, and becomes comparable to the previously-measured SMF at \(\log{\rm M_{\bullet}/M_{\odot}}\sim 11.0\) at \(4<z<6\). At \(6<z<8\), our SMF overtakes the pre-JWST SMF around \(\log{\rm M_{\bullet}/M_{\odot}}\sim 10.375\). * Our results at \(6<z<8\) highlight the importance of accounting for massive, obscured galaxies in the final stages of the Epoch of Reionisation. * Our SMFs show a rapid evolution at \(z\sim 4\) at masses of \(9.5\lesssim\log{\rm M_{\bullet}/M_{\odot}}\lesssim 11.0\), suggesting the onset of rapid dust-obscured stellar mass growth in this epoch. * The derived stellar mass density of our sources at \(\log{\rm M_{\bullet}/M_{\odot}}>9.25\) suggests that the missing SMD fraction could be a factor of \(\sim\)20-25% at \(z\sim 3-6\). At \(z\sim 6-8\), we find a missing fraction of \(\sim\)110% at \(z\sim 6-8\), effectively doubling the SMD at this epoch. These findings point towards an emergent population of massive, obscured galaxies from \(z\sim 3\) up to and into the Epoch of Reionisation, supporting the findings of early JWST studies (e.g., Barrufet et al. 2023; Labbe et al. 2023; Akins et al. 2023). The strong evolution of the SMF at \(z\sim 4\) suggests that this is a period of rapid stellar mass growth in obscured galaxies. Interestingly, \(z\sim 4\) is also roughly when the obscured SFRD is thought to overtake the un-obscured SFRD, dominating the cosmic star-formation history at later epochs (e.g., Zavala et al. 2021; Bouwens et al. 2020, 2021). Our results indicate that obscured stellar mass assembly occurred as early as \(z\sim 8\), suggesting that the build-up of dusty galaxies could begin close to 600 Myrs after the Big Bang. To further explore the beginning of obscured stellar mass assembly and push the observable redshift boundary farther back, studying the SMF by collating all public JWST surveys is critical. 
Including surveys such as COSMOS-Web (Casey et al., 2022), PRIMER (Dunlop et al., 2021) and UNCOVER (Bezanson et al., 2022) will satisfy the need of the hour: larger sample sizes. These surveys, and others to come with JWST, will surely result in us establishing a complete census of the massive, dust-obscured galaxy population in the early Universe. ## Acknowledgements The work presented in this paper is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1345. This work has received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00072, as well as from the Swiss National Science Foundation (SNSF) through project grant 200020_207349. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. The authors would like to thank Rui Marques-Chaves, Ivan Kramarenko and Damien Korber for useful discussions that helped improve the quality of this work. RG gratefully acknowledges support from the Inlaks Shivdasani Foundation. YF acknowledges support from NAOJ ALMA Scientific Research Grant number 2020-16B and support by JSPS KAKENHI Grant Number JP23K13149. VG gratefully acknowledges support by the ANID BASAL project FB210003 and from ANID FONDECYT Regular 1221310. RPN: Support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51515.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. MS acknowledges support from the CIDE-GENT/2021/059 grant, from project PID2019-109592GB-I00/AEI/10.13039/501100011033 from the Spanish Ministerio de Ciencia e Innovacion - Agencia Estatal de Investigacion. MS also acknowledges the financial support from the MCIN with funding from the European Union NextGenerationEU and Generalitat Valenciana in the call Programa de Planes Complementarios de l+D+i (PRTRR 2022) Project (VAL-JPAS), reference ASFAE/2022/025. The work of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Telescope facilities: JWST (NIRCam), HST (ACS and WFC3) Several publicly available softwares have facilitated this work. We extend our thanks to the authors of the following softwares: IPython (Perez & Granger, 2007), Jupyter (Kluyer et al., 2016), astropy (Astropy Collaboration et al., 2013, 2018), matplotlib (Hunter, 2007), numpy (Oliphant, 2015), photutils (Bradley et al., 2022), scipy (Virtanen et al., 2020), EAZY (Brammer et al., 2008), BAGPIPES (Carnall et al., 2018), GalfitM (Haussler et al., 2013; Vika et al., 2015), grizli (Brammer, 2018), SExtractor (Bertin & Arnouts, 1996), pypher (Boucaud et al., 2016), extinction (Fitzpatrick & Massa, 2007), GLASTAR2 (Carrasco et al., 2018; Leethochawalit et al., 2022). ## Data availability The JWST and HST raw data products used in this work are available via the Mikulski Archive for Space Telescopes ([https://mast.stsci.edu](https://mast.stsci.edu)). 
The combined mosaics are available on github ([https://github.com/gbrammer/grizli/blob/master/docs/grizli/image-release-v6.rst](https://github.com/gbrammer/grizli/blob/master/docs/grizli/image-release-v6.rst)). Additional data presented in this work will be made available by the authors upon request. ## References * Akins et al. (2023) Akins H. B., et al., 2023, arXiv e-prints, p. arXiv:2304.12347 * Alcalde Pampllega et al. (2019) Alcalde Pampllega B., et al., 2019, ApJ, 876, 135 * Anderson & King (2000) Anderson J., King I. R., 2000, PASP, 112, 1360 * Arrabal Haro et al. (2023) Arrabal Haro P., et al., 2023, arXiv e-prints, p. arXiv:2303.15431 * Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33 * Astropy Collaboration et al. (2018) Astropy Collaboration et al., 2018, AJ, 156, 123 * Barro et al. (2023) Barro G., et al., 2023, arXiv e-prints, p. arXiv:2305.14418 * Barrufet et al. (2023) Barrufet L., et al., 2023, MNRAS, 522, 449 * Bertin & Arnouts (1996) Bertin E., Arnouts S., 1996, A&AS, 117, 393 * Bezanson et al. (2022) Bezanson R., et al., 2022, arXiv e-prints, p. arXiv:2212.04026 * Bisigello et al. (2023) Bisigello L., et al., 2023, arXiv e-prints, p. arXiv:2302.12270 * Boucaud et al. (2016) Boucaud A., Bocchi M., Abergel A., Orieux F., Dole H., Hadj-Youcef M. A., 2016, A&A, 596, A63 * Bouwens et al. (2008) Bouwens R. J., Illingworth G. D., Franx M., Ford H., 2008, ApJ, 686, 230 * Bouwens et al. (2015) Bouwens R. J., Illingworth G. D., Oesch P. A., Caruana J., Holwerda B., Smit R., Wilkinson S., 2015, ApJ, 811, 140 * Bouwens et al. (2020) Bouwens R., et al., 2020, ApJ, 902, 112 * Bouwens et al. (2021) Bouwens R. J., et al., 2021, AJ, 162, 47 * Boylan-Kokchin (2023) Boylan-Kokchin M., 2023, Nature Astronomy, * Bradley et al. (2022) Bradley L., et al., 2022, astropy/photutils: 1.5.0, doi:10.5281/zenodo.6825092, [https://doi.org/10.5281/zenodo.6825092](https://doi.org/10.5281/zenodo.6825092) * Brammer (2018) Brammer G., 2018, Gkrammer/Grizli: Preliminary Release, Zenodo, doi:10.5281/zenodo.1146905 * Brammer et al. (2008) Brammer G. B., van Dokkum P. G., Coppi P., 2008, ApJ, 686, 1503 * Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000 * Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 * Caputi et al. (2012) Caputi K. I., et al., 2012, ApJ, 750, L20 * Caputi et al. (2015) Caputi K. I., et al., 2015, ApJ, 810, 73 * Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245 * Carnall et al. (2018) Carnall A. C., McLure R. J., Dunlop J. S., Dave R., 2018, MNRAS, 480, 4379 * Carnall et al. (2023) Carnall A. C., et al., 2023, arXiv e-prints, p. arXiv:2301.11413 * Carrasco et al. (2018) Carrasco D., Trenti M., Mutch S., Oesch P. A., 2018, Publ. Astron. Soc. Australia, 35, e022 * Casey et al. (2022) Casey C. M., et al., 2022, arXiv e-prints, p. arXiv:2211.07865 * Chabrier (2003) Chabrier G., 2003, PASP, 115, 763 * Charlot & Fall (2000) Charlot S., Fall S. M., 2000, ApJ, 539, 718 * Ciesla et al. (2017) Ciesla L., Elbaz D., Fensch J., 2017, A&A, 608, A41 * Davidzon et al. (2017) Davidzon I., et al., 2017, A&A, 605, A70 * Dekel et al. (2023) Dekel A., Sarkar K. C., Birnboim Y., Mandelker N., Li Z., 2023, MNRAS, 523, 3201 * Dickinson et al. (2003) Dickinson M., Papovich C., Ferguson H. C., Budavari T., 2003, ApJ, 587, 25 * Dunlop et al. (2021) Dunlop J. 
S., et al., 2021, PRIMER: Public Release IMaging for Extragalactic Research, JWST Proposal. Cycle 1, ID. #1837 * Faist et al. (2020) Faist A. L., et al., 2020, ApJS, 247, 61 * Finkelstein et al. (2015) Finkelstein S. L., et al., 2015, ApJ, 814, 95 * Finkelstein et al. (2022) Finkelstein S. L., et al., 2022, arXiv e-prints, p. arXiv:2211.05792 * Finkelstein et al. (2022b) Finkelstein S. L., et al., 2022b, ApJ, 940, L55 * Fitzpatrick & Massa (2007) Fitzpatrick E. L., Massa D., 2007, ApJ, 663, 320 * Franco et al. (2018) Franco M., et al., 2018, A&A, 620, A152 * Fudamoto et al. (2021) Fudamoto Y., et al., 2021, Nature, 597, 489 * Gardner et al. (2023) Gardner J. P., et al., 2023, PASP, 135, 068001 * Gomez-Guijarro et al. (2023) Gomez-Guijarro C., et al., 2023, arXiv e-prints, p. arXiv:2304.08517 * Gonzalez et al. (2010) Gonzalez V., Labbe I., Bouwens R. J., Illingworth G., Franx M., Kriek M., Brammer G. B., 2010, ApJ, 713, 115 * Gould et al. (2023) Gould K. M. L., et al., 2023, arXiv e-prints, p. arXiv:2302.10934 * Greene et al. (2023) Greene J. E., et al., 2023, arXiv e-prints, p. arXiv:2309.05714 * Grogin et al. (2011) Grogin N. A., et al., 2011, ApJS, 197, 35 * Huisler et al. (2013) Huisler B., et al., 2013, MNRAS, 430, 330 * Huang et al. (2011) Huang J. S., Zheng X. Z., Rigopoulou D., Magdis G., Fazio G. G., Wang T., 2011, ApJ, 742, L13 * Hunter (2007) Hunter J. D., 2007, Computing in Science and Engineering, 9, 90 * Kocevski et al. (2023) Kocevski D. D., et al., 2023, arXiv e-prints, p. arXiv:2302.00012 * Koekemoer et al. (2011) Koekemoer A. M., et al., 2011, ApJS, 197, 36 * Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231 * Labbe et al. (2013) Labbe I., et al., 2013, ApJ, 777, L19 * Labbe et al. (2023a) Labbe I., et al., 2023a, arXiv e-prints, p. arXiv:2306.07320 * Labbe et al. (2023b) Labbe I., et al., 2023b, Nature, 616, 266 * Leethochawalit et al. (2022) Leethochawalit N., Treti M., Morishita T., Roberts-Borsani G., Treu T., 2022, MNRAS, 509, 5836 * Long et al. (2022) Long A. S., Casey C. M., Lagos C. d. P., Lambrides E. L., Zavala J. A., Champagne J., Cooper O. R., Cooray A. R., 2022, arXiv e-prints, p. arXiv:2211.02072 * Madau & Dickinson (2014) Madau P., Dickinson M., 2014, ARA&A, 52, 415 * Manning et al. (2022) Manning S. M., et al., 2022, ApJ, 925, 23 * Mason et al. (2023) Mason C. A., Trenti M., Treu T., 2023, MNRAS, 521, 497 * Matthee et al. (2023) Matthee J., et al., 2023, arXiv e-prints, p. arXiv:2306.05448 * Maxwell (2011) Maxwell E. A., 2011, arXiv e-prints, p. arXiv:1102.0822 * McLeod et al. (2021) McLeod D. J., McLure R. J., Dunlop J. S., Cullen F., Carnall A. C., Duncan K., 2021, MNRAS, 503, 4413 * Menci et al. (2022) Menci N., Castellano M., Santini P., Merlin E., Fontana A., Shankar F., 2022, ApJ, 938, L5 * Naidu et al. (2022) Naidu R. P., et al., 2022, arXiv e-prints, p. arXiv:2208.02794 * Nelson et al. (2022) Nelson E. J., et al., 2022, arXiv e-prints, p. arXiv:2208.01630 * Oesch Yamaguchi Y., et al., 2019, ApJ, 878, 73 * Zavala et al. (2021) Zavala J. A., et al., 2021, ApJ, 909, 165 * Zavala et al. (2023) Zavala J. A., et al., 2023, ApJ, 943, L9 ## Appendix A AGN identification and effect on the SMF Given that our study focuses on star-forming galaxies, it is of importance to remove AGN from our sample. We identify and remove AGN candidates, the so-called little red dots (LRDs) as described in Section 2.4. Figure 13 shows the postage stamps and SED of one such AGN candidate, galaxy 6583. 
This is a very compact source, as is characteristic of LRDs, with a red slope beyond \(2\mu\)m and a blue slope below this. As shown, the short-wavelength end of the slope does not fit well with BAGPIPES, possibly because BAGPIPES cannot perform multi-component SED-fitting and additionally does not contain AGN templates. This results in inaccurate photometric redshifts and derived physical properties of LRDs. Further, Figure 14 shows the effect of LRDs on the stellar mass function of our sample. While the SMF of the full sample overlaps neatly with the AGN-cleaned sample at \(3<z<4\), the difference between SMFs is more pronounced at \(4<z<6\) and differs the most at \(6<z<8\). This highlights the importance of addressing the presence of LRDs in our sample, so as not to overestimate the mass functions and stellar mass density at high-redshifts. ## Appendix B Physical properties of galaxies from SED fitting Section 3 describes the SED-fitting performed with BAGPIPES. Here, we present the derived physical properties for the 168 galaxies in our sample: 148 star-forming galaxies and 20 AGN candidates. Table B1 presents the IDs, RA, Dec, photometric redshifts, stellar masses, SFRs and dust attenuations of all galaxies. AGN candidates are indicated by a \(\dagger\).
2306.11465
Safe, Efficient, Comfort, and Energy-saving Automated Driving through Roundabout Based on Deep Reinforcement Learning
Traffic scenarios in roundabouts pose substantial complexity for automated driving. Manually mapping all possible scenarios into a state space is labor-intensive and challenging. Deep reinforcement learning (DRL) with its ability to learn from interacting with the environment emerges as a promising solution for training such automated driving models. This study explores, employs, and implements various DRL algorithms, namely Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO) to instruct automated vehicles' driving through roundabouts. The driving state space, action space, and reward function are designed. The reward function considers safety, efficiency, comfort, and energy consumption to align with real-world requirements. All three tested DRL algorithms succeed in enabling automated vehicles to drive through the roundabout. To holistically evaluate the performance of these algorithms, this study establishes an evaluation methodology considering multiple indicators such as safety, efficiency, and comfort level. A method employing the Analytic Hierarchy Process is also developed to weigh these evaluation indicators. Experimental results on various testing scenarios reveal that the TRPO algorithm outperforms DDPG and PPO in terms of safety and efficiency, and PPO performs best in terms of comfort level. Lastly, to verify the model's adaptability and robustness regarding other driving scenarios, this study also deploys the model trained by TRPO to a range of different testing scenarios, e.g., highway driving and merging. Experimental results demonstrate that the TRPO model trained on only roundabout driving scenarios exhibits a certain degree of proficiency in highway driving and merging scenarios. This study provides a foundation for the application of automated driving with DRL in real traffic environments.
Henan Yuan, Penghui Li, Bart van Arem, Liujiang Kang, Yongqi Dong
2023-06-20T11:39:55Z
http://arxiv.org/abs/2306.11465v1
# Safe, Efficient, Comfort, and Energy-saving Automated Driving through Roundabout Based on Deep Reinforcement Learning ###### Abstract Traffic scenarios in roundabouts pose substantial complexity for automated driving. Manually mapping all possible scenarios into a state space is labor-intensive and challenging. Deep reinforcement learning (DRL) with its ability to learn from interacting with the environment emerges as a promising solution for training such automated driving models. This study explores, employs, and implements various DRL algorithms, namely Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO) to instruct automated vehicles' driving through roundabouts. The driving state space, action space, and reward function are designed. The reward function considers safety, efficiency, comfort, and energy consumption to align with real-world requirements. All three tested DRL algorithms succeed in enabling automated vehicles to drive through the roundabout. To holistically evaluate the performance of these algorithms, this study establishes an evaluation methodology considering multiple indicators such as safety, efficiency, and comfort level. A method employing the Analytic Hierarchy Process is also developed to weigh these evaluation indicators. Experimental results on various testing scenarios reveal that the TRPO algorithm outperforms DDPG and PPO in terms of safety and efficiency, and PPO performs best in terms of comfort level. Lastly, to verify the model's adaptability and robustness regarding other driving scenarios, this study also deploys the model trained by TRPO to a range of different testing scenarios, e.g., highway driving and merging. Experimental results demonstrate that the TRPO model trained on only roundabout driving scenarios exhibits a certain degree of proficiency in highway driving and merging scenarios. This study provides a foundation for the application of automated driving with DRL in real traffic environments. ## I Introduction Automated vehicles (AVs) promise to mitigate a myriad of uncontrollable factors associated with human operations, including human error and subjective judgment. The technology underpinning automated driving constitutes an amalgamation of multiple disciplines, with the system primarily composed of perception, planning, decision-making, and control modules. The decision-making module, governing actions such as throttle and braking control, vehicle steering, and signal light operation, is particularly critical. Its task is not only to define the driving trajectory but also to respond to unexpected scenarios, making it the key element. Deep reinforcement learning (DRL), an intersection of deep learning's capabilities of capturing features and reinforcement learning's decision-making aptitude, has been widely acclaimed in the field of automated driving. It has been witnessed that DRL even outperformed human decision-making in numerous applications. A typical example would be AlphaGo [1], the first artificial intelligence to defeat a human professional Go player, employing a DRL algorithm. Through six million rounds of learning and environmental interaction, AlphaGo honed its capability to triumph over world champions. The existing studies in the domain of automated driving with DRL have broadly addressed various control tasks and driving scenarios. ### DRL for Different Driving Tasks DRL has been deployed in a variety of control tasks regarding driving. Sallab et al. 
[2] employed DRL to investigate lane-keeping tasks, utilizing Deep Q-Networks (DQN) for discrete action control and the Deep Deterministic Actor-Critic (DDAC) approach for continuous actions. Wang et al. [3] delved into lane-changing tasks, highlighting the capability of DRL to manage anomalous scenarios. The same research group also integrated Long Short-Term Memory (LSTM) with DQN to tackle ramp merging [4]. Their architecture accounts for the influence of interactive environments on long-term rewards to establish the optimal policy. Ngai and Yung [5] utilized DRL to train AVs for overtaking maneuvers. Their findings suggest that Q-learning enables the agent to make judicious decisions, preventing collisions with surrounding objects. Moreover, the agent can complete overtaking within the stipulated time, maintaining a stable heading angle during the process. Moreira [6] conducted tests on several DRL algorithms, e.g., Soft Actor-Critic (SAC), Deep Deterministic Policy Gradient (DDPG), and Twin Delay Deep Deterministic Policy Gradient (TD3), for automated parking. The proposed reward function was determined by the angle between the agent's driving direction and the correct direction. Results indicate that the TD3 algorithm, with its rapid convergence rate, is most suited to the automated parking scenario. ### DRL for Various Driving Scenarios Diverse automated driving scenarios have been studied using DRL. Fayjie et al. [7] utilized DQN for decision-making to train car driving in urban environments. They used _Unity_ to design a city-like structure with buildings, trees, and street lights, and utilized lidar and camera data as the state space validating neural networks' effectiveness in such settings. Konstantinos et al. [8] examined automated driving in highway scenarios. Tram et al. [9] applied DQN to predict vehicle trajectories at intersections, highlighting the superior success rate of the Deep Recurrent Q Network. Kamran et al. [10] and Jiang et al. [11] focused on driving through unsignalized intersections, using DQN and progressive value-expectation estimation multi-agent cooperative control (PVE-MCC), respectively. Chen et al. [12] addressed on-ramp merging with a multi-agent DRL, considering mixed traffic conditions of human-driven and automated vehicles running together. ### _Roundabout Driving_ When it comes to the roundabout, which is integral to urban traffic infrastructure, Elvik [13] has shown that roundabouts can effectively reduce the probability of serious traffic accidents. However, the intricate interweaving with other road users and exit selection at roundabouts pose significant challenges to automated driving. Given the impracticality of manually recording all possible scenarios in the state space, DRL emerges as a suitable approach for automated driving decision-making through roundabouts. However, very limited studies have tackled roundabouts by DRL, only three were identified, i.e., Garcia et al. [14] used a Q-Learning algorithm, Capasso et al. [15] utilized an Asynchronous Advantage Actor-Critic (A3C)-based vehicle mobility planning module, and Wang et al. [16] proposed SAC algorithm combined with interval prediction and self-attention mechanism for roundabouts driving. There are still noticeable gaps in the complex roundabout driving scenarios, especially when it comes to employing and comparing different DRL algorithms in the context of mixed traffic of human-driven and AVs and considering integrated rewards. 
Furthermore, the domain adaptation possibilities, i.e., evaluating the feasibility of transferring the algorithms trained on roundabout driving to other scenarios (e.g., highway driving), remain unexplored. As a preliminary exploration, this study attempts to tackle these critical research gaps and tries to harness DRL to facilitate the navigation and control of AVs through roundabouts, underpinned by a carefully designed reward function that accounts for the unique challenges presented in this complex traffic scenario. For that, an integrated reward function considering safety, efficiency, comfort level, and energy consumption is developed. Three state-of-the-art DRL algorithms, i.e., DDPG, Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO), are then implemented. Experiments show that TRPO outperforms the other DRLs in tackling automated driving through roundabouts, and the ablation study also demonstrated the transferability of the developed model in handling other driving scenes. ## II Methodology ### _System Architecture_ This study aims to develop safe, efficient, comfortable, and energy-saving driving models for AVs passing through roundabouts. Different DRL algorithms including DDPG, TRPO, and PPO were implemented and tested using a well-designed reward function. The overall proposed system architecture is shown in Figure 1 (Figure 1: Illustration of the overall system architecture). Deep reinforcement learning was implemented through the PyTorch deep learning framework and the _highway-env_ [17] simulation platform. The DRL algorithms are instantiated via the _stable-baselines3_ [18] reinforcement learning library. _Highway-env_ is a Python library comprising a collection of environments for automated driving, encompassing several typical road scenarios: highway, merge, roundabout, parking, intersection, and racetrack. Predominantly, this study trained and tested the DRL models on the roundabout scenario, while scenarios such as highway merging are used to test and verify the model's versatility. ### _DRL_ DRL is a specialized machine learning algorithm designed to aid agents in decision-making. Through interactive training between the agent and its environment, DRL can enhance the agent's decision-making capacities. Specifically, in each training timestep, the agent performs an action, after which the environment determines the agent's state and provides a reward value. This reward value assesses the agent's state at that timestep. Subsequently, the agent adjusts the policy network's parameters by computing or estimating the cumulative reward value. This process allows the model to maximize the achievable reward, optimize the decision-making strategy, and determine subsequent actions. This study implements, customizes, and compares three DRLs, i.e., DDPG, TRPO, and PPO, regarding roundabout driving. ### _Environment, State, and Action Settings_ #### II-C1 Environment The roundabout environment, a subclass of _AbstractEnv_ in the _highway-env_ library, simulates a vehicle navigating through roundabouts. It allows customization of road shape, parameters, and vehicle behavior, as well as the reward function and termination conditions of reinforcement learning. The driving task for the trained agent is to achieve safe, quick, and efficient driving while avoiding collisions and adhering to a predetermined route as closely as possible. To train the autonomous vehicle's interaction capabilities with surrounding traffic, several vehicles are randomly added to the roundabout environment. 
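To make the setup concrete, the following is a minimal sketch of how such a roundabout environment might be instantiated with _highway-env_ and _gymnasium_. The configuration keys shown are standard _highway-env_ options, but exact key names and the reset/step signatures differ slightly between library versions, so this should be read as an illustrative sketch rather than the authors' exact configuration; the behavior of the surrounding vehicles spawned by this environment is described next.

```python
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers the highway-env environments

# Create the roundabout scenario and switch to continuous throttle/steering control.
env = gym.make("roundabout-v0")
env.unwrapped.configure({
    "action": {"type": "ContinuousAction"},   # throttle and steering, each in [-1, 1]
    "observation": {"type": "Kinematics"},    # ego vehicle + surrounding vehicles' states
    "duration": 20,                           # illustrative episode length in seconds
})
obs, info = env.reset()                       # reset applies the new configuration
```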
These vehicles, defined by the _highway-env_ library, can demonstrate simple driving behavior and collision avoidance in low-density traffic. A schematic diagram of the roundabout with surrounding vehicles is shown in Figure 2 (Figure 2: Roundabout with surrounding vehicles). #### II-C2 State space In DRL, the state space encapsulates observable data used by the agent to determine corresponding actions via neural network processing. Regarding automated driving, the state space consists of real-time information, such as vehicle speed, acceleration, heading angle, and surrounding traffic conditions. This study adopts seven aspects of features to represent the state space, as shown in Table 1. #### II-C3 Action space The highway-env environment offers three types of action spaces: _Discrete Action_, _Discrete Meta Action_, and _Continuous Action_. This study employs a hybrid approach using both discrete and continuous actions to train distinct driving tasks. _Discrete Meta Action_ discretizes continuous vehicle behavior into meta-behaviors, such as acceleration, deceleration, lane changes, and maintaining speed. Each meta-action, defined by its duration and a sequence of basic actions, facilitates efficient exploration and learning of complex behaviors while preserving action continuity and interpretability. _Continuous Action_ involves throttle and steering controls. The throttle ranges from -1 (maximum brake) to 1 (maximum throttle), and the steering control ranges from -1 (maximum left turn) to 1 (maximum right turn). In this study, the continuous action space is mainly used. Its actions need to ensure that the vehicle can drive along the lane on the roundabout, reach the destination exit, and have the ability to avoid other vehicles on the road. ### _Reward Function_ In this study, a reward function is crafted specifically for autonomous vehicles navigating roundabouts. The effectiveness of the reward function is determined by evaluating driving safety, efficiency, comfort, and energy consumption through the analysis of performance indicators. #### II-D1 Safety rewards Vehicle safety is paramount in autonomous driving, hence it accounts for substantial weight in the reward function. In the roundabout driving context, safety is primarily influenced by two factors, i.e., lane-center positioning and time-to-collision (TTC). The lane-centering reward, indicated by \(R_{LC}\), can be computed as \[R_{LC}=1-\left(\frac{l_{lateral}}{l_{width}/2}\right)^{2} \tag{1}\] where \(l_{lateral}\) is the vehicle's offset to the center of the lane, and \(l_{width}\) is the lane width. The TTC reward is computed as \[R_{TTC}=1-\frac{3}{TTC} \tag{2}\] If the Time-to-Collision (TTC) exceeds 3 seconds, the TTC reward will fall within the range of 0 to 1. A larger TTC results in a reward closer to 1. Conversely, when TTC is less than 3, the reward becomes negative. And in the event of an imminent collision, the TTC reward will approach \(-\infty\). The total safety reward is a weighted sum of the lane center reward and the TTC reward. The TTC reward constitutes 70% of \(R_{safe}\), while the lane center reward makes up the remaining 30%. The total safety reward can be expressed as: \[R_{safe}=0.7\times R_{TTC}+0.3\times R_{LC} \tag{3}\] #### II-D2 Efficiency rewards The efficiency reward motivates the AV to move forward, avoiding stationary actions. It mainly rewards high speeds within set limits. 
When the vehicle's speed is less than or equal to the speed limit, the efficiency reward is set to the ratio of the vehicle's current speed to the speed limit as \[R_{efficient}=\frac{v_{ego}}{v_{limit}} \tag{4}\] When the vehicle's speed is greater than the speed limit, the reward value decreases as the speed increases: \[R_{efficient}=1-\frac{v_{ego}-v_{limit}}{v_{max}-v_{limit}} \tag{5}\] where \(v_{ego}\) is the current speed, \(v_{limit}\) is the speed limit on the road, and \(v_{max}\) is the maximum achievable speed value of the vehicle. #### II-D3 Comfort rewards Vehicle comfort, a key performance indicator for automated driving, significantly impacts user experience. This study focuses on smooth acceleration, deceleration, and steering. The reward function considers the rate of change in acceleration/braking and steering. Lower rates of change, indicating smoother movements, yield higher rewards, while higher rates of change result in lower rewards. The calculation of the _Comfort_ reward value is as follows \[diff_{throttle}=\frac{d\,a_{throttle}}{dt} \tag{6}\] \[diff_{steering}=\frac{d\,a_{steering}}{dt} \tag{7}\] \[R_{comfort}=1-\frac{diff_{throttle}+diff_{steering}}{4} \tag{8}\] where \(diff_{throttle}\) is the rate of change of the throttle or brake, \(a_{throttle}\) is the input value of the throttle or brake, \(diff_{steering}\) is the rate of change of the steering wheel, and \(a_{steering}\) is the input value of the steering wheel. #### II-D4 Energy consumption rewards Jimenez [19] indicates that Vehicle Specific Power (VSP) can indirectly reflect vehicle energy consumption, demonstrating a roughly linear positive correlation with specific power. Hence, specific power values can be used to approximate energy consumption. Parameters for this model were calibrated by Jimenez [19]. In this study, the slope resistance term is omitted since road slope is not considered. \[VSP=v\times(1.1a+0.132)+0.000302v^{3} \tag{9}\] For the setting of the reward function, this study considers the maximum specific power value of the vehicle and uses it as a standard to normalize the value of the specific power at the current moment to the range from 0 to 1, and thus \[R_{energy}=1-\frac{VSP}{VSP_{max}} \tag{10}\] #### II-D5 Total integrated rewards In the roundabout setting, AVs will enter from any of the four entrances with a predefined exit destination. A destination reward is implemented for the agent to learn to navigate towards its objective when performing continuous actions. This reward is Boolean, i.e., it is set to 1 if the vehicle reaches the target exit and 0 otherwise: \[R_{arrive}=\begin{cases}1&\text{if the vehicle has reached the target exit}\\ 0&\text{otherwise}\end{cases} \tag{11}\] The total integrated reward function combines the aforementioned sub-reward functions through a weighted sum. Having closely similar weights for all four sub-reward functions would overcomplicate the reward function and hinder satisfactory model training. Emphasis is placed on safety and efficiency by assigning larger weights, as they are critical elements. 
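As a concrete illustration, below is a minimal Python sketch of the sub-rewards in Equations (1)-(10). Function and variable names are illustrative rather than taken from the authors' code; the weighted combination of these terms into the total reward follows in Equation (12).

```python
def safety_reward(ttc, lateral_offset, lane_width):
    # Eq. (1)-(3): lane-centering term plus time-to-collision term, weighted 0.3/0.7.
    r_lc = 1.0 - (lateral_offset / (lane_width / 2.0)) ** 2
    r_ttc = 1.0 - 3.0 / max(ttc, 1e-6)        # negative (toward -inf) when TTC < 3 s
    return 0.7 * r_ttc + 0.3 * r_lc

def efficiency_reward(v_ego, v_limit, v_max):
    # Eq. (4)-(5): reward speed up to the limit, penalize exceeding it.
    if v_ego <= v_limit:
        return v_ego / v_limit
    return 1.0 - (v_ego - v_limit) / (v_max - v_limit)

def comfort_reward(d_throttle_dt, d_steering_dt):
    # Eq. (6)-(8): smoother changes of throttle/brake and steering give higher reward.
    # Magnitudes are assumed here so that jerky inputs are penalized regardless of sign.
    return 1.0 - (abs(d_throttle_dt) + abs(d_steering_dt)) / 4.0

def energy_reward(v, a, vsp_max):
    # Eq. (9)-(10): vehicle specific power (road-grade term omitted), normalized by its maximum.
    vsp = v * (1.1 * a + 0.132) + 0.000302 * v ** 3
    return 1.0 - vsp / vsp_max
```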
The total reward function is calculated as \[R_{total}=0.6\,R_{safe}+0.25\,R_{efficient}+0.1\,R_{comfort}+0.05\,R_{energy}+R_{arrive} \tag{12}\] ## III Experiment This study implemented DDPG, TRPO, and PPO, trained them on the _highway-env_ platform, and evaluated and compared their performances. The model training and testing are conducted on a laptop with a 12\({}^{\text{th}}\) Gen Intel Core i9-12900H CPU and an NVIDIA GeForce RTX 3070 Ti GPU. In the implementation, model fine-tuning and hyperparameter optimization play a vital role in enhancing the performance of reinforcement learning algorithms. Model fine-tuning adjusts the algorithm model's specifics and structure, while hyperparameter optimization involves selecting and adjusting the hyperparameters within the algorithm for improving performance. Typical techniques for model fine-tuning include neural network structure adjustment, e.g., tweaking the number of layers, neurons, and activation functions, to boost the algorithm's efficacy. In this research, all three DRL algorithms adopt similar network structures. Specifically, both the actor and critic networks of DDPG are designed with two hidden layers, each containing 64 neurons. TRPO and PPO utilize a Multi-Layer Perceptron (MLP) neural network with two hidden layers, each containing 64 neurons. In reinforcement learning, hyperparameters are parameters that cannot be optimized iteratively during training and need to be set manually beforehand. Hyperparameter tuning involves adjusting these parameters to enhance algorithm performance. This study employs grid search to optimize hyperparameter values, preserving or excluding hyperparameter combinations based on the decrease or increase of the reward function during training. ## IV Results and Analysis This study conducted a thorough quantitative comparison of the DDPG, PPO, and TRPO algorithms. The evaluation considers factors such as convergence speed during training, driving efficiency, comfort, lane deviation, and collision rate of autonomous vehicles. Due to differences in reward function discount factors across algorithms, this study extracted these metrics during the model testing phase rather than directly comparing average reward values from training. ### Comparison of Convergence Speed Evaluating the convergence speed of algorithms is crucial in deep reinforcement learning, given the dependency on high-performance computing resources and the time-intensive nature of training. This study compared the training progression of the DDPG, TRPO, and PPO algorithms, as shown in Figure 3, which illustrates the variations in average reward values over time for each algorithm (Figure 3: The training reward value of the three DRL algorithms: (a) DDPG, (b) PPO, (c) TRPO). From the observations in Figure 3, all three selected algorithms, i.e., TRPO, PPO, and DDPG, manage to elevate the reward value to approximately 1000 and then maintain a stable range. TRPO, with the quickest convergence speed, reaches a stable maximum reward in about 300 episodes. PPO follows, converging to a similar reward value in approximately 400 episodes. DDPG, with its more parameter-heavy nature, converges more slowly, stabilizing only after nearly 2300 episodes. ### Comparison of Model Performance The trained model can be invoked for testing using PyTorch's _model.load()_ function. The testing records details about the AV's state at each time step, such as throttle and steering states, and collision status. 
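For illustration, a minimal sketch of the training-and-testing pipeline described above is shown below. It assumes the sb3-contrib implementation of TRPO and a gymnasium-compatible version of _highway-env_; the training budget, file names, and logging details are placeholders rather than the authors' exact settings.

```python
import gymnasium as gym
import highway_env  # noqa: F401
from sb3_contrib import TRPO   # TRPO is provided by sb3-contrib; PPO and DDPG live in stable_baselines3

env = gym.make("roundabout-v0")

# MLP policy with two hidden layers of 64 neurons each, matching the network structure above.
model = TRPO("MlpPolicy", env, policy_kwargs=dict(net_arch=[64, 64]), verbose=1)
model.learn(total_timesteps=200_000)          # illustrative training budget
model.save("trpo_roundabout")

# Reload the trained policy and roll out test episodes, logging per-step data
# from which collision rate, efficiency, comfort, and energy metrics can be computed.
model = TRPO.load("trpo_roundabout", env=env)
logs = []
for episode in range(50):
    obs, info = env.reset()
    done = truncated = False
    while not (done or truncated):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, truncated, info = env.step(action)
        logs.append({"episode": episode, "action": action, "info": info})
```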
These data enable calculating various performance evaluation metrics like efficiency, safety, and comfort level. Automated vehicle evaluation typically involves a comprehensive evaluation system that encompasses three stages, i.e., simulation testing, closed road testing, and open road testing. Each stage requires specific evaluation metrics and weights, along with effective evaluation methods to ensure safety and improve performance. This study performs simulation testing of automated roundabout driving in the aforementioned environment. In the testing phase, the trained model is invoked. The AVs navigate the roundabout environment based on actions outputted by the invoked model, given the observed state space. For each DRL algorithm, the model is tested over 50 iterations from entering to exiting the roundabout. An observation function extracts the average collision rate, lane-centering loss value, efficiency, comfort, and energy consumption level during these tests. The average values of these five metrics throughout the 50 rounds of testing will be used as the performance indicators. The calculation of the collision rate is shown in Equation (13), \[\textit{Collision Rate}=1-\frac{num_{collision}}{T}\times 10^{3} \tag{13}\] where \(num_{collision}\) is the number of vehicle collisions during the entire simulation test, and \(T\) is the total simulation time step of the 50 rounds of testing (larger than 5000). This calculation converts the collision performance into a score of 0 to 1. The impact of the above five evaluation indicators on automated driving is different, and thus the weight of each indicator needs to be further analyzed. For that, this study utilized the Analytic Hierarchy Process (AHP) method to determine the weights of the five testing indicators. Details of the AHP process are provided in the supplementary materials. The final estimated weight values are shown in TABLE II. Each test metric is computed as its average value across all 50 rounds of testing, normalized to a range between 0 and 1. TABLE III shows the testing results of the three selected DRL algorithms. The results show that TRPO outperforms the other two DRLs in collision rate, lane-centering loss, and efficiency metrics, though it lags slightly in comfort level and energy consumption compared to the other two algorithms. Overall, TRPO achieved the highest integrated test score, surpassing both DDPG and PPO. DDPG, while weakest in terms of collision rate, demonstrates better lane-centering and efficiency performance than PPO, yet falls behind TRPO. While PPO excels in comfort and energy consumption, it lags behind TRPO in terms of the other three metrics. Despite individual algorithm strengths in certain aspects, overall, TRPO performs the best. Regarding model characteristics, DDPG uses a deep Q-network to estimate the optimal action-value function, differing from TRPO and PPO, which utilize natural policy gradient algorithms with distinct optimization constraints. For exploration, DDPG applies noise-induced action perturbations suitable for continuous action spaces, although possibly resulting in slower convergence. In contrast, TRPO and PPO use stochastic policies, usually providing a more effective search for globally optimal solutions. Unlike DDPG's instability due to hyperparameter sensitivity, TRPO and PPO exhibit robustness and stability thanks to their conservative optimization strategies. 
To sum up, TRPO excels in collision rate, lane-centering, and efficiency, and delivers the best overall testing score; PPO is distinct in comfort and energy consumption, and follows TRPO regarding the overall testing score; while DDPG may be hampered by its sensitivity to hyperparameters and less effective exploration strategies, leading to the worst overall testing performance. ### Ablation Study: Model Adaptability in Other Scenarios To test the adaptability of the trained TRPO model across other driving scenarios, it was deployed and tested on _highway driving_ and _merging_ maneuvers in _highway-env_. The TRPO model, trained only on roundabout scenarios, showed a certain degree of competence in these new driving tasks. Subjective evaluation by ten experts was conducted to rate the model's performance across three dimensions: lane keeping, car following, and lane changing (scored 1-3, with 3 being the best). Average scoring results are presented in TABLE IV, showing the model's proficient lane-keeping and car-following capabilities in the highway driving scenario. Regarding lane-changing tasks, the model did not perform well. This is understandable: compared with highway driving and merging, roundabout driving involves few, and different, lane changes during training. \begin{table} \begin{tabular}{|c|c|} \hline **Indicator** & **Weight value** \\ \hline Average collision rate test value & 0.4764 \\ \hline Average lane-centering loss & 0.2853 \\ \hline Average efficiency & 0.1428 \\ \hline Average comfort level & 0.0634 \\ \hline Average energy consumption level & 0.0320 \\ \hline \end{tabular} \end{table} TABLE II: Weight values of the evaluation indicators \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Algorithm** & **Collision rate** & **Lane-centering** & **Efficiency** & **Comfort** & **Energy consumption** & **Total score** \\ \hline DDPG & 0.43 & 0.8653 & 0.8872 & 0.8846 & 0.8058 & 0.6606 \\ \hline PPO & 0.68 & 0.8385 & 0.8784 & **0.9836** & **0.8103** & 0.7769 \\ \hline TRPO & **0.73** & **0.9322** & **0.9295** & 0.8627 & 0.7995 & **0.8267** \\ \hline \end{tabular} \end{table} TABLE III: Model testing results TABLE IV: Subjective evaluation results on model adaptability Limited by space, the details of the ablation study are elaborated in the supplementary materials available at [https://drive.google.com/drive/folders/1LialsmoiifZXioEBXYg-14KRx14HMr](https://drive.google.com/drive/folders/1LialsmoiifZXioEBXYg-14KRx14HMr). In this shared folder, demo videos are also provided to better visualize the results. ## V Conclusion This study presents a deep reinforcement learning (DRL) based framework for automated driving through complex roundabout scenarios with surrounding human-driven vehicles. Based on the _highway-env_ platform, this study designed the corresponding state and action space, together with an integrated multi-factor reward function considering safety, efficiency, comfort, and energy consumption. Using _stable-baselines3_, the study customized and implemented three DRLs, i.e., DDPG, PPO, and TRPO, to achieve automated driving through roundabouts. 
The models were trained in simulation and fine-tuned by hyperparameter optimization using a grid-search approach. To verify model performance, this study constructed an evaluation methodology considering different indicators, e.g., safety (collision rate and lane-centering loss), efficiency, comfort level, and energy consumption. Testing results demonstrated that the implemented DDPG, PPO, and TRPO models could all tackle roundabout driving; in particular, PPO performed well in terms of comfort level and energy consumption, while TRPO excelled in terms of safety and efficiency and achieved the best integrated overall testing score. To gauge the model's robustness across different driving scenarios, this study tested the TRPO model, trained only on roundabout driving, on various other driving tasks. The model maintained good performance in highway driving and merging scenarios, albeit not as remarkable as in the roundabout context. With these findings, this paper provides preliminary evidence for developing automated driving with DRL in complex, real traffic environments.
2301.10904
GPU-based Private Information Retrieval for On-Device Machine Learning Inference
On-device machine learning (ML) inference can enable the use of private user data on user devices without revealing them to remote servers. However, a pure on-device solution to private ML inference is impractical for many applications that rely on embedding tables that are too large to be stored on-device. In particular, recommendation models typically use multiple embedding tables each on the order of 1-10 GBs of data, making them impractical to store on-device. To overcome this barrier, we propose the use of private information retrieval (PIR) to efficiently and privately retrieve embeddings from servers without sharing any private information. As off-the-shelf PIR algorithms are usually too computationally intensive to directly use for latency-sensitive inference tasks, we 1) propose novel GPU-based acceleration of PIR, and 2) co-design PIR with the downstream ML application to obtain further speedup. Our GPU acceleration strategy improves system throughput by more than $20 \times$ over an optimized CPU PIR implementation, and our PIR-ML co-design provides an over $5 \times$ additional throughput improvement at fixed model quality. Together, for various on-device ML applications such as recommendation and language modeling, our system on a single V100 GPU can serve up to $100,000$ queries per second -- a $>100 \times$ throughput improvement over a CPU-based baseline -- while maintaining model accuracy.
Maximilian Lam, Jeff Johnson, Wenjie Xiong, Kiwan Maeng, Udit Gupta, Yang Li, Liangzhen Lai, Ilias Leontiadis, Minsoo Rhu, Hsien-Hsin S. Lee, Vijay Janapa Reddi, Gu-Yeon Wei, David Brooks, G. Edward Suh
2023-01-26T02:24:01Z
http://arxiv.org/abs/2301.10904v3
# GPU-based Private Information Retrieval for On-Device Machine Learning Inference ###### Abstract On-device machine learning (ML) inference can enable the use of private user data on user devices without remote servers. However, a pure on-device solution to private ML inference is impractical for many applications that rely on embedding tables that are too large to be stored on-device. To overcome this barrier, we propose the use of private information retrieval (PIR) to efficiently and privately retrieve embeddings from servers without sharing any private information during on-device ML inference. As off-the-shelf PIR algorithms are usually too computationally intensive to directly use for latency-sensitive inference, we 1) develop a novel algorithm for accelerating PIR on GPUs, and 2) co-design PIR with the downstream ML application to obtain further speedup. Our GPU acceleration strategy improves system throughput by more than \(20\times\) over an optimized CPU PIR implementation, and our co-design techniques obtain over \(5\times\) additional throughput improvement at fixed model quality. Together, on various on-device ML applications such as recommendation and language modeling, our system on a single V100 GPU can serve up to \(100,000\) queries per second-a \(>100\times\) throughput improvement over a naively implemented system-while maintaining model accuracy, and limiting inference communication and response latency to within \(300\)KB and \(<100\)ms respectively. ## I Introduction Privacy is an important consideration for real-world machine learning (ML) applications as both regulations [6, 9] and corporate policies [3, 10] require more privacy protection of user data. For example, recent privacy policies for mobile platforms [3, 10] limit the type of user data that can be used for server-side computation [4]. Future ML services will need to increasingly rely on on-device compute to utilize most private user data. On-device ML inference is a promising solution to stronger privacy regulations and policies [11, 39, 50, 76], as it enables model inference without requiring clients to share raw private input features with the service provider. Unfortunately, a completely on-device ML inference solution is impractical for many applications such as recommendation or language modeling, as these applications often require access to an embedding table that is too large to store on device. For example, recommendation models access tables that often take gigabytes or even terabytes of memory [8, 40, 61, 60, 77]. Similarly, language models access word embeddings, which may take up to gigabytes of storage [23, 58, 63]. Large embedding tables pose a dilemma: storing large embedding tables on device is impractical given device limitations while storing them in the cloud and directly accessing them in the clear could leak private information. To address this issue, we propose using private information retrieval (PIR) to privately query large embedding tables stored on centralized servers. In this work, we consider distributed point function (DPF)-based PIR, in which private embedding lookups are performed by constructing and evaluating DPFs on two non-colluding servers (Figures 1 and 3). A two-server DPF-PIR scheme is attractive as it is much more efficient in terms of computation and communication relative to other single-server PIR schemes [35, 56]. 
Despite their advantages, DPF-based PIR protocols still exhibit massive computational overhead [25, 36], making them difficult to deploy in real applications. The computational overhead stems from evaluating the DPFs on the servers, which entails executing a significant number of expensive cryptographic operations [25, 36]. For example, expanding a typical DPF for a table with one million entries requires performing at least one million AES-128 encryption operations. These costs are amplified during ML inference where a model may access multiple embedding entries simultaneously [40, 41]. The computation and communication requirements of DPF-based PIR make deploying it to real-world ML applications a considerable challenge. ### _Our Contributions_ In this work, we develop a system to efficiently and privately serve embeddings for on-device ML. Embedding accesses for on-device ML have several unique properties and requirements relative to other applications that might use PIR: 1) embedding table entries are often short, on the order of 128-256 bytes, 2) multiple embedding table entries are often accessed together in a batch as part of a single model inference, and 3) throughput, latency, and model quality are all critical to an application's success. We leverage these properties to design a novel GPU acceleration scheme for efficiently performing PIR on GPUs, and, additionally, co-design PIR with the ML application to facilitate better trade-offs between model quality and system performance. Our technical contribution is similar in its nature to how recent studies co-optimize algorithms and GPU implementations to significantly improve the performance of other cryptographic primitives such as fully-homomorphic encryption (FHE) [28, 31, 34, 56]. Our key contributions are discussed in more detail below. **GPU-accelerated PIR** We develop a set of novel optimizations to efficiently perform PIR on GPUs. Our optimizations enable high-throughput, low-latency DPF execution, allowing us to scale to tables with millions of entries. We observe that DPF evaluation is heavily compute-bound due to their heavy cryptographic instruction mix, and leverage the fact that GPUs are especially well suited to perform these computationally heavy operations. Yet, performing PIR on a GPU requires exploiting multiple types of parallelism in PIR while carefully balancing computation, communication, and memory usage. Our GPU acceleration, over an optimized CPU baseline [12], obtains \(>1,000\times\) speedup over single-threaded CPU execution, and \(>20\times\) speedup over multicore execution. To the best of our knowledge, this work represents the first to explore high-performance GPU implementations of DPFs. We note that our GPU implementation accelerates the state-of-the-art DPF algorithm [36], which exhibits an optimal communication cost of \(O(\log(n))\) and an optimal computation complexity of \(O(n)\). Beyond private embedding table accesses for ML, our GPU PIR can be used to accelerate any PIR applications such as checking compromised passwords. **ML + PIR Co-Design** To further improve performance, we develop strategies utilizing application-specific data access patterns to co-design PIR with the ML application. Traditional batch PIR algorithms [18, 45, 48], which allow privately obtaining multiple entries together, may impact ML inference quality because they only retrieve entries probabilistically, dropping some queries spuriously. 
We co-design a new batch PIR algorithm for ML tasks to obtain better model quality vs system performance tradeoffs. We comprehensively evaluate the resulting performance improvements and model quality of our new batch PIR scheme on applications including WikiText2 language model [57], Movielens recommendation [43], and Taobao recommendation [15]. We find that by utilizing application-specific data access patterns, we can increase the ML inference throughput by up to \(100\times\) over a straightforward PIR system design on a multi-core CPU, while maintaining the model quality and limiting inference communication and latency within \(300\) KB and \(100\) ms, respectively. ## II Private On-Device ML Inference ### _Private On-Device Inference: Threat Model_ The goal of private on-device inference is to perform ML inference using data on a user device without revealing them to a server owned by a service/cloud provider. We assume that the computation part of the ML model can run on the user device given the increasing trend of hardware accelerators and optimizations for client SoCs, but the _embedding tables_ for categorical/sparse features (described below) are too large to be placed on individual devices. The embedding tables are still placed on servers and accessed remotely. In this model, the user/client device and its software are trusted while remote servers and communication channels are untrusted. Like other multi-party computation (MPC) settings [28, 31, 34], we consider a two-server model where there are two non-colluding servers; each server may try to obtain private user data from the embedding table accesses from the clients, but the two do not communicate/collude with each other. This work only considers the confidentiality of the user data and does not consider the integrity/correctness of the inference result. Figure 1 compares the traditional cloud-based ML services and the proposed on-device ML (Fig. 1: Left: the traditional non-private approach to ML inference. Right: our proposed approach for private on-device ML inference. Our proposed approach stores large embedding tables on two non-colluding servers. Using PIR, a client privately obtains embeddings from the two servers, which are subsequently used as inputs to their neural network). ### _Key Challenge: Large Embedding Tables_ Unfortunately, the embedding tables that many ML models employ are too large to be sent and stored on individual devices [8, 40, 60, 61, 77], making a pure on-device inference solution impractical. An embedding table is a large table that maps categorical features into dense vectors that encode semantic information. For example, categorical (sparse) features may include a user's click or search history. The value of a categorical feature is used as an index to an embedding table where each row of the table holds the vector corresponding to that categorical feature value (Figure 2). Embedding tables have as many rows as the number of possible values in the categorical feature space, so their size can grow quickly. For example, **recommendation models** use several user and product input features to predict whether a user is likely to interact (e.g., click or purchase) with the product [61, 77]. These models may use user data such as the list of products the user recently purchased [77]. As the number of products can be on the order of millions, the corresponding embedding table can reach several GB to TB in size [39, 40, 60]. 
Even though some systems may use a hash to reduce the size of an embedding table by sharing embedding table entries among multiple feature values, the embedding tables in real-world recommendation systems are still quite large. The large memory requirements of embedding tables prevent them from being stored on-device, and hosting them on the server may be the only practical solution. Another example is **language models** that empower applications such as next-word prediction, language translation, and speech recognition. Language models map words into a latent embedding space using word embedding tables [57]. As there may be hundreds of thousands of different words, with each embedding vector being hundreds of bytes long, it quickly becomes impractical to store the entire word embedding table on-device, especially for natural language translation models supporting multiple languages [62, 32]. We emphasize that although there are alternative techniques to compress these features (e.g., character embeddings, transformers, sentence-level representations, etc.), word embeddings are more efficient to train in a regime with less training data [32]. Table I summarizes the size of the embedding tables of some popular datasets/models, the size of which ranges from several MBs to hundreds of GBs. ### _Our Approach: On-Device ML Inference with PIR_ To enable private on-device ML applications that require access to large embedding tables, we propose using private information retrieval (PIR) [27, 31]. PIR allows a user to query a table without revealing which index was accessed to the table holder, i.e., the server that hosts the embedding table. We propose to keep large embedding tables on the application provider's server, and to use PIR to query the table upon an embedding access by a client's device (Figure 1). We choose to use PIR rather than oblivious RAM (ORAM) [38, 69, 75], another popular cryptographic technique to hide an access pattern to memory, because ORAM is designed to protect accesses from a single entity. In order to use ORAM to hide accesses to embedding tables, the accesses from multiple user devices need to be serviced by a trusted ORAM controller, either a trusted proxy or secure hardware, while PIR allows private accesses from many individual user devices without such a trusted party. Additionally, recent work [31] shows PIR outperforms the state-of-the-art ORAM constructions, despite its O(n) asymptotic complexity. Alternatively, hardware-based protection such as a trusted execution environment (TEE) [2, 5, 13, 70] combined with ORAM [33, 55, 64, 65, 66, 73, 26, 74] may be used as a way to protect server-side accesses to embedding tables. However, the TEE-based approach places additional trust in hardware vendors and the security of today's TEE protection mechanisms. Moreover, efficient hardware-based ORAM requires a custom hardware memory controller, which is not available today. Our PIR approach enables private embedding table accesses using today's commercial off-the-shelf hardware like GPUs without needing to trust any hardware in the cloud. Within the scope of PIR techniques, we use a PIR protocol based on a distributed point function (DPF) [36, 25], which protects accesses using two non-colluding servers. DPF-based PIR methods are more efficient in terms of communication and computation than single-server PIR schemes that employ homomorphic encryption [29, 56, 55]. 
A key challenge in employing DPF-based PIR is its high computational intensity due to its heavy cryptographic instruction mix. In the following section, we describe how DPF-based PIR can be efficiently accelerated on GPUs. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Application** & **\# of Embeddings** & **Entry Size** & **Embedding Table Size** \\ \hline **Criteo 1 TB (Recommendation)** & \(>\)4,000,000,000 & \(\sim\)128B & \(>\)476 GB \\ \hline **Taobao (Recommendation)** & \(\sim\)45,000,000 & \(\sim\)128B & \(\sim\)5 GB \\ \hline **FastText Word Vectors (Language Model)** & \(\sim\)2,000,000 & \(\sim\)1024B & \(>\)1.9 GB \\ \hline **WikiText2 (Language Model)** & \(\sim\)131,000 & \(\sim\)512B & \(\sim\)64 MB \\ \hline **Movielens-20M (Recommendation)** & \(\sim\)27,000 & \(\sim\)128B & \(\sim\)3 MB \\ \hline \end{tabular} \end{table} TABLE I: Embedding table sizes for some popular datasets and models spanning across language and recommendation. Fig. 2: Embedding tables map indices – numerical ids that represent information like user's text messages or search history – into vectors of features that are inputs to a neural network model. ## III Accelerating PIR using GPUs Algorithms for PIR exhibit significant computational overhead due to their heavy cryptographic instruction mix and cannot be immediately adopted for private on-device inference. Below, we 1) briefly introduce PIR and DPF, 2) analyze their computational workload to understand how GPUs may accelerate them, and 3) describe our GPU acceleration algorithm. ### _Fundamentals of PIR and DPF_ Private information retrieval (PIR) based on distributed point functions (DPF) allows a user to access an index in a table shared across two non-colluding servers without leaking the index to the table holders. In DPF-PIR, the client sends a key that represents the index it wants to privately query. The server, upon receiving the key, performs expensive cryptographic operations to service the user's query. #### III-A1 Naive PIR Assume a client \(C\) seeks to privately access entry \(T[i]\in\mathbb{F}_{p}^{D}\) from a table \(T\in\mathbb{F}_{p}^{L\times D}\) that is duplicated across two non-colluding servers, \(S_{1}\) and \(S_{2}\). Here, \(L\) is the number of entries in the table, \(D\) is the vector length of each entry, and \(\mathbb{F}_{p}\) is an integer field with modulus \(p\). A simple but highly inefficient approach is for the client \(C\) to generate and send a random vector \(r_{1}\in\mathbb{F}_{p}^{L}\) and a second vector \(r_{2}\in\mathbb{F}_{p}^{L}\) to \(S_{1}\) and \(S_{2}\), such that they add up to an indicator vector \(I(i)\) whose entries are all 0's except at the \(i^{th}\) position where it is 1 (\(r_{1}+r_{2}=I(i)\)). Upon receiving the vectors, the servers individually compute and return \(T\times r_{1}\) and \(T\times r_{2}\) to the client, from which the client can retrieve \(T\times(r_{1}+r_{2})=T\times I(i)=T[i]\). Information theoretic privacy is ensured as \(r_{1}\) and \(r_{2}\) are the _secret shares_ of the indicator vector that do not leak any information about \(i\) individually [68]. This simple approach incurs large communication overhead because the size of \(r_{1}\) and \(r_{2}\) is proportional to the size of table \(T\), making the communication overhead \(O(L)\). 
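To make this concrete, below is a minimal NumPy sketch of the naive two-server scheme. A small illustrative modulus is used so that plain 64-bit integer arithmetic cannot overflow; a real deployment would use a larger field.

```python
import numpy as np

P = 8191          # small prime modulus for F_p, illustrative only
L, D = 1024, 16   # table with L entries, each a length-D vector

def client_query(i, L, rng):
    # Secret-share the indicator vector I(i): r1 + r2 = I(i) (mod p).
    r1 = rng.integers(0, P, size=L, dtype=np.int64)
    r2 = (-r1) % P
    r2[i] = (r2[i] + 1) % P
    return r1, r2

def server_answer(table, r):
    # Each server multiplies its share of the indicator vector with the (public) table.
    return (r @ table) % P            # a length-D vector of field elements

def client_reconstruct(ans1, ans2):
    return (ans1 + ans2) % P          # equals T[i] because r1 + r2 = I(i)

rng = np.random.default_rng(0)
table = rng.integers(0, P, size=(L, D), dtype=np.int64)
r1, r2 = client_query(42, L, rng)
recovered = client_reconstruct(server_answer(table, r1), server_answer(table, r2))
assert np.array_equal(recovered, table[42])
```

Each share \(r_{1}\), \(r_{2}\) is individually uniform and reveals nothing about the queried index, but the client must send \(L\) field elements to each server, which is exactly the \(O(L)\) communication cost that the DPF construction described next removes.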
#### Iii-A2 Dpf-Pir The generalization of the approach described above is a cryptographic primitive known as a **distributed point function** (DPF). Broadly, a DPF is an algorithmic construct that allows a client to **generate** two compact keys \(k_{a}\), \(k_{b}\), such that when the keys are **expanded** across a set of indices, they yield secret shares of the indicator vector \(I(i)\). Formally, a DPF consists of two algorithms * \(Gen(1^{\lambda},i\in 1..L)\rightarrow(k_{a},k_{b})\) takes security parameter \(\lambda\) and input \(i\), and generates two keys \(k_{a}\), \(k_{b}\). * \(Eval(k,j)\rightarrow\mathbb{F}_{p}\) takes a key \(k\) and an evaluation index \(j\) and outputs a field element. such that, \(Eval(k_{a},j)+Eval(k_{b},j)=\begin{cases}1&j=i\\ 0&j\neq i\end{cases}\). A DPF should be computationally secure, meaning that given just one of the keys and no other information, it should be difficult to recover \(i\) without doing computation proportional to \(O(2^{\lambda})\). Given a DPF, client \(C\) can generate keys \(k_{a}\), \(k_{b}\) using \(Gen\) and send the keys to \(S_{1}\) and \(S_{2}\), respectively. The two servers, upon receiving \(k_{a}\) and \(k_{b}\), compute \(T\times Eval(k_{a},\{0,1,\ldots,L\})\) and \(T\times Eval(k_{b},\{0,1,\ldots,L\})\) and return the result, from which the client can obtain \(T\times(Eval(k_{a},\{0,1,\ldots,L\})+Eval(k_{b},\{0,1,\ldots,L\}))=T\times I(i)=T[i]\). Figure 3 depicts the overall DPF-PIR scheme. There are many different implementations of DPFs, each with different computation/communication tradeoffs. We consider the DPF construct described in [36], which obtains optimal asymptotic communication complexity of \(O(\lambda\log(L))\) and optimal evaluation computation complexity of \(O(\lambda L)\). The evaluation of DPF involves expanding a GGM-style [37] computation tree. Concretely, in this DPF algorithm, key \(k\) is decomposed into two codewords \(\{C_{1}\in\mathbb{F}_{2^{\lambda}}^{2\times\log(L)},C_{2}\in\mathbb{F}_{2^{ \lambda}}^{2\times\log(L)}\}\), and the computation tree for DPF evaluation is specified by the following recurrence relation: \[Eval(k,i)=Expand(k,i,d=\log(L))\] \[=PRF_{Expand(k,\lfloor i/2\rfloor,d-1)}(i\text{ mod }2)+\] \[C_{Expand(k,\lfloor i/2\rfloor,d-1)}\text{ mod }_{2}[i\text{ mod }2,d]\] where \(PRF\) is a pseudorandom function (i.e AES-128). Thus, computing \(Eval(k_{a},\{0,1,\ldots,L\})\) involves evaluating each node of this computation tree (depicted in Figure 4). In terms of computation overhead, evaluating a single node requires a single \(PRF\) call and an addition, hence, the overall computation overhead of evaluating the entire set \(\{0,1,\ldots,L\}\) is \(O(\lambda L)\). Communication overhead is proportional to the size of the keys, and, as these keys consist of codewords that are logarithmic in length, this amounts to \(O(\lambda\log(L))\) total communication. In practice, \(\lambda\) is typically a 128-bit field integer to ensure sufficient computational security. A figure depicting the DPF computation tree and its recurrence relation is shown in Figure 4. After computing the leaf nodes of the tree, the output is a vector of 128-bit field values; the final secret shares of the entry are obtained by performing an integer dot product between the computed 128-bit field values and the table. In practice, the dot products for multiple queries can be performed together as a single matrix-matrix multiplication. We refer to [36] for details on key generation. Fig. 3: DPF based PIR scheme. 
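To make the shape and cost of the \(Eval\) computation concrete, the toy sketch below expands a GGM-style tree level by level with a stand-in PRF. It deliberately omits key generation and the correctness/security machinery of the construction in [36] (the codewords here are arbitrary random strings and the correction is applied by XOR), so it only illustrates the computational pattern: \(O(L)\) PRF calls for a key of size \(O(\log L)\):

```python
import hashlib
import os

def prf(seed: bytes, bit: int) -> bytes:
    # Stand-in PRF; in the paper AES-128 or Salsa20 plays this role (see the PRF selection study).
    return hashlib.sha256(seed + bytes([bit])).digest()[:16]

def eval_all(seed: bytes, codewords, log_L: int):
    """Expand the tree level by level: each parent spawns two children with one PRF
    call plus a codeword 'correction', so evaluating all L = 2**log_L leaves costs
    O(L) PRF calls while the key (seed + codewords) stays O(log L) in size."""
    level = [seed]
    for d in range(log_L):
        nxt = []
        for node in level:
            select = node[-1] & 1                      # toy stand-in for "Expand(...) mod 2"
            for bit in (0, 1):
                mask = codewords[select][bit][d]
                nxt.append(bytes(a ^ b for a, b in zip(prf(node, bit), mask)))
        level = nxt
    return level                                       # one 128-bit value per table index

log_L = 10                                             # L = 1024 table entries
codewords = [[[os.urandom(16) for _ in range(log_L)] for _ in range(2)] for _ in range(2)]
leaves = eval_all(b"\x00" * 16, codewords, log_L)
print(len(leaves), "leaves; PRF calls:", 2 * (2 ** log_L) - 2)
```

The important structural point for GPU acceleration is visible here: every node in a level can be processed independently once its parent exists, and the per-node work is one PRF call plus a cheap correction.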
### _Workload Analysis of DPF-PIR_ DPF-PIR consists of two key computational operations: 1) expanding the DPF keys to obtain secret shares of the one-hot indicator vector and 2) performing a dot product between the vectors against the table of entries. These two operations exhibit significant compute parallelism that makes them well suited for GPU execution. However, there are also significant challenges to parallelization. Below, we analyze the parallelism inherent in these workloads and the challenges associated with parallelizing them. #### Iii-B1 Parallelism within a Single DPF Evaluating a single DPF involves the expansion of a balanced binary tree to its leaf nodes (Figure 4). The final leaf nodes represent secret shares of the final indicator vector that are then used in the dot product with the table. As computing each node of the DPF evaluation tree relies only upon the completion of evaluating its parent node, we can parallelize the computation of the nodes provided that their parent node is already computed (Figure 5, right). Optimistically, each node at the same level of the DPF evaluation tree can be computed simultaneously, allowing \(2^{i}\) nodes to be derived in parallel at the \(i\)-th level. However, parallelizing a single DPF is not trivial, due to its tree computation structure. In particular, there is little parallelism near the root of the computation tree, making smaller DPFs difficult to parallelize. Conversely, towards the leaf nodes of the tree, there is a significant number of nodes, and computing them all simultaneously may run into memory limitation issues on larger tables. #### Iii-B2 Parallelism across Multiple Queries A batch of DPFs may be expanded simultaneously by expanding multiple keys in a lockstep manner (Figure 5, left). Batching helps expose parallelism near the root of the computation tree where single-query parallelism is limited. However, batching also significantly increases memory usage near the leaves when caching the outputs of the nodes. Hence, it is important to ensure that execution stays within the memory limitations, and to balance memory resources to ensure maximal parallelism. #### Iii-B3 Computation and Memory Characteristics DPF-PIR consists of two main steps: the key expansion and the following dot product with the table. Both can be highly compute-bound, which makes GPUs an ideal platform for acceleration. As seen in Figure 4, the size of a DPF key (\(k_{a}\) and \(k_{b}\)) is \(O(\log(L))\) for a table of \(L\) entries, making the memory access of reading the key and writing the output \(O(\log(L))+O(D)\). In comparison, the computation needed to expand the key is \(O(L)\), making the key expansion heavily compute-bound if \(D\ll L\). For recommendation and language models, \(L\) can be up to 4 billion while \(D\) is usually between 16 and 1024 (Table I), making DPF-PIR extremely compute intensive. While the dot product with the table is not necessarily compute-bound by itself, it can become compute-bound if we batch enough queries. We furthermore observe that the major bottleneck of the execution lies in the key expansion, due to its heavy cryptographic instruction mix. Finally, we note that DPF evaluation and matrix multiplication exhibit significantly different computational patterns, and naively performing one after the other leads to significant inefficiencies. ### _Designing Efficient GPU Algorithm for DPF-PIR_ This subsection describes our efficient GPU algorithm for accelerating DPF-PIR.
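As a reference point for the optimizations described below, the memory pressure that motivates the breadth-first buffer (Optimization 1) and the bounded hybrid stack (Optimization 2) can be estimated with a short back-of-envelope script. The batch size \(B\), expansion factor \(K\), and 16-byte node size are illustrative choices, not necessarily the tuned values used in the experiments:

```python
import math

node_bytes = 16            # 128-bit intermediate tree values
B, K = 512, 4096           # batch size and expansion factor (illustrative choices)

for L in (2**16, 2**20, 2**22):                                 # table sizes in the range of Table I
    breadth_first = B * L * node_bytes                          # O(BL) buffer of Optimization 1
    hybrid = B * K * (math.log2(L) / 2) * node_bytes            # O(BK log(L)/2) stack of Optimization 2
    print(f"L = {L:>9,}: breadth-first buffer {breadth_first / 2**30:6.2f} GiB,"
          f" hybrid stack {hybrid / 2**30:6.2f} GiB")
```

For multi-million-entry tables the fully breadth-first buffer quickly exceeds the memory of a single GPU unless the batch size is cut, whereas the hybrid stack stays in the hundreds of megabytes, which is the motivation for the hybrid strategy below.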
Our base GPU algorithm leverages batched execution of multiple queries to a single table. The base algorithm is successively improved with a series of optimizations we propose that manage GPU memory limitations and balance memory and compute. #### Iii-C1 Base Algorithm - Batch DPF Execution Our base approach leverages the fact that each DPF-PIR query to the same table can be expanded in an embarrassingly parallel fashion. We have each GPU thread-block handle expanding a separate DPF, with each thread of each thread-block expanding a different set of indices. Concretely, each thread of a thread-block individually computes a leaf node of the DPF evaluation tree, starting from the root node and traversing down to the leaf node. This base approach is shown in Figure 6(a). Although effective, the base approach has one key limitation: each thread-block expands the same node multiple times near the root, leading to significant redundant computations (Figure 6(a), overlapped regions). This observation leads us to our first optimization. #### Iii-C2 Optimization 1 - Eliminating Redundant Computations We optimize away the redundant computations near the root by introducing global memory buffers that cache and reuse the output of the previous level of the tree, and synchronize the thread-blocks across levels (Figure 6(b)). Our optimization operates in a breadth-first manner, where each thread of a thread-block evaluates a different node for each level of the tree, and writes the result to a global memory buffer for the next level to use without recomputation. Synchronization barriers are inserted between levels for coordination. While this optimization significantly improves over baseline, it requires a large global memory buffer of size \(O(BL)\), where \(L\) is the number of table entries (i.e., number of nodes) and \(B\) is the batch size. As \(L\) may be large for certain applications, \(B\) can be limited. Hence, on large tables parallelism is limited. The next optimization tackles this problem. Fig. 4: Evaluating a distributed point function (DPF) involves expanding a binary computation tree all the way to the leaf nodes. The figure above displays the computation pattern of evaluating a DPF, where each node is a compute operation, and each edge represents a data dependency. Fig. 5: Opportunities for parallelism. Left: multiple DPFs can be expanded simultaneously. Right: within a single DPF, each node at the same level may be computed simultaneously. #### Iii-C3 Optimization 2 - Optimizing Memory Usage To limit memory usage but enable efficient parallel execution of workloads, we switch to a hybrid breadth-first and depth-first evaluation strategy. Instead of expanding _all_ the nodes in each level in a fully breadth-first manner, we expand only \(K\) nodes at a time in a depth-first fashion (Figure 6(c)). In this approach, the global memory buffer acts like a stack, where each depth-first traversal down the tree pushes \(K/2\) extra nodes to the stack. The memory overhead is reduced to \(O(BK\log(L)/2)\) instead of \(O(BL)\), as the size of the stack grows by \(K/2\) nodes each level down, and there are \(\log(L)\) levels in the tree. The expansion factor \(K\) is empirically selected to expose enough parallelism while remaining within memory constraints. #### Iii-C4 Optimization 3 - Matrix Multiplication Fusion We observe that the DPF and the subsequent matrix multiplication can be fused together for additional performance improvement.
Upon reaching the leaf nodes, instead of writing the output to a buffer to later be used for the subsequent matrix multiplication, we can immediately compute a dot product between the partial output and the corresponding table entry for that index (Figure 6(d)). We accumulate these partial results in a thread-local register, and run a parallel tree-sum reduction at the end to obtain the final result. Fusing the DPF with the following matrix multiplication allows us to reduce memory usage and interleave compute and memory operations, overall leading to higher performance. #### Iii-C5 Optimization 4 - Cooperative Groups for Very Large Tables We also leverage cooperative groups [14] to accelerate DPF-PIR for very large tables (Figure 6(e)). On very large tables (e.g., with 4 million entries), there is sufficient parallelism to fully utilize the GPU, and batching multiple queries yields little performance benefit, and may in fact degrade latency. Hence, for very large tables, we employ cooperative groups [14] to coordinate all thread-blocks towards evaluating a single DPF tree. This technique obtains benefits when the table is very large, which may be the case in large recommendation systems [61, 77]. #### Iii-C6 Optimization 5 - PRF Selection Finally, we explore different pseudorandom functions (PRFs) to improve computational efficiency on a GPU. Standard implementations of DPF-PIR on CPUs utilize AES-128 [52]. However, unlike CPUs, GPUs lack hardware intrinsics that accelerate AES operations and hence may experience degraded latency and throughput. Thus, we explore different PRFs, including SHA-256 [42], Salsa20 [22], HighwayHash [17], and SipHash [20]. Among these, we observe that Salsa20 with 20 rounds (Salsa20-20) provides a good balance between high performance and strong security (it is a variant of a standard cryptographic stream cipher used in TLS [7]), which we use as our default PRF. However, other methods like Salsa20 with 12 rounds (Salsa20-12) or HighwayHash have better performance, potentially with lesser security guarantees. Depending on the use case, these alternative PRFs may be selected for better performance. Fig. 6: Optimizations for accelerating DPF-PIR: a) parallelize DPFs across thread-blocks, b) eliminate redundant computation, c) reduce memory use, d) interleave matmul and DPF evaluation, e) parallelize with cooperative groups for large evaluations. ## IV Accelerating Batch Retrieval with ML Co-Design In many recommendation/language models, each inference requires multiple lookups to the same embedding table. For example, recommendation models may look up the same table hundreds of times to perform a single inference [40] (e.g., a user can have multiple clicked items, if the clicked-item history is used as a feature). Multiple lookups increase costs linearly as DPF-PIR only retrieves one entry at a time. To accelerate multiple lookups to a single table, we adopt partial batch retrieval (PBR) [67], an algorithm that accelerates the retrieval of multiple entries. PBR comes at a cost; with some probability, queries are spuriously dropped, which may negatively affect model quality. Additionally, naively adopting PBR is still quite computationally expensive. Hence, we co-design PBR with ML inference to improve system performance while maintaining the model quality. ### _Background: Batch Private Information Retrieval_ Batch private information retrieval (batch-PIR) is a set of techniques to retrieve multiple private entries from a single table.
In this work, we adopt the method proposed in [67], partial batch retrieval (PBR), which operates by segmenting table \(T\) into \(\frac{L}{B}\) bins of size \(B\), and issuing individual DPF-PIR queries to each bin (Figure 7(a)). This approach saves computation by a factor of \(\frac{L}{B}\) in the best-case scenario where the client retrieves \(\frac{L}{B}\) entries that are spread across each bin. Unlike other batch-PIR methods, PBR relies on simple bucketing. However, there is a complex tradeoff space between the communication efficiency and the accuracy of the retrieval. A large \(B\) can reduce the accuracy of the retrieval if multiple desired entries fall in the same bin. Conversely, a smaller \(B\) yields fewer conflicts, but increases communication costs. This tradeoff naturally affects model quality as dropped queries affect the model's inference. ### _Co-Designing the ML Model and Batch-PIR_ To improve batch-PIR efficiency while minimizing the effect of retrieval failures, we propose several co-designs that improve the tradeoff between model accuracy and performance. #### Iv-B1 Frequency-Based Hot Table Split Many ML applications access embedding tables following a power-law distribution, where a small number of _hot_ indices account for the majority of lookups [16, 41]. We leverage this observation and add a small _hot table_ that holds the top-\(K\) frequently accessed indices in addition to the large _full table_ that holds all the embedding entries (Figure 7(b)). The hot table is constructed statically using the observed statistics from the training dataset, and a small hash table is placed on a client device to provide the hot table index for the categorical feature values that are in the hot table. However, simply using the hot table as a traditional cache is insecure as it leaks the number of queries to the hot/full tables. To avoid this information leakage, we predetermine a fixed number of queries \(Q_{hot}\) and \(Q_{full}\) to issue to the hot and full tables, respectively. The hot table leverages the fact that the most frequently queried indices can all be stored in a compact table; multiple queries issued to the hot table benefit from the lower PIR costs of the reduced table size. Because \(K\), \(Q_{hot}\), and \(Q_{full}\) are all fixed at design time, each inference does not leak additional information about the client. #### Iv-B2 Access Pattern-Aware Embedding Collocation Embedding table access patterns in ML applications tend to exhibit co-occurrence [30, 54]; certain indices are often accessed together in a single ML inference. This property allows us to collocate frequently accessed entries in the same row of the table, so that a single query can retrieve multiple embeddings that might be accessed together (Figure 7(c)). Collocation can be done by profiling the training dataset and collocating with each embedding the top-\(C\) embeddings that are most frequently retrieved alongside it. \(C\) is empirically selected. In the best-case scenario, collocation can reduce the number of queries by a factor of \(C+1\). Collocation increases computation and communication costs as larger rows are retrieved for each query. However, with the right hyperparameters, this cost may be outweighed by the increase in query hits from the additionally retrieved entries. Note that the cost of DPF computation for most tree nodes except for the leaves is independent of the size of an embedding table entry. As a result, the collocation only increases the cost at the leaves.
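The collision behaviour that drives the PBR tradeoff is easy to simulate. The sketch below assigns a batch of lookups to bins and counts how many would be dropped for different bin counts; the table size, batch size, and uniform lookup pattern are illustrative stand-ins (real traces follow the power-law distribution exploited by the hot-table split):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2**20                                          # full-table size (illustrative)
lookups = rng.choice(L, size=64, replace=False)    # one inference's embedding lookups

def pbr_plan(lookups, num_bins):
    """Partial batch retrieval: the table is split into `num_bins` bins of size
    L/num_bins and one DPF-PIR query is issued per bin; when several desired
    indices land in the same bin, only one can be served and the rest are dropped."""
    bins = {}
    for i in lookups:
        bins.setdefault(int(i) * num_bins // L, []).append(int(i))
    dropped = sum(len(v) - 1 for v in bins.values())
    return num_bins, L // num_bins, dropped

for num_bins in (32, 64, 128, 256):
    queries, bin_size, dropped = pbr_plan(lookups, num_bins)
    print(f"{queries:3d} queries over bins of {bin_size:6d} entries -> {dropped} dropped lookups")
```

Fewer, larger bins mean cheaper communication but more dropped lookups, which is exactly the accuracy/performance tradeoff that the hot-table split and collocation co-designs are meant to improve.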
## V Evaluation ### _Evaluation Setup_ We evaluate our GPU-based DPF-PIR and compare it with a state-of-the-art CPU implementation [12]. We run all GPU experiments on a NVIDIA V100 GPU, and all CPU experiments on an Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz with 28 cores. The CPU baseline is an optimized DPF-PIR implementation from Google Research [12], which uses AES-NI CPU hardware acceleration. Unless otherwise specified, We default to a security parameter of \(\lambda=128\) bits, with each table entry containing 256 bytes. For the machine learning evaluation, we use public datasets for language and recommendation models. For the language model, we train an LSTM on the Wikitext2 corpus [57]; the embedding table has 131,000 entries (Table I). For the Movielens [43] recommendation, we train a 2-layer neural network; the embedding table for the Movielens dataset has 27,000 entries I. For the Taobao recommendation [15], we train a 2 layer fully-connected model combining both user Fig. 7: Techniques used to co-design PIR + ML. a) Partial Batch Retrieval, b) splitting the table into a smaller hot table, and c) collocating frequently accessed entries. click-history and user features; the embedding table for Taobao has \(\sim\) 900,000 entries (Table I). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Application & Method & Model Quality & \begin{tabular}{c} Communication \\ (KB) \\ \end{tabular} & \begin{tabular}{c} PIR Latency \\ (ms) \\ \end{tabular} & \begin{tabular}{c} Throughput \\ (queries/s) \\ \end{tabular} & \begin{tabular}{c} Throughput \\ Speedup \\ \end{tabular} \\ \hline \multirow{2}{*}{Language Model} & CPU+PR & 92 ppl & 239 & 62 & 5.7 \\ \cline{2-6} & GPU+Co-design & 92 ppl & 425 & 8 & **1,230** \\ \hline \multirow{2}{*}{Movielens Recommendation} & CPU+PR &.785 auc & 154 & 30.6 & 44 \\ \cline{2-6} & GPU+Co-design &.785 auc & 52 & 82.4 & **4,200** \\ \hline \multirow{2}{*}{Taobao Recommendation} & CPU+PR &.595 auc & 2.8 & 160 & 8,000 \\ \cline{2-6} & GPU+Co-design &.595 auc & 5.4 & 150 & **256,000** \\ \hline \end{tabular} \end{table} TABLE II: Representative ML inference performance points for CPU PIR vs our system with GPU accelerated PIR and ML co-design. Inference latency and communication are fixed to be \(<200\) ms and \(<300\) KB respectively, while maximizing throughput. Fig. 8: Gains of GPU-DPF + co-design over a CPU implementation without PIR+ML co-design. Latency fixed to be \(<200\)ms and communication \(<500\)KB. Baseline model quality is shown in teal. For next word prediction, lower perplexity (ppl) is better; for recommendation models, higher area under curve (auc) is better. 1 kq/s = 1,000 queries per second. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Table Size & \begin{tabular}{c} Upload Communication (Bytes) \\ \end{tabular} & Strategy & Throughput (queries / s) & Latency (ms) \\ \hline \multirow{3}{*}{16K} & GPU & GPU & 60.347 & 3.2 \\ \cline{2-5} & 896 & CPU 1-thread & 22 & 9 \\ \cline{2-5} & & CPU 32-thread & 281 & 7.1 \\ \hline \multirow{3}{*}{64K} & \multirow{3}{*}{1024} & GPU & 15,258 & 18.5 \\ \cline{2-5} & & CPU 1-thread & 5 & 36 \\ \cline{2-5} & & CPU 32-thread & 688 & 2.9 \\ \hline \multirow{3}{*}{1M} & \multirow{3}{*}{1280} & GPU & 1358 & 1.4 \\ \cline{2-5} & & CPU 1-thread & 1.3 & 638 \\ \cline{1-1} \cline{3-5} & & CPU 32-thread & 21.2 & 36 \\ \hline \multirow{3}{*}{4M} & \multirow{3}{*}{1408} & GPU & 468 & 4.18 \\ \cline{2-5} & & CPU 1-thread &.78 & 2579.8 \\ \cline{1-1} \cline{3-5} & & CPU 32-thread & 12 & 160.1 \\ \hline \end{tabular} \end{table} TABLE III: Throughput / latency comparison of our GPU acceleration algorithm vs single and multi-core CPU implementations. Both use AES-128 as their PRF, and a security parameter of 128. The CPU DPF baseline is taken from [12] and is an optimized CPU implementation that uses AES-NI hardware intrinsics. Fig. 9: Throughput vs latency for our GPU acceleration strategy across different optimizations and table sizes. Our optimizations include eliminating redundancy, memory optimization, cooperative groups. 1 kq/s = 1,000 queries per second. ### _End-to-End System Performance Speedups_ We begin by showing the final system throughput improvement of combining GPU acceleration with co-design over using state-of-the-art CPU-based PIR for ML inference. Figure 8 shows the system performance numbers across different applications. In Figure 8, we fix a latency budget of \(<100\) ms and a communication budget of \(<300\) KB per inference. As seen, accelerating PIR with GPU and co-designing it with ML inference leads to significant throughput improvement at the same accuracy. We highlight some of the representative datapoints in a tabular format in Table II. Accelerating PIR with GPUs and co-designing PIR with ML obtains up to over \(100\times\) increase in throughput at the same model quality. ### _Detailed Analysis of Each Optimization_ Below, we evaluate each of the optimizations separately to show the isolated effects of each optimization we propose. #### Iv-C1 Effects of Each GPU Optimization Figure 9 shows an ablation of the throughput and the latency of our GPU-DPF acceleration with each successive optimization. As shown, each successive optimization increases the latency-throughput pareto frontier. Particularly, on smaller table sizes (\(<\) 64K entries, Figure 8(a)), we see that the base approach along with eliminating redundancy (**base+e**, red) is generally able to obtain fairly good performance, achieving 4-5\(\times\) throughput improvement over the baseline (**base**, black). However, on larger table sizes (\(>\) 64K entries, Figure 8(b)- 8(c)) **base+e** is unable to improve the throughput after some point (disconnected red line around \(2^{6}\)ms) as memory constraints limit the batch size. On these larger tables, our memory optimization (**base+e+m**, blue) provides a far better latency-throughput tradeoff. Finally, for extremely large table sizes (\(>4M\) entries, Figure 8(c)), cooperative groups is able to obtain significantly better latency (**base+e+m+c**, green). 
We additionally highlight some of these representative performance points in Table III #### Iv-C2 Performance Impact of Matrix Multiplication Fusion Table IV shows the performance benefits of fusing the subsequent matrix multiplication with the DPF expansion on the memory-efficient GPU-PIR acceleration strategy, across different table sizes. Generally, fusing and interleaving the two kernels offer significant (\(>1.5\times\)) speedups in throughput and latency as it allows more efficient scheduling of compute and memory operations. #### Iv-C3 Performance Impact of Embedding Entry Size We evaluate the impact of varying the size of the table entries on PIR performance. Recall that the size of the table entries affects the amount of work done in the subsequent matrix multiplication, and hence may affect throughput and latency. We show the impact of table entry size on latency and throughput in Figure 10. Generally, we find that we can retrieve tables that have entry sizes of up to 512 bytes with little to no performance degradation. This effect is helped significantly by fusing the DPF/matrix multiplication, as this allows us to interleave the memory operations required for the matrix multiplication along with PRF evaluations. We see increasing performance degradation with table entry sizes greater than 512 bytes. The minimal performance impact up to 512 bytes also illustrates the opportunity for the proposed co-location optimization. #### Iv-C4 Comparison against CPU We compare our GPU-PIR implementation against an optimized CPU implementation from Google Research [12]. Note that, Google Research's CPU implementation of DPFs uses AES-128 for its PRF, and utilizes AES-NI hardware intrinsics to accelerate PRF computation. Figure 11 compares the throughput attained by the memory-efficient GPU DPF acceleration strategy against a 1-threaded and 32-threaded CPU version, on different table sizes. Generally, using AES-128 as the PRF, our GPU implementation consistently attains \(>17\times\) speedup over the 32-threaded CPU implementation, which fully utilizes the entire multi-core CPU. With a different choice of PRF, in this case Salsa20 with 12 rounds, we see over \(100\times\) speedup over multithreaded CPU execution. We show the same data in Table III. Generally, our GPU-based PIR can accelerate PIR by more than an order of magnitude over the CPU. #### Iv-C5 PRF Evaluation We evaluate the performance impact of using different functions for the PRF when computing the DPF. Table V shows the results on a table of size 1M, using the memory-efficient DPF acceleration strategy, with a batch size of 512, and a security parameter of 128-bits. Generally, we find that lightweight PRFs can significantly improve the GPU-PIR performance over AES-128. For example, the Salsa20 stream ciphers with different numbers of rounds provide good performance while appropriate security. 
While Salsa20 with \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Table Size} & Throughput (queries/s) & Latency (ms) & Throughput (queries/s) & Latency (ms) & \multirow{2}{*}{Throughput Improvement} & \multirow{2}{*}{Latency Improvement} \\ & No Fusion & No Fusion & With Fusion & With Fusion & \\ \hline 16K & 195000 & 1.49 & 348000 &.9 & 1.8 \(\times\) & 1.6 \(\times\) \\ \hline 64K & 49000 & 5.8 & 89000 & 3.55 & 1.8 \(\times\) & 1.6 \(\times\) \\ \hline 1M & 2898 & 92 & 5578 & 56.5 & 1.9 \(\times\) & 1.6 \(\times\) \\ \hline 4M & 759 & 369 & 1395 & 226 & 1.8 \(\times\) & 1.6 \(\times\) \\ \hline \end{tabular} \end{table} TABLE IV: Impact of fusing and interleaving DPF expansion with matrix multiplication kernels, across different table sizes. Each entry in the table is 256 bytes. Fig. 10: Performance impact of table entry size on PIR performance, for a table size of 1,048,576. 8 rounds has been broken [21], Salsa20 with 12 and 20 rounds has no known attack. In particular, we believe that Salsa20 with 20 rounds will be a good choice as its variant, ChaCha20, is used as a standard in TLS. Thus, choice of PRF has significant performance implications and must be chosen depending on the security and performance requirements of the target application. ### _PIR + ML Co-Design_ Private on-device ML inference often requires the private retrieval of a batch of embeddings from the same table. We evaluate our techniques that co-design ML inference and batch PIR, and demonstrate how our co-design techniques significantly improve model quality vs system performance tradeoffs. #### V-D1 Computation vs Model Quality We show the tradeoff between computation and model quality given a fixed communication limit. Figure 12 shows this tradeoff across various applications when communication is fixed to be less than 300KB. As shown, the co-design with both a hot table and embedding entry collocation obtains up to 2-3\(\times\) improvement in computation at a fixed model quality. #### V-D2 Communication vs Model Quality We show the tradeoff between communication and model quality given a fixed computation limit. Figure 13 shows this tradeoff across various applications, with computation fixed to be less than 100,000 PRF calls per batched inference (with the exception of Taobao, where the computation limit is fixed to be less than 5,000,000 PRF calls per batched inference). Again, co-designing PIR with the ML model obtains up to 2-4\(\times\) improvement in communication at a fixed model quality. #### V-D3 Communication vs Computation We show the trade-off between computation and communication with the fixed model quality. Figure 14 shows this tradeoff across various applications, with model quality fixed to be within \(2\%\) of the full precision baseline. The co-design optimizations obtain significantly better tradeoffs than plain batch-PIR. #### V-D4 End-to-End Performance vs Model Quality Finally, we show the end-to-end throughput vs model quality across different applications, with communication fixed to be below 300KB and latency fixed to be below 100ms. Figure 15 shows the benefits of co-design on final throughput vs model accuracy, and demonstrates up to 2.5-4\(\times\) performance improvement. ## VI Related Work **Privacy-preserving Computation Techniques** The previous work on privacy-preserving ML investigated running ML models on an untrusted cloud using various secure computing techniques such as FHE [49, 59], MPC [51, 53, 72], and trusted execution environments (TEEs) [46, 47]. 
Yet, these studies mainly focused on protecting computation in convolutional neural networks (CNNs) without embedding tables. This work investigates how we can address privately accessing large embedding tables in recommendation and language models through PIR. ORAM [33, 38, 66, 69] with a TEE or a trusted third-party would be another way to protect embedding table accesses on an untrusted cloud. Here, we use PIR in order to allow accesses from many client devices without any secure hardware or trusted third-party. **Private Information Retrieval** PIR can be broadly categorized into single-server protocols based on homomorphic encryption [29, 35, 56] and two-server protocols based on DPFs. We use a two-server PIR protocol for its efficiency. The two-server PIR protocols may be run by two different cloud providers or by forming a consortium of multiple companies that need to provide privacy-preserving ML services to their users. Distributed point functions [24, 25, 36] are commonly used for efficient two-server PIR. An early DPF algorithm [24] demonstrated \(O(\sqrt{n})\) communication complexity and \(O(n)\) computation complexity. More recent advancements [25, 36] reduce the communication and computation costs to \(O(\log(n))\) and \(O(n)\), respectively. A major contribution of our work is the efficient implementation of the \(O(\log(n))\) DPF algorithm on a GPU. To our knowledge, this work represents the first thorough study of efficiently parallelizing DPFs on GPUs. Parallelizing DPFs on GPUs significantly improves the state of the art in DPF performance and makes DPFs far more practical in real-world use cases. **Batch Private Information Retrieval** Various approaches for batch PIR [18, 19, 45, 48, 67] have been proposed. These methods typically use a form of bucketing and hashing to reduce storage costs. Our work utilizes partial batch retrieval [67], which performs batch PIR with simple bucketing. This approach, which is generally more communication efficient than the ones proposed previously, comes at an expense: queries may be dropped if they fall in the same bucket. We show that the noise tolerance of ML allows the use of such probabilistic PIR protocols, and the careful co-design of batch PIR and ML can make this simple batch PIR practical with minimal accuracy loss. \begin{table} \begin{tabular}{|c|c|c|c|} \hline PRF & Function Type & Latency (ms) & Throughput (queries/s) \\ \hline AES-128 & Block Cipher (Ctr Mode) & 591 & 965 \\ \hline Salsa20-20 & Stream Cipher & 174 & 3640 \\ \hline Salsa20-12 & Stream Cipher & 113 & 5598 \\ \hline SHA-256 & Hash (HMAC) & 659 & 921 \\ \hline SipHash & PRF & 82.3 & 7447 \\ \hline HighwayHash & PRF & 320 & 1973 \\ \hline \end{tabular} \end{table} TABLE V: Performance evaluation of memory-efficient GPU DPF with different PRF functions, on a table of size 1,048,576, with batch size 512, and a security parameter of 128 bits. Fig. 11: Comparison of throughput performance attained by GPU DPF acceleration against an optimized CPU baseline. 1 kq/s = 1,000 queries per second. Fig. 12: Pareto curve of tradeoff between computation and accuracy with communication fixed at less than 300KB per inference. Ppl: lower is better; auc: higher is better. The full precision baseline model quality is shown as teal line. Fig. 13: Pareto curve of tradeoff between communication and accuracy with computation fixed at less than 100K PRFs (with the exception of Taobao, where computation is fixed at less than 5M PRFs). Ppl: lower is better; auc: higher is better. The full precision baseline model quality is shown as teal line. Fig. 14: Pareto curve of tradeoff between communication and computation with model accuracy fixed to be within 2% of the baseline. Fig. 15: System throughput vs model quality with and without co-design across applications on a single V100 GPU. Communication is fixed to be \(<300\)KB per inference, and latency to be \(<100\)ms. Ppl: lower is better; auc: higher is better. 1 kq/s = 1,000 queries per second. **On-device ML** On-device ML has garnered significant attention in recent years and spans applications including recommendation [39, 44], speech recognition [1], translation [71], etc. Our work considers on-device ML for privacy, and enables the use of large server-side embedding tables for on-device inference. ## VII Conclusion We present a system for efficiently and privately serving embeddings for on-device ML applications. Our main contribution is a system that employs: 1) efficient GPU acceleration for DPF-based PIR schemes, and 2) co-design of ML with batch PIR. Together, on various on-device ML applications including recommendation and language modeling, our system on a single V100 GPU can serve up to \(100,000\) queries per second--a \(>100\times\) speedup over a naively implemented system on a multi-core CPU--while maintaining model accuracy, and limiting communication to be within \(300\)KB and response latency to \(<100\)ms, respectively.
2307.05739
Unveiling the connectivity of complex networks using ordinal transition methods
Ordinal measures provide a valuable collection of tools for analyzing correlated data series. However, using these methods to understand the information interchange in networks of dynamical systems, and uncover the interplay between dynamics and structure during the synchronization process, remains relatively unexplored. Here, we compare the ordinal permutation entropy, a standard complexity measure in the literature, and the permutation entropy of the ordinal transition probability matrix that describes the transitions between the ordinal patterns derived from a time series. We find that the permutation entropy based on the ordinal transition matrix outperforms the rest of the tested measures in discriminating the topological role of networked chaotic R\"ossler systems. Since the method is based on permutation entropy measures, it can be applied to arbitrary real-world time series exhibiting correlations originating from an existing underlying unknown network structure. In particular, we show the effectiveness of our method using experimental datasets of networks of nonlinear oscillators.
Juan A. Almendral, I. Leyva, Irene Sendiña-Nadal
2023-07-11T19:07:17Z
http://arxiv.org/abs/2307.05739v1
# Unveiling the connectivity of complex networks using ordinal transition methods ###### Abstract Ordinal measures provide a valuable collection of tools for analyzing correlated data series. However, using these methods to understand the information interchange in networks of dynamical systems, and uncover the interplay between dynamics and structure during the synchronization process, remains relatively unexplored. Here, we compare the ordinal permutation entropy, a standard complexity measure in the literature, and the permutation entropy of the ordinal transition probability matrix that describes the transitions between the ordinal patterns derived from a time series. We find that the permutation entropy based on the ordinal transition matrix outperforms the rest of the tested measures in discriminating the topological role of networked chaotic Rossler systems. Since the method is based on permutation entropy measures, it can be applied to arbitrary real-world time series exhibiting correlations originating from an existing underlying unknown network structure. In particular, we show the effectiveness of our method using experimental datasets of networks of nonlinear oscillators. ## I Introduction Time series analysis has garnered significant research attention in recent decades. However, the exponential growth in data generation from various social, technological, and natural sources observed in the last years has posed a challenge for researchers seeking to extract valuable information from these datasets. Among the array of new tools developed for this purpose, the ordinal methods derived from the seminal work of Bandt and Pompe (1998) have emerged as particularly intriguing. In this approach, the original data series undergoes a process of coarse-graining, wherein it is replaced by a reduced set of symbols representing the order permutations of consecutive data points. Statistical properties and correlations of those ordinal permutation series effectively capture much of the dynamical information inherent to the original system. Moreover, its analysis is faster, computationally affordable and more robust to noise than raw data analysis. As a result, the applications of ordinal methods continue to expand (2008), encompassing diverse fields such as neuronal (2008) and brain dynamics (2008), laser dynamics (2008), and sports data analysis (2008), among others. Recently, the field of ordinal methods has advanced to incorporate "ordinal transition networks" (OTN). Initially proposed in Ref. (2008), this concept introduces an additional layer of temporal correlation to the analysis by examining the statistics of ordinal patterns and their transitions. The time series is now represented as a network, where each ordinal pattern corresponds to a node, and the possible transitions among them are the links. This innovative tool has demonstrated its potential in detecting subtle dynamical changes (2008; 2009), and its associated ordinal transition entropy has proven to be more robust than standard permutation entropy when dealing with noisy signals (2008); (2009). Furthermore, the statistics of self-transitions within an OTN (2008; 2009) offers an effective means of characterizing diverse time series dynamics. Notably, OTN complexity can accurately reproduce the results of Lyapunov exponents even for small embedding sizes (2008). 
The versatility of this method extends to various applications, such as distinguishing between different consciousness states (2008), analyzing EEG data (2008), investigating stock markets (2008); (2009), and examining transportation data (2009). In combination with complex network techniques for nonlinear time series analysis, such as visibility or recurrence networks (2008); (2009); (2009); (2010); (2011); (2012), this approach presents a valuable addition to the set of available tools. Ordinal methods offer good potential for various applications, particularly in finding correlations between time series. The multivariate extension of these methods enables the synthesis of information from multiple data sources, resulting in a unified set of symbols (2008); (2008); (2008). This approach proves useful in detecting phase transitions within the collective state of small groups of coupled chaotic nodes. Furthermore, by incorporating delays into the analysis, multivariate ordinal methods can unveil the directionality of the coupling relationships (2008). Recent developments have introduced the concept of an ordinal network based on pattern co-occurrence between time series (2008). This approach facilitates the inference of correlations between different time series. Moreover, the notion of _ordinal synchronization_(2010) demonstrates the capability to detect phase and anti-phase synchronization even in noisy real data. These examples highlight the significant potential of ordinal methods in studying dynamical ensembles and networks. However, it is important to note that most of these applications currently remain confined to proof-of concept studies involving small networks [24; 25]. In addition, many of these approaches rely on multivariate pairwise correlations to extract information [4]. Nevertheless, it is crucial to realize that each element within a networked ensemble undergoes an information flux that alters its dynamics, effectively encoding valuable information regarding its topological role and the collective state. Ordinal methods serve as an ideal tool for unveiling these dynamical changes, enabling the creation of centrality rankings for nodes without solely relying on pairwise correlations [30; 31]. Building upon this premise, our work extends the application of these methods to analyze the synchronization process in complex networks. Our findings demonstrate that ordinal transition methods outperform conventional ordinal patterns' statistics when it comes to detecting subtle dynamical changes and discriminating nodes based on their topological roles. These initial results, using synthetic networks of chaotic Rossler systems and data from experiments with nonlinear electronic circuits, illuminate new possibilities for using ordinal methods in various applications, including functional brain data analysis [4], power grids, mobility networks, or any other domains involving the close interplay between structural and functional relationships within large-scale dynamical ensembles. 
## II Model and Methods ### Model We consider a network of \(N\) identical Rossler dynamical systems [32] whose dynamics are governed by the following equations: \[\dot{\mathbf{x}}_{i}=\mathbf{f}(\mathbf{x}_{i})-\tilde{\sigma}\sum\mathcal{L }_{ij}\mathbf{h}(\mathbf{x}_{j}), \tag{1}\] with \(i=1,\ldots,N\); \(\mathbf{x}_{i}=(x_{i},y_{i},z_{i})\) the vector state of the node \(i\); \(\mathbf{f}\left(\mathbf{x}\right)\) and \(\mathbf{h}\left(\mathbf{x}\right):\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\), being \(\mathbf{f}(\mathbf{x})=[-y-z,x+ay,b+z(x-c)]\) the vector flow of the Rossler system, and \(\mathbf{h}(\mathbf{x})=[0,y,0]^{T}\) the coupling function. We set \(a=b=0.2\) and \(c=9.0\) to get a phase-coherent chaotic attractor. The coefficients \(\mathcal{L}_{ij}=k_{i}\delta_{ij}-a_{ij}\) are the elements of the Laplacian matrix whose adjacency matrix \(A:=(a_{ij})\) encodes the connectivity among the nodes of the network: \(a_{ij}=1\), if \(i\) and \(j\) are connected, and \(a_{ij}=0\) otherwise. Thus the degree of node \(i\) is \(k_{i}=\sum_{j}a_{ij}\). The constant \(\tilde{\sigma}=\frac{\sigma}{k_{\max}}\) is the coupling strength normalized by the maximum degree present in the network, that is, \(k_{\max}=\max(k_{i})\). This normalization is introduced to properly compare observables between different network realizations [33]. The system of \(N\) equations described by (1) has been numerically integrated using a Runge-Kutta method of 4th order with a time discretization of \(0.005\). In all simulations, the time evolution is extended up to \(12,000\) time units, discarding the first half, which is considered a transient. In ordinal methods, how the raw data is projected into an ordinal series depends on the particularities of the data, their sampling, or their continuous or discrete nature, without affecting the rest of the procedure [9]. In our case, to extract information about the temporal organization of each nodal dynamics \(\mathbf{x}_{i}(t)\), we first computed the two-dimensional Poincare section \(\mathcal{P}\equiv\{[x_{i}(t_{m}),z_{i}(t_{m})]\in\mathbb{R}^{2}|\,\dot{y}_{i }(t_{m})=0,\ddot{y}_{i}(t_{m})>0\}\)[34]. This allows us to map the whole attractor \(\mathbf{x}_{i}\) of node \(i\) into the one-dimensional time series \(\mathcal{S}_{i}\equiv\{y_{i}(t_{m}),m=1,\ldots,M\}\), generated at the times \(t_{m}\) the attractor crosses the section \(\mathcal{P}\). Then, we construct the order relations of \(D\) successive data points in the sampled time series \(\mathcal{S}_{i}\) in the following manner. Once the terms in the sequence \(\mathcal{S}_{i}\) are split into disjoint blocks of size \(D\), we create a symbolic sequence in which each element is replaced by a number in \([1,\ldots,D]\), corresponding to its relative ranking respect to its \(D-1\) neighbours in the block. Therefore, each block is mapped into one of the \(D!\) possible permutations in which \(D\) different elements can be arranged. We refer to these permutations as ordinal patterns, using the notation \(\pi_{\ell}\) with \(\ell=1,\ldots,D!\). As an example, let us consider the series \(\{2.3,3.4,-2.7,0.4,1.6,2.9,-2.8,-0.5,3.1,2.4,\ldots\}\). We first split the series into disjoint blocks of size \(D=3\): \(\{2.3,3.4,-2.7\}\), \(\{0.4,1.6,2.9\}\), \(\{3.1,-0.5,3.8\}\), \(\{2.4,\ldots\}\). Then, we derive the ordinal pattern for each block. It can be done from maximum to minimum or the other way around. 
In the first case, the ordinal patterns would be \(\{2,1,3\}\), \(\{3,2,1\}\), \(\{2,3,1\}\), \(\ldots\), which are arbitrarily denoted as \(\pi_{5}\), \(\pi_{1}\), \(\pi_{2}\), \(\ldots\) (see Fig. 2 for our notation of the six possible permutations). Finally, we define the probability of occurrence of a given pattern \(\pi_{\ell}\) as \(p_{\ell}=\#(\pi_{\ell})/L\), being \(\#(\pi_{\ell})\) the number of times the ordinal pattern \(\pi_{\ell}\) appears in the sequence \(\mathcal{S}_{i}\) and \(L=\lfloor M/D\rfloor\) the total number of blocks of size \(D\) in which we divide the series \(\mathcal{S}_{i}\) (\(\lfloor\,\rfloor\) is the floor function). Note that this procedure is only meaningful if \(M\gg D!\). ### Methods In this Section, we present the methods employed to characterize the statistical complexity of a nodal dynamics. Our ultimate objective is to establish a relationship between the dynamical behaviour of each node and its structural connectivity within the network. To achieve this, we compare the ordinal permutation entropy based on the probability distribution of ordinal patterns and the ordinal transition entropy based on the transition probabilities between consecutive non-overlapping ordinal patterns. Permutation entropy has previously been identified as a reliable indicator of the topological role of a node within a dynamical network [30; 31]. However, our study reveals that analysing the transition probabilities between ordinal patterns offers a more effective and informative measure for assessing a node's degree centrality. Ordinal permutation entropy Given the probability distribution of the ordinal patterns \(\pi_{\ell}\) of size \(D\), with \(\ell=1,\ldots,D!\), we define the normalized permutation entropy as the Shannon entropy evaluated on the ordinal pattern probability distribution: \[\mathcal{H}_{0}=-\frac{1}{\ln D!}\sum_{\ell}p_{\ell}\ln p_{\ell}, \tag{2}\] with the criterion \(0^{0}=1\) to deal the case \(p_{\ell}=0\). According to Bandt and Pomp [1], \(3\leq D\leq 7\) values provide reliable information on the natural complexity of time series coming from chaotic dynamical systems as long as \(M\gg D!\). However, unobserved ordinal patterns have been reported in chaotic dynamical systems, no matter how large the time series is, due to the underlying temporal correlations [35]. #### ii.1.2 Ordinal transition entropy In addition to the probability \(p_{\ell}\) of each ordinal pattern \(\pi_{\ell}\), the transition probability \(p_{\ell m}\), from the ordinal pattern \(\pi_{\ell}\) to \(\pi_{m}\), may reveal information into the finer temporal organization of a dynamical system [7]. We define the ordinal transition probability (OTP) matrix \(T:=(p_{\ell m})\) as \[p_{\ell m}=\frac{\#(\pi_{\ell},\pi_{m})}{\#(\pi_{\ell})} \tag{3}\] being \(\#(\pi_{\ell},\pi_{m})\) the number of times the pair \(\pi_{\ell}-\pi_{m}\) consecutively occurs in the time series. Note that, in case \(\#(\pi_{\ell})=0\) for some pattern \(\pi_{\ell}\), we can define \(p_{\ell m}=0\). The total number of blocks \(L\) of size \(D\) must now be \(L\gg D!^{2}\) so that the OTP matrix \(T\) is statistically significant. Equation (3) is a proper stochastic matrix whose weights encode an OTN among ordinal patterns, including self-transitions, into which the time series of each nodal dynamics can be mapped. Hence, the complexity of this OTN will depend on the diversity of both ordinal patterns and transitions occurring among them. 
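To make the construction above concrete, a minimal Python sketch can map a scalar series onto ordinal symbols and then compute the permutation entropy of Eq. (2) and the OTP matrix of Eq. (3). A logistic-map series is used here purely as a stand-in for the Poincaré-sampled data \(\mathcal{S}_{i}\), and the labelling of the \(D!\) patterns is arbitrary (as it is in the text):

```python
import numpy as np
from math import factorial
from itertools import permutations

def ordinal_symbols(series, D=3):
    """Split the series into disjoint blocks of size D and rank each block from
    maximum (rank 1) to minimum (rank D), as in the worked example above."""
    pats = list(permutations(range(1, D + 1)))        # arbitrary labelling of the D! patterns
    blocks = [series[i:i + D] for i in range(0, len(series) - D + 1, D)]
    ranks = [tuple(np.argsort(np.argsort(-np.asarray(b))) + 1) for b in blocks]
    return [pats.index(r) for r in ranks]

def permutation_entropy(symbols, D=3):
    """Normalized permutation entropy H_0 of Eq. (2)."""
    p = np.bincount(symbols, minlength=factorial(D)) / len(symbols)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(factorial(D)))

def otp_matrix(symbols, D=3):
    """Ordinal transition probability matrix T = (p_lm) of Eq. (3)."""
    n = factorial(D)
    T = np.zeros((n, n))
    for a, b in zip(symbols[:-1], symbols[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

# Illustrative series: logistic-map iterates standing in for the sampled y_i(t_m).
x, series = 0.4, []
for _ in range(3000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

s = ordinal_symbols(series, D=3)
print("H_0 =", round(permutation_entropy(s), 3))
print("T =\n", otp_matrix(s).round(2))
```

Zero rows or zero entries of \(T\) correspond to forbidden patterns or unobserved transitions of the kind discussed for the Rössler dynamics below.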
Since \(\sum_{m}p_{\ell m}=1\), we can define the node permutation entropy \(\mathcal{H}_{\pi_{\ell}}\) associated with the ordinal pattern \(\pi_{\ell}\), a node of the OTN, which quantifies the randomness of the local transitions from the ordinal pattern \(\pi_{\ell}\) to any other pattern [36; 37], as \[\mathcal{H}_{\pi_{\ell}}=-\frac{1}{lnD!}\sum_{m=1}^{D!}p_{\ell m}\ln p_{\ell m}. \tag{4}\] We characterize the transitional complexity of the OTN at the global level with a _network_ permutation entropy obtained as the average of the node permutation entropies given by Eq. (4). Depending on how the average is performed, we consider using either the first moment of the distribution of the \(\mathcal{H}_{\pi_{\ell}}\) values as in [36] \[\mathcal{H}_{\mathrm{T}}=\frac{1}{D!}\sum_{\ell=1}^{D!}\mathcal{H}_{\pi_{\ell}} \tag{5}\] or, alternatively, as defined in [38]: \[\hat{\mathcal{H}}_{\mathrm{T}}=\sum_{\ell=1}^{D!}p_{\ell}\mathcal{H}_{\pi_{ \ell}} \tag{6}\] which characterizes the weighted average (over the stationary probabilities \(p_{\ell}\) of each pattern \(\pi_{\ell}\)) of the diversity of consecutive ordinal patterns. Other measures to characterize ordinal transition networks can be found in Refs. [39; 22]. #### ii.1.3 Synchronization measures In addition to characterizing the nodal dynamics by the randomness of the ordinal patterns and their transitions, we evaluate the dynamical network's collective state for increasing coupling values since the chosen networked system (1) is known to evolve from a totally incoherent state when \(\sigma=0\) to a regime where the phases are locked while the amplitudes vary chaotically and uncorrelated, up to a regime of complete synchronization for very large \(\sigma\). We compute the time-averaged phase order parameter \[R=\frac{1}{N}\langle|\sum_{j=1}^{N}\mathrm{e}^{\mathrm{i}\theta_{j}}|\rangle_{t} \tag{7}\] with the phase \(\theta_{j}\) of the \(j\)-oscillator defined as \(\theta_{j}=\arctan(y_{j}/x_{j})\)[40], and synchronization error \[E=\frac{2}{N(N-1)}\langle\sum_{i\neq j}\|\mathbf{x}_{i}-\mathbf{x}_{j}\| \rangle_{t}, \tag{8}\] which account for the level of phase (\(0\leq R\leq 1\)) and total synchronization (\(E\geq 0\)) respectively. When the network is in complete synchrony, \(R=1\) and \(E=0\). Here, \(\langle\rangle_{t}\) stands for the time average along a sufficiently large time series. ## III Results ### Star network Let us start with a star configuration of \(N\) coupled Rossler systems. This network topology has \(N-1\) nodes of degree \(k_{\mathrm{leaf}}=1\) connected to a central one, the hub, with \(k_{\mathrm{hub}}=N-1\), thus, offering two types of nodal dynamics with the maximum topological distance possible. Figure 1 illustrates the transition to synchronization of a \(N=9\) star as the coupling strength \(\sigma\) increases. Along this route, the initially identical dynamics exhibited by the hub and the leaves start to differentiate due to the coupling interaction. This differentiation becomes evident when examining the OTP matrix \(T\) for \(D=3\), which has six ordinal patterns (corresponding to the following permutations: \(\pi_{1}\equiv 321\), \(\pi_{2}\equiv 312\), \(\pi_{3}\equiv 231\), \(\pi_{4}\equiv 132\), \(\pi_{5}\equiv 213\), \(\pi_{6}\equiv 123\)). The colormap panels depict the OTP matrices \(T\) of the hub (b-e) and one of the leaves (f-i), representing four different values of \(\sigma\) corresponding to various synchronization stages. 
When \(\sigma=0\) (panels b, f) and \(\sigma=0.2\) (panels e, i), the hub and the leaves' OTP matrices exhibit the same color coding. This similarity arises because they describe the transition probabilities between ordinal patterns of the same intrinsic dynamics, given by the flow \(\mathbf{f}\) in Eq. (1) when the systems are uncoupled or coupled but synchronized. In these panels, white and red dots indicate unobserved ordinal transitions (\(p_{\ell m}=0\)). These transitions may be absent either because the chosen chaotic dynamics include one forbidden ordinal pattern, white dots (caused by the forbidden pattern \(\pi_{1}\)), or because unreachable ordinal patterns exist from certain initial states, red dots (for instance, a \(\pi_{3}\) pattern cannot follow a \(\pi_{5}\) pattern). As soon as the hub and the leaf interact (panels c, g and d, h), the colormap changes differently for each of them. New transitions appear while others disappear. Notably, all ordinal transitions become nearly equiprobable for the hub, which is indicative of noisy dynamics--a characteristic feature. To closely inspect how those transitions between ordinal patterns evolve along the synchronization process for each type of node in a star graph, we plot in each panel of the top row of Fig. 2 the node permutation entropy \(\mathcal{H}_{\pi_{\ell}}\) of each ordinal pattern \(\pi_{\ell}\) for the hub and one of the leaves and the corresponding ordinal pattern frequencies \(p_{\pi_{\ell}}\) at the bottom row as a function of the coupling strength \(\sigma\). The most remarkable differences between hub and leaf come from those transitions starting at patterns \(\pi_{1}\), \(\pi_{4}\), and \(\pi_{5}\), since the gap between the node permutation entropies \(\mathcal{H}_{\pi}\) between hub and leaf is the largest, while for the rest of patterns is less pronounced. In particular, the pattern \(\pi_{1}\), which is forbidden in the isolated dynamics, not only emerges due to the interaction but also becomes Figure 1: OTP matrices for a star graph of \(N=16\) identical Rössler systems along the route to synchronization. (Top panel) Phase order \(R\) (left axis) and synchronization error \(E\) (right axis) as a function of the coupling strength \(\sigma\). (Color panels) OTP matrices \(T\), with ordinal patterns \(\pi_{\ell}\) (\(\ell=1,\ldots,6\) since \(D=3\)), of the hub (top row) and one of the leaves (bottom row), for the coupling values marked with dotted lines in the top panel along the synchronization process. Ordinal transitions with zero probability are marked with dots: in white, those caused for \(\pi_{1}\) being a forbidden ordinal pattern, and in red, the transitions that, despite being between existing ordinal patterns, they actually do not occur. Time series length \(M=1000\). Rössler parameters: \(a=b=0.2\) and \(c=9.0\). much more entropic in the hub's dynamics than in the leaves. In addition, note the differences in the probability frequency \(p_{\pi}\) of each pattern that will have an effect on the network permutation entropy of the OTN as defined in Eq. (6). The primary objective of this work is to evaluate whether an entropic measure based on the information encoded in the OTN can outperform the predictive power of the entropic quantifiers based on just the probability distribution of the ordinal patterns. To examine this, we compare in Fig. 3 how the ordinal permutation entropy \(\mathcal{H}_{0}\) [Eq. (2)] and the network permutation entropies \(\mathcal{H}_{\mathrm{T}}\) [Eq. 
(5)] and \(\hat{\mathcal{H}}_{\mathrm{T}}\) [Eq. (6)] differentiate between the hub and leaf dynamics for two star networks of \(N\)=9 and \(N\)=31 nodes. The network permutation entropy \(\mathcal{H}_{\mathrm{T}}\) (panel (b)) effectively separates the hub and leaf dynamics right from the onset of phase synchronization, and maintains this distinction over a broader range of coupling strengths compared to the two other entropies. This is linked to the results shown in Fig. 2, in which those patterns with the greatest differences between hub and leaf node permutation entropies (\(\pi_{1}\), \(\pi_{4}\) and \(\pi_{5}\)) are those for which the probabilities of occurrence are smaller than for the rest (\(\pi_{2}\), \(\pi_{3}\), and \(\pi_{6}\)). Note that the scale is not the same for all the panels. Consequently, the weighted version of the network permutation entropy, \(\hat{\mathcal{H}}_{\mathrm{T}}\), is biased by the most frequent patterns \(\pi_{2}\), \(\pi_{3}\), and \(\pi_{6}\) which are the ones with the most similar hub and leaf node permutation entropies and there, less sensitive to distinguish between the nodes' different roles in the collective dynamics. Furthermore, a noteworthy observation is that the differentiation in the \(\mathcal{H}_{T}\) of the hub is more pronounced, and occurs at a lower coupling strength, in the case of the larger star with \(N=31\), compared to the smaller one with \(N=9\). On the other hand, the values for the leaf nodes in both stars are similar, which is expected as they have the same degree, \(k_{\mathrm{leaf}}=1\). This finding implies that the network permutation entropy \(\mathcal{H}_{T}\) has the potential for effectively discerning topological roles within more complex ensembles, as we will explore in the next Section. ### Scale-free network Once we have evidence that the network permutation entropy \(\mathcal{H}_{T}\) can uncover the information stored in the OTN and discriminate the different roles that nodes have in star networks, we move forward to test this measure in the more challenging task of analyzing the synchronisation process of a scale-free network. Precisely, we consider the network dynamics of \(N=300\) nodes, as described by Eq. (1), whose connectivity follows a scale-free degree distribution [41]. Given a coupling value \(\sigma\), for each node \(i\) we compute the corresponding ordinal permutation entropy \(\mathcal{H}_{0}^{(i)}\) and the network permutation entropy \(\mathcal{H}_{\mathrm{T}}^{(i)}\). Since we expect that the nodes with the same degree \(k\) will have the same dynamical role within the network, we define a \(k\)-class average for the network permutation entropies as [30]: \[\langle\mathcal{H}_{\mathrm{T}}\rangle_{k}=\frac{1}{N_{k}}\sum_{\{i|k_{i}=k\} }\mathcal{H}_{\mathrm{T}}^{(i)}, \tag{9}\] where \(N_{k}\) is the number of nodes with degree \(k\) and \(\langle\rangle_{k}\) is just to denote how the measure has been obtained as an ensemble average of the given measure at the node level restricted to nodes with the same connectivity \(k\). Similarly, we define a \(k\)-class average for the ordinal permutation entropies of those nodes with the same degree: \(\langle\mathcal{H}_{0}\rangle_{k}\). The results are presented in Fig.4, which compares \(\langle\mathcal{H}_{0}\rangle_{k}\) (a,c) and \(\langle\mathcal{H}_{\mathrm{T}}\rangle_{k}\) (b,d). 
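As a small illustration of the \(k\)-class average of Eq. (9), the following sketch groups per-node network permutation entropies by node degree; the inputs `H_T_node` and `degrees` are placeholders assumed to be already computed for each node.

```python
# Sketch of the k-class average of Eq. (9): group per-node entropies by degree.
import numpy as np

def k_class_average(H_T_node, degrees):
    H_T_node = np.asarray(H_T_node, dtype=float)
    degrees = np.asarray(degrees)
    return {int(k): H_T_node[degrees == k].mean() for k in np.unique(degrees)}

# Toy usage with made-up values for a 6-node graph.
print(k_class_average([0.71, 0.69, 0.80, 0.82, 0.79, 0.95], [1, 1, 2, 2, 2, 5]))
```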
It is clear that the net Figure 2: Evolution of the node permutation entropies \(\mathcal{H}_{\pi_{\ell}}\) (top row) and the corresponding probabilities \(p_{\pi_{\ell}}\) (bottom row) of each ordinal pattern \(\pi_{\ell}\), for \(D=3\), as a function of the coupling \(\sigma\) (crosses for the hub and dots for one of the leaves). Notice that the scale is not the same for all the panels at the bottom row. In the upper right corner of the panels in the top row, it is shown schematically with three black dots the permutation of the corresponding ordinal pattern (for instance, \(\pi_{2}\) is 312 and \(\pi_{3}\) is 231). Data is for the same \(N=16\) star of Rössler systems and parameters as in Fig. 1. work permutation entropy surpasses the ordinal permutation entropy in its ability to sort nodes according to their degree. Upon increasing the normalized coupling strength \(\sigma/k_{\text{max}}\), both entropies exhibit a distinct separation based on node degrees. However, the differences between \(k\)-classes are more pronounced in the case of \(\langle\mathcal{H}_{T}\rangle_{k}\). It is worth noting that, for weak values of the coupling network, hubs exhibit higher entropy values, similar to the behavior of the central node of a star network. However, as the synchronization progresses further, the ranking of the degree classes reverses. This change in behaviour throughout the synchronization process reflects an interesting fact: in weakly coupled networks, highly connected nodes perceive the information from the network as a source of noise, thereby increasing their entropy above that of the low-connected nodes. However, beyond this point, the highly connected nodes take the lead in driving the transition to coherence, while the other nodes remain unsynchronized [42; 43], resulting, as a consequence, in an exchange of entropy trends. The results shown in Figs. 4(b,d) shed light on understanding this entropy-based centrality ranking. We plot the ordinal permutation entropy (b) and the network permutation entropy (d) as a function of \(k\) for various coupling values. The entropies of the degree classes demonstrate a quasi-linear relationship with \(k\), displaying a positive slope for weak coupling (solid lines) and a negative slope for coupling strengths close to the system's synchronization (dashed lines). Therefore, network permutation entropy measures stand out as a superior choice. Consequently, a centrality ranking can be established solely based on this entropy without prior knowledge of the underlying structure or costly pairwise computations of the observed time series. To assess the method's validity in a more realistic environment with available ground truth structural information, we analysed the experimental datasets of networks of nonlinear electronic circuits provided by Ref. [44]. These datasets comprise the time series of the output voltage of \(N=28\) electronic circuits coupled in 20 different network configurations and monitored along their synchronization process for 100 coupling levels, ranging from disconnection (isolated nodes) to values producing a network state of complete synchrony. Please refer to the reference [44] for a full description of the experiments. Therefore, these experimental datasets provide the ideal testbed for our inference method and to predict the circuits connectivity by means of the network permutation entropy of each timeseries' circuit. 
In order to do so, we choose a weak coupling condition (level 9 over 100) and, for only one of the network configurations [the one that is used as a ground truth reference, plotted in Fig. 5 (a)], we calculate the average \(k\)-class network permutation entropy \(\langle\mathcal{H}_{\text{T}}\rangle_{k}\). The output of this calibration procedure is a function that maps the node degree classes of the network used as a ground truth and the corresponding assigned network permutation entropies. One possible way is to produce a piecewise function \(k_{a}(\mathcal{H}_{\text{T}})\) such that the sequence of intervals are defined by interpolating the entropies measured in the experiment used as calibration for the degrees \(k\) and \(k+1\), that is, \(T_{H}(k)=[\langle\mathcal{H}_{\text{T}}\rangle_{k+1}-\langle\mathcal{H}_{ \text{T}}\rangle_{k}]/2\) for \(k=1,\ldots,k_{\text{max}}-1\). Now, for each node \(i\) in any network different from the one used as a reference, we blindly assign a degree \(k_{a}\) as a function of their dynamics using the following map: \[k_{a}^{i}=\left\{\begin{array}{lcl}1&\text{if}&\mathcal{H}_{\text{T}}^{i}<T _{H}(1)\\ k&\text{if}&T_{H}(k)<\mathcal{H}_{\text{T}}^{i}<T_{H}(k+1);\quad k=2,\ldots,k_{ \text{max}}-1\\ k_{\text{max}}&\text{if}&\mathcal{H}_{\text{T}}^{i}>T_{H}(k_{\text{max}}-1) \end{array}\right. \tag{10}\] Since the real degree \(k_{r}^{i}\) of the node \(i\) is available in the dataset, we can compare the predicted value \(k_{a}^{i}\) with the real one. In Fig. 5(b), we plot the assigned degree versus the real one averaged for all the nodes in the 19 networks. Notice that these networks are very small and relatively sparse, with a maximum degree \(k_{\text{max}}=7\) and, therefore, the degree sequence spans a much narrower interval than in the SF networks used in the simulations shown in Fig. 4. Remarkably, even in this degree-constrained Figure 3: Comparison between (a) the ordinal permutation entropy \(\mathcal{H}_{0}\), and (b) the weighted \(\hat{\mathcal{H}}_{\text{T}}\) and (c) unweighted \(\mathcal{H}_{\text{T}}\) network permutation entropies along the route to synchronization, for the hub (blue crosses) and one of the leaves (red dots), of a star graph of size \(N=9\) (dotted curves) and \(N=31\) (solid curves). Insets show, in a log-linear scale, the absolute difference between the entropies of hub and leaf (\(\Delta\mathcal{H}_{0}=\mathcal{H}_{0_{\text{hub}}}-\mathcal{H}_{0_{\text{ leaf}}}\), and the same for \(\Delta\hat{\mathcal{H}}_{\text{T}}\) and \(\Delta\mathcal{H}_{\text{T}}\)). scenario and despite the noise inherent to an experimental environment, we obtain that for the \(91\%\) of the nodes \(|k_{r}^{i}-k_{a}^{i}|\leq 1\), constituting a very high confidence level. ## IV Conclusions Ordinal measures provide a valuable collection of tools for analyzing correlated data series. However, the use of these methods to understand information interchange in coupled networks and the interaction between dynamics and structure during the synchronization process remains relatively unexplored. In this study, using networks of coupled Rossler systems in chaotic regime, we assessed the performance of the standard ordinal permutation entropy \(\mathcal{H}_{0}\) compared to the network permutation entropy \(\mathcal{H}_{T}\), which captures information about transitions between ordinal patterns, and applied the proposed methodology to infer the connectivity of experimental datasets of networks of nonlinear circuits. 
Whereas there exist other measures, such as statistical permutation complexity [30] and ordinal structurality [31], which have demonstrated their usefulness as proxies for degree distributions, our findings highlight the ordinal transition entropy as a more effective method for distinguishing topological roles, and producing more satisfactory outcomes, particularly for lower embedding dimensions. Many methods focused on the structure-function relationship are primarily intended to infer the detailed connectivity network, down to the level of the individual links, from time series. However, in many cases, knowledge of centrality roles alone is sufficient for designing successful interventions in the dynamics. Therefore, we Figure 4: Comparison between the \(k\)-class ordinal permutation entropy \(\langle\mathcal{H}_{0}\rangle_{k}\) (a,c) and the \(k\)-class network permutation entropy \((\mathcal{H}_{\mathrm{T}})_{k}\) (b,d) for heterogeneous scale-free networks of \(N=300\) nodes and mean degree 4: (a,b) as a function of the normalized coupling strength \(\frac{\sigma}{k_{max}}\), for several values of degree \(k\) class; (c,d) as a function of \(k\), for several values of \(\sigma\). In panel (d), solid lines refer to weak coupling values while dashed ones refer to couplings favoring a state close to synchronization. The synchronization error \(E\) (rescaled for clarity) and the Kuramoto parameter \(R\) have been added in panel (a) as a reference. Each curve is the result of averaging over 50 network instances. Shaded bands indicate the confidence interval around the mean value computed as three times the standard error of the mean. Figure 5: Inference of the nodes’ degree of networks of electronic circuits based on the \(k\)-class network permutation entropy \(\langle\mathcal{H}_{\mathrm{T}}\rangle_{k}\) of the timeseries reported in Ref. [44]. (a) Structure connectivity of the electronic circuit network used as a ground truth. (b) Average assigned degree \(k_{a}\) versus the real degree \(k_{r}\) obtained when using a single network as a training reference. anticipate that our results, which do not rely on pairwise correlations between timeseries, will be of particular interest in the context of functional networks and other scenarios in which the underlying structural information is inaccessible. ###### Acknowledgements. This research was supported by the Spanish Ministerio de Ciencia e Innovacion under Project PID2020-113737GB-I00.
2302.09905
The battery capacity of energy-storing quantum systems
The quantum battery capacity is introduced in this letter as a figure of merit that expresses the potential of a quantum system to store and supply energy. It is defined as the difference between the highest and the lowest energy that can be reached by means of the unitary evolution of the system. This function is closely connected to the ergotropy, but it does not depend on the temporary level of energy of the system. The capacity of a quantum battery can be directly linked with the entropy of the battery state, as well as with measures of coherence and entanglement.
Xue Yang, Yan-Han Yang, Mir Alimuddin, Raffaele Salvia, Shao-Ming Fei, Li-Ming Zhao, Stefan Nimmrichter, Ming-Xing Luo
2023-02-20T11:06:43Z
http://arxiv.org/abs/2302.09905v3
# The Battery Capacity of Energy-storing Quantum Systems ###### Abstract The quantum battery capacity is introduced in this letter as a figure of merit that expresses the potential of a quantum system to store and supply energy. It is defined as the difference between the highest and the lowest energy that can be reached by means of the unitary evolution of the system. This function is closely connected to the ergotropy, but it does not depend on the temporary level of energy of the system. The capacity of a quantum battery can be directly linked with the entropy of the battery state, as well as with measures of coherence and entanglement. Quantum thermodynamics is a blossoming field that aims to bridge the gap between quantum physics and thermodynamics. The growing interest in quantum technologies has created fertile ground for the theoretical and experimental study of quantum batteries, i.e., of quantum devices that can store and release energy in a controllable manner [1; 2; 3; 4; 5; 6; 7]. Thanks to their capability of exploiting coherence, quantum batteries could facilitate faster, higher-power charging than their classical counterparts. A central quantity in the study of quantum batteries is the ergotropy [8], which represents the amount of energy that can be extracted from a given quantum battery state by means of cyclic modulations of the battery's Hamiltonian (or, equivalently, by unitary evolution). As the battery releases or stores energy [9; 10; 11; 12; 13; 14], its ergotropy may change from zero (in which case the battery is said to be in its zero-charge _passive state_) [15; 16], to a maximum value \(\mathcal{C}(\hat{\varrho};\hat{H})\) that can be calculated from the eigenvalues of the battery's density matrix \(\hat{\varrho}\) and from the energy levels of the Hamiltonian \(\hat{H}\). In this paper, we discuss the _quantum battery capacity_\(\mathcal{C}(\hat{\varrho};\hat{H})\) as a figure of merit linking its work storage capacity to quantum features such as quantum entropies [17; 18; 19], or quantum coherences [20; 21; 22; 23; 24]. Although most of the properties of the battery capacity can be derived from the properties of the ergotropy, we argue that the battery capacity is in some sense a more fundamental quantity as it does not change when the battery is unitarily charged or discharged. Furthermore, at variance with the ergotropy, as a spectral functional of state \(\hat{\varrho}\) and Hamiltonian \(\hat{H}\), the battery capacity can be a simpler quantity from a theoretical point of view. The fact that the battery capacity depends on the state only through its eigenvalues makes it easy to operationally connect it with entropy and coherence measure for general battery system with equally spaced energy levels. More recently, composite quantum systems have been considered for work storage [25; 26; 27; 28; 29; 30; 31], tapping into the resource of quantum entanglement. The amount of work that can be extracted from a composite quantum system is usually bigger if we are allowed to perform global operations on the system, than if we can only act locally on its subsystems. This advantage is reflected in a bigger value of the ergotropy (called the _ergotropic gap_[25; 31]), and in different statistics of work extraction with respect to random unitary transformations [32; 33]. 
This advantage of global operations is also reflected in a gap in battery capacity; here we show, inspired by the known results for the ergotropic gap, that also the battery capacity gap can serve as a witness of bipartite entanglement and genuine multipartite entanglement. _Extracting and injecting work in a quantum battery.--_ Consider an isolated \(d\)-dimensional quantum battery system equipped with a bare Hamiltonian \(\hat{H}\) that determines its energy spectrum and an initially prepared state \(\hat{\varrho}\) that determines how much useful energy charge the battery can carry. Our aim is to assess the amount of charge that can be added or removed from the battery in control protocols that do not involve heat exchange with a thermal environment. When the battery is subjected to a cyclic driving of the system Hamiltonian, its state undergoes an unitary evolution \(\hat{\varrho}\rightarrow\hat{U}\hat{\varrho}\hat{U}^{\dagger}\) and its mean energy changes by \[W_{\hat{U}}(\hat{\varrho};\hat{H})\equiv\mathrm{Tr}[\hat{\varrho}\hat{H}]- \mathrm{Tr}[\hat{U}\hat{\varrho}\hat{U}^{\dagger}\hat{H}], \tag{1}\] which we identify, following the paradigm introduced in [15; 16], with the amount of work _extracted_ from the battery. The work extraction functional (1) is bounded by the inequalities \[\mathcal{E}(\hat{\varrho};\hat{H})\geq W_{\hat{U}}(\hat{\varrho};\hat{H})\geq \mathcal{A}(\hat{\varrho};\hat{H})\;, \tag{2}\] where the quantities \(\mathcal{E}(\hat{\varrho};\hat{H})\) and \(\mathcal{A}(\hat{\varrho};\hat{H})\) are called the _ergotropy_ and the _antiergotropy_[12] of the quantum state \(\hat{\varrho}\) with respect to the Hamiltonian \(\hat{H}\): \[\mathcal{E}(\hat{\varrho};\hat{H}) \equiv \max_{\hat{U}\in\mathbf{U}(d)}W_{\hat{U}}\left(\hat{\varrho}; \hat{H}\right)\;; \tag{3}\] \[\mathcal{A}(\hat{\varrho};\hat{H}) \equiv \min_{\hat{U}\in\mathbf{U}(d)}W_{\hat{U}}(\hat{\varrho};\hat{H})\;. \tag{4}\] Here, \(\mathbf{U}(d)\) represents the unitary group of \(d\times d\) matrices. Let \(\hat{U}^{(\downarrow)}\) and \(\hat{U}^{(\uparrow)}\) denote, respectively, the unitary transformation that realize the maximum (3) and the minimum (4) of the work extraction functional. The state \(\hat{\varrho}^{\downarrow}\equiv\hat{U}^{(\downarrow)}\hat{\varrho}\hat{U}^{( \downarrow)\dagger}\) is called the _passive state_ associated with \(\hat{\varrho}\), and it is the state with lowest energy in the unitary orbit of \(\hat{\varrho}\). If a state is passive, then no more energy can be extracted from it using unitary transformations; it has zero ergotropy. Thus, \(\mathcal{E}\geq 0\) describes how much work can be discharged from the battery state. Conversely, the state \(\hat{\varrho}^{\uparrow}\equiv\hat{U}^{(\downarrow)}\hat{\varrho}\hat{U}^{( \downarrow)\dagger}\) is known as the _active state_ associated with \(\hat{\varrho}\)[12; 34; 35], and it is the state with the highest energy in the unitary orbit of \(\hat{\varrho}\). A state \(\hat{\varrho}^{\uparrow}\) is active if and only if no further energy can be injected into it by means of unitary evolution; it has zero antiergotropy. Hence, \(\mathcal{A}\leq 0\), and its magnitude quantifies by how much the battery state can be charged. 
Letting \(\lambda_{0}\leq\lambda_{1}\leq\ldots\leq\lambda_{d-1}\) denote the eigenvalues of the quantum state \(\hat{\varrho}\), and \(\epsilon_{0}\leq\epsilon_{1}\leq\ldots\leq\epsilon_{d-1}\) the eigenenergies of the Hamiltonian \(\hat{H}\), the energy content of the extremal states becomes \[\text{Tr}[\hat{\varrho}^{\downarrow}\hat{H}] = \sum_{i=0}^{d-1}\lambda_{i}\epsilon_{d-1-i}\;; \tag{5}\] \[\text{Tr}[\hat{\varrho}^{\uparrow}\hat{H}] = \sum_{i=0}^{d-1}\lambda_{i}\epsilon_{i}\;. \tag{6}\] Accordingly, the ergotropy (antiergotropy) is obtained by subtracting the energy content of the passive (active) state from the initial mean energy \(\text{Tr}[\hat{H}\hat{\varrho}]\). The ergotropy is a sublinear and convex functional [1], \[\mathcal{E}(t\hat{\varrho}+(1-t)\hat{\tau};\hat{H})\leq t\mathcal{E}(\hat{ \varrho};\hat{H})+(1-t)\mathcal{E}(\hat{\tau};\hat{H})\;. \tag{7}\] Using (7) and the identity \(\mathcal{A}(\hat{\rho};\hat{H})=-\mathcal{E}(\hat{\rho};-\hat{H})\) it is also immediate to see that the opposite holds for the antiergotropy, \[\mathcal{A}(t\hat{\varrho}+(1-t)\hat{\tau};\hat{H})\geq t\mathcal{A}(\hat{ \varrho};\hat{H})+(1-t)\mathcal{A}(\hat{\tau};\hat{H})\;. \tag{8}\] _The quantum battery capacity._--Both the ergotropy and antiergotropy of a quantum system are not constant during an isentropic thermodynamic cycle. However, their difference is constant during any unitary evolution. Here we call it the _battery capacity_ of the system. **Definition 1**.: _The battery capacity of a quantum state \(\hat{\varrho}\) with respect to a Hamiltonian \(\hat{H}\) is given by_ \[\mathcal{C}(\hat{\varrho};\hat{H})=\mathcal{E}(\hat{\varrho};\hat{H})- \mathcal{A}(\hat{\varrho};\hat{H})=\text{Tr}[\hat{\varrho}^{\uparrow}\hat{H} ]-\text{Tr}[\hat{\varrho}^{\downarrow}\hat{H}]\;. \tag{9}\] The battery capacity represents the amount of work that a quantum system can transfer during any thermodynamic cycle that keeps the battery's evolution unitary (as is the case for a quantum battery which is thermally isolated, but mechanically coupled to work source or a load). We can write \(\mathcal{C}(\hat{\varrho};\hat{H})\) as the difference between the energies of the two extremal states in the unitary orbit of \(\hat{\varrho}\): the active state \(\hat{\varrho}^{\uparrow}\), which realizes the maximum possible energy (6), and the passive state \(\hat{\varrho}^{\downarrow}\), with energy (5); see Fig. 1. Equivalently, the work capacity of a state \(\hat{\varrho}\) is equal to the ergotropy of the active state associated with \(\hat{\varrho}\), or the minus the antiergotropy of the relative passive state: \(\mathcal{C}(\hat{\varrho};\hat{H})=\mathcal{E}(\hat{\varrho}^{\uparrow};\hat{H} )=-\mathcal{A}(\hat{\varrho}^{\downarrow};\hat{H})\). It is apparent from the definition that \(\mathcal{C}(\hat{\varrho};\hat{H})\) is an unitarily invariant functional of the state, i.e, \(\mathcal{C}(\hat{\varrho};\hat{H})=\mathcal{C}(\hat{U}\hat{\varrho}\hat{U}^{ \uparrow};\hat{H})\). The battery capacity thus admits a simple expression in terms of the eigenvalues \(\{\lambda_{i}\}\) of the density Figure 1: Pictorial representation of the charging and discharging of a two-level quantum battery under cyclic evolution. Given an initial state \(\hat{\varrho}\) with eigenvalues \(\lambda_{\min}\) and \(\lambda_{\max}\geq\lambda_{\min}\), the energy of the battery can vary between energy of the passive state \(\lambda_{\min}E\) and the energy of the active state \(\lambda_{\max}E\). 
matrix and of the energy levels \(\{\epsilon_{i}\}\) of the Hamiltonian. From (5), (6), and from the definition (9), we deduce \[\mathcal{C}(\hat{\varrho};\hat{H}) = \sum_{i=0}^{d-1}\epsilon_{i}\left(\lambda_{i}-\lambda_{d-1-i}\right) \tag{10}\] \[= \sum_{i=0}^{d-1}\lambda_{i}\left(\epsilon_{d-i-1}-\epsilon_{i} \right)\;.\] Moreover, from (9), (7), and (8) the battery capacity is, like the ergotropy, a convex and sublinear functional, \[\mathcal{C}(t\hat{\varrho}+(1-t)\hat{\tau};\hat{H})\leq t\mathcal{C}(\hat{ \varrho};\hat{H})+(1-t)\mathcal{C}(\hat{\tau};\hat{H}). \tag{11}\] Finally, the invariance with respect to unitary transformation results in a property that neither the ergotropy nor the antigotropy have: **Proposition 1**.: _The battery capacity is a Schur-convex functional of \(\hat{\varrho}\). That is, if a state \(\hat{\varrho}\) is majorized by \(\hat{\tau}\), (\(\hat{\varrho}\prec\hat{\tau}\)), then \(\mathcal{C}(\hat{\varrho};\hat{H})\leq\mathcal{C}(\hat{\tau};\hat{H})\)._ Proof.: We know from Ref.[36] that the passive-state energy \(\mathrm{Tr}[\hat{\varrho}^{\dagger}\hat{H}]\) is a Schur-concave functional of \(\hat{\varrho}\), implying that \(-\mathrm{Tr}[\hat{\varrho}^{\dagger}\hat{H}]\) is Schur-convex [37]. Given that the passive state of \(\hat{\varrho}\) with respect to the Hamiltonian \(\hat{H}\) is the active state of \(\hat{\varrho}\) with respect to \(-\hat{H}\), it follows that the energy of the active state \(\mathrm{Tr}[\hat{\varrho}^{\dagger}\hat{H}]\) is Schur-convex. Therefore the battery capacity, as the sum of two Schur-convex functionals, is Schur-convex. An important lower bound on the battery capacity can be given in terms of the state purity and the spread of the battery's energy spectrum, as measured by the variance of the Hamiltonian, \(\sigma_{\hat{H}}^{2}=\mathrm{Tr}[\hat{H}^{2}]-\mathrm{Tr}[\hat{H}]/d\). **Proposition 2**.: _Letting \(\sigma_{\hat{H}}=\sqrt{\sigma_{\hat{H}}^{2}}\) and \(\sigma_{\hat{\varrho}}=\sqrt{\mathcal{Tr}[\rho^{2}]-1/d}\), the capacity of a \(d\)-dimensional battery is bounded by_ \[\mathcal{C}(\hat{\varrho};\hat{H})\geq 2\frac{\sigma_{\hat{H}}\sigma_{\hat{ \varrho}}}{\sqrt{d^{2}-1}}\;. \tag{12}\] See Appendix A for the proof. _Battery capacity of many battery copies._--It is possible to extract more work from, or charge more work to, an ensemble of \(n\) identical copies of a quantum battery by using global operations on the whole ensemble [27; 28; 11; 28]. The figure of merit that quantifies the maximum work that can be extracted in this regime is the _total ergotropy_, defined as \[\mathcal{E}_{\mathrm{tot}}(\hat{\varrho};\hat{H})\equiv\lim_{n\to\infty}\frac{ 1}{n}\mathcal{E}\left(\hat{\varrho}^{\otimes n}\hat{H}^{(n)}\right)\;, \tag{13}\] with \(\hat{H}^{(n)}\) the Hamiltonian of \(n\) non-interacting copies of the system. The total ergotropy can also be expressed as \[\mathcal{E}_{\mathrm{tot}}(\hat{\varrho};\hat{H})=\mathrm{Tr}[\hat{\varrho} \hat{H}]-\mathrm{Tr}[\hat{\omega}_{\beta(\hat{\varrho})}\hat{H}], \tag{14}\] where \(\hat{\omega}_{\beta(\hat{\varrho})}=e^{-\beta(\hat{\varrho})\hat{H}}/\mathcal{ Z}\) is the Gibbs state of thermal equilibrium with a unique inverse temperature \(\beta(\hat{\varrho})\) such that the von Neumann entropies match, \(S(\hat{\omega}_{\beta(\hat{\varrho})})=S(\hat{\varrho})\). (Equivalently, the Gibbs state is the state of lowest energy among those with the same entropy as \(\hat{\varrho}\).) 
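The following minimal numerical sketch (not from the paper) illustrates the difference between the single-copy ergotropy, Eqs. (3) and (5), and the total ergotropy of Eq. (14), with the entropy-matched Gibbs populations obtained by bisection; the state and the energy spectrum are randomly drawn and purely illustrative, and the Hamiltonian is taken diagonal in the chosen basis.

```python
# Sketch: single-copy ergotropy (Eqs. (3), (5)) vs. total ergotropy (Eq. (14))
# for a random d-level state; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real                       # random full-rank density matrix
lam = np.sort(np.linalg.eigvalsh(rho))          # lambda_0 <= ... <= lambda_{d-1}
eps = np.sort(rng.uniform(0.0, 1.0, size=d))    # epsilon_0 <= ... <= epsilon_{d-1}

def entropy(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

def gibbs_populations(beta):
    x = -beta * eps
    w = np.exp(x - x.max())                     # numerically stable Gibbs weights
    return w / w.sum()

# Bisection for beta >= 0 such that S(omega_beta) = S(rho); the Gibbs entropy
# decreases monotonically from ln(d) at beta = 0 as beta grows.
S_rho, lo, hi = entropy(lam), 0.0, 500.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if entropy(gibbs_populations(mid)) > S_rho else (lo, mid)
omega = gibbs_populations(0.5 * (lo + hi))

E_mean = np.real(np.diag(rho)) @ eps            # Tr[rho H] for H = diag(eps)
ergotropy = E_mean - np.dot(lam, eps[::-1])     # Eqs. (3) and (5)
ergotropy_total = E_mean - np.dot(omega, eps)   # Eq. (14)
print(ergotropy, ergotropy_total)               # total ergotropy >= ergotropy is expected
```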
Similarly, we can define and express the _total antiergotropy_ as \[\mathcal{A}_{\mathrm{tot}}(\hat{\varrho};\hat{H}) \equiv \lim_{n\to\infty}\frac{1}{n}\mathcal{A}\left(\hat{\varrho}^{ \otimes n}\hat{H}^{(n)}\right)\] \[= \mathrm{Tr}[\hat{\varrho}\hat{H}]-\mathrm{Tr}[\hat{\omega}_{- \beta^{*}(\hat{\varrho})};\hat{H}]\;,\] where \(\hat{\omega}_{-\beta^{*}(\hat{\varrho})}\) is the inverse Gibbs state with negative inverse temperature such that \(S(\hat{\omega}_{-\beta^{*}(\hat{\varrho})})=S(\hat{\varrho})\), which is also the state of highest energy among those with the same entropy as \(\hat{\varrho}\). The battery capacity of an ensemble of \(n\gg 1\) identical quantum systems will tend to the "entropy-dependent battery capacity" defined in Ref.[39], \[\mathcal{C}_{\mathrm{tot}}(\hat{\varrho};\hat{H}) = \lim_{n\to\infty}\frac{1}{n}\mathcal{C}\left(\hat{\varrho}^{ \otimes n};\hat{H}^{(n)}\right) \tag{16}\] \[= \mathrm{Tr}[\hat{\omega}_{-\beta^{*}(\hat{\varrho})};\hat{H}]- \mathrm{Tr}[\hat{\omega}_{\beta(\hat{\varrho})};\hat{H}]\;.\] _The capacity of a two-level battery._--We now consider the simplest, yet widely studied, example of a battery: a quantum system made of only two levels \(|0\rangle\) and \(|1\rangle\), with corresponding Hamiltonian \(\hat{H}=E|1\rangle\langle 1|\). As we will show, the battery capacity can be related to entropic quantities and measures of coherence [40; 41; 42]. In Appendix B, we generalize our findings to a \(d\)-dimensional battery with equally spaced energy levels. The generic density matrix \(\hat{\varrho}\) on a two-level system can be written as \[\hat{\varrho} = \begin{bmatrix}1-q&ce^{i\theta}\\ ce^{-i\theta}&q\end{bmatrix}\;; \tag{17}\] with \(q\in[0,1]\) the population of the excited state, \(c\in[0,\sqrt{q(1-q)}]\) the amount of coherence in the state, and \(\theta\in[0,2\pi]\). The two eigenvalues of the density matrix (17) are \(\lambda_{\pm}=[1\pm\sqrt{(2q-1)^{2}+4c^{2}}]/2\), with \(\lambda_{+}\geq\max\{q,1-q\}\) and \(\lambda_{-}\leq\min\{q,1-q\}\). The ergotropy of this state is \(\mathcal{E}(\hat{\varrho};\hat{H})=E(q-\lambda_{-})\), while its antigotropy is \(\mathcal{A}(\hat{\varrho};\hat{H})=E(\lambda_{+}-q)\). Hence, the battery capacity is \[\mathcal{C}(\hat{\varrho};\hat{H})=E(1-2\lambda_{-})=E\sqrt{(2q-1)^{2}+4c^{2}}\;. \tag{18}\] This simple model of a quantum battery is represented graphically in Fig. 1. We observe that the base-2 von Neumann entropy \(S(\hat{\varrho})=-\mathrm{Tr}(\hat{\varrho}\log_{2}\hat{\varrho})\) and the capacity of a two-level battery satisfy the inequality: \[\frac{\mathcal{C}(\hat{\varrho};\hat{H})}{E}+S(\hat{\varrho})\geq 1, \tag{19}\] with equality only for pure states or the completely mixed state. This follows by virtue of (18) with the inequality \(S(\hat{\varrho})\geq 2\lambda_{-}\) for \(\lambda_{-}\in[0,1/2]\). A similar inequality, but in the opposite direction, holds for a range of Tsallis entropies, defined by [18]: \[T_{p}(\hat{\varrho})=\frac{1-\mathrm{Tr}\hat{\varrho}^{p}}{p-1}=\frac{1-\lambda_{ -}^{p}-(1-\lambda_{-})^{p}}{p-1}\;. \tag{20}\] For orders \(p\geq 2\), we find \[\frac{\mathcal{C}(\hat{\varrho};\hat{H})}{E}+T_{p}(\hat{\varrho})\leq 1\;. \tag{21}\] This can be proven by using the function \(g_{p}(\hat{\varrho})=2\lambda_{-}-T_{p}(\hat{\varrho})\), which is monotonically increasing in \(\lambda_{-}\in[0,1/2]\) whenever \(p\geq 2\). 
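A short numerical check of the two-level capacity, Eq. (18), and of the entropic bounds (19) and (21) for randomly drawn qubit states is sketched below; it is an illustration (with \(E=1\) assumed), not code from the paper.

```python
# Sketch: verify Eq. (18) and the bounds (19) and (21) for random qubit states.
import numpy as np

rng = np.random.default_rng(1)
E = 1.0
for _ in range(5):
    q = rng.uniform(0.0, 1.0)
    c = rng.uniform(0.0, np.sqrt(q * (1.0 - q)))
    lam_minus = 0.5 * (1.0 - np.sqrt((2 * q - 1)**2 + 4 * c**2))
    lam_plus = 1.0 - lam_minus
    capacity = E * np.sqrt((2 * q - 1)**2 + 4 * c**2)        # Eq. (18)
    probs = np.array([lam_minus, lam_plus])
    S = -sum(p * np.log2(p) for p in probs if p > 0)         # base-2 von Neumann entropy
    T2 = 1.0 - lam_minus**2 - lam_plus**2                    # Tsallis entropy, p = 2
    assert capacity / E + S >= 1.0 - 1e-12                   # Eq. (19)
    assert capacity / E + T2 <= 1.0 + 1e-12                  # Eq. (21) with p = 2
    print(round(capacity, 3), round(S, 3), round(T2, 3))
```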
Finally, for the special case of the linear entropy, \(L(\hat{\varrho})\equiv T_{2}(\hat{\varrho})=1-\mathrm{Tr}[\hat{\varrho}^{2}]=1 -\lambda_{-}^{2}-\lambda_{+}^{2}\)[19], one easily obtains the equality: \[\frac{\mathcal{C}^{2}(\hat{\varrho};\hat{H})}{E^{2}}+2L(\hat{\varrho})=1\;. \tag{22}\] Similar operational relationships will be proved for equidistant \(d\)-level batteries in Appendix B. We now turn to the relations between capacity and coherence. Three of the most common measures of coherence for quantum states are: the \(l_{1}\)-norm of coherence measuring the overall magnitude of off-diagonal elements, \(\mathsf{Cobe}_{l_{1}}(\hat{\varrho})=\sum_{i\neq j}\lvert\varrho_{i,j}\rvert\); the robustness of coherence [40; 41], \[\mathsf{Cobe}_{\mathrm{RoC}}(\hat{\varrho})=\min_{\hat{\tau}\in\mathcal{D}( \mathbb{C}^{\,d})}\left\{s\geq 0\,\Big{|}\,\frac{\hat{\varrho}+s\hat{\tau}}{1+s} \in\mathcal{F}\right\}, \tag{23}\] with \(\mathcal{D}(\mathbb{C}^{2})\) the convex set of \(d\)-dimensional density operators and \(\mathcal{F}\subset\mathcal{D}(\mathbb{C}^{2})\) the subset of incoherent states; and the relative entropy of coherence [22], \(\mathsf{Cobe}_{\mathrm{re}}(\hat{\varrho})=S(\hat{\varrho}_{\mathrm{inc}})-S( \hat{\varrho})\), where \(\hat{\varrho}_{\mathrm{inc}}\) is the state obtained by deleting all the off-diagonal elements from \(\hat{\varrho}\). For qubit states (17), the first two measures are equivalent, \[\mathsf{Cobe}_{l_{1}}(\hat{\varrho})=\mathsf{Cobe}_{\mathrm{RoC}}(\hat{ \varrho})=2c\;. \tag{24}\] Hence the capacity (18) of a qubit battery can be decomposed into an incoherent and a coherent part, \[\mathcal{C}^{2}(\hat{\varrho};\hat{H}) = \mathcal{C}^{2}(\hat{\varrho}_{\mathrm{inc}};\hat{H})+E^{2} \mathsf{Cobe}_{l_{1}}^{2}(\hat{\varrho}) \tag{25}\] \[= \mathcal{C}^{2}(\hat{\varrho}_{\mathrm{inc}};\hat{H})+E^{2} \mathsf{Cobe}_{\mathrm{RoC}}^{2}(\hat{\varrho})\;,\] where the incoherent part, \(\mathcal{C}(\hat{\varrho}_{\mathrm{inc}};\hat{H})=(1-2q)E\), is the battery capacity of the diagonal state \(\hat{\varrho}_{\mathrm{inc}}\). A similar decomposition does not hold for the relative entropy of coherence; however, a simple substitution from (19) yields the inequality \[1+\mathsf{Cobe}_{\mathrm{re}}(\hat{\varrho})\leq\frac{\mathcal{C}(\hat{ \varrho};\hat{H})}{E}+S(\hat{\varrho}_{\mathrm{inc}}). \tag{26}\] In Appendix B, we provide upper bounds on the capacity of uniform \(d\)-dimensional batteries in terms of coherence, indicating its beneficial role for quantum work storage. _The capacity gap as an entanglement measure._--In the case of composite quantum batteries comprised of two or more local Hamiltonians, an entangled battery state can accommodate non-local work storage that is more than the sum of its local parts. This gives rise to energy-based entanglement criteria for bipartite and multipartite systems. Consider first a bipartite state \(\hat{\varrho}\) on the Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{\mathcal{B}}\), with Hamiltonian \(\hat{H}=\hat{H}_{A}\otimes\mathbb{I}_{B}+\mathbb{I}_{A}\otimes\hat{H}_{B}\). 
The _ergotopic gap_\(\delta_{\mathrm{out}}\) is the difference of the ergotropy obtained by global unitary operations and local unitary operations: \[\delta_{\mathrm{out}}(\hat{\varrho};\hat{H})\equiv\mathcal{E}( \hat{\varrho};\hat{H})-\mathcal{E}_{\mathrm{L}}(\hat{\varrho};\hat{H})\] \[=\max_{\hat{U}\in\mathbf{U}(d^{2})}W_{\hat{U}}(\hat{\varrho};\hat {H})-\max_{\hat{U}\in\mathbf{U}_{\mathrm{L}}(d^{2})}W_{\hat{U}}(\hat{\varrho}; \hat{H})\;, \tag{27}\] where \(\mathbf{U}_{\mathrm{L}}(d^{2})\) is the group of local unitary operations of the form \(\hat{U}_{\ell}=\hat{U}_{A}\otimes\hat{U}_{B}\). Similarly, we can define the difference of the antiergotropy as \[\delta_{\mathrm{in}}(\hat{\varrho};\hat{H})\equiv\mathcal{A}_{ \mathrm{L}}(\hat{\varrho};\hat{H})-\mathcal{A}(\hat{\varrho};\hat{H})\] \[=\min_{\hat{U}_{\ell}\in\mathbf{U}_{\mathrm{L}}(d^{2})}W_{\hat{U} _{\ell}}(\hat{\varrho};\hat{H})-\min_{\hat{U}\in\mathbf{U}(d^{2})}W_{\hat{U}}( \hat{\varrho};\hat{H})\;. \tag{28}\] The sum of \(\delta_{\mathrm{in}}\) and \(\delta_{\mathrm{out}}\) then corresponds to the difference between the global capacity of the battery state and the battery capacity restricted to local operations. The latter is simply the sum of the individual capacities of the reduced battery states, since the battery Hamiltonian \(\hat{H}\) is a sum of local terms. We call the difference in global and local capacities the _bipartite battery capacity gap_: \[\Delta_{A\lvert B}(\hat{\varrho};\hat{H}) \equiv \delta_{\mathrm{in}}(\hat{\varrho};\hat{H})+\delta_{\mathrm{out}}( \hat{\varrho};\hat{H}) \tag{29}\] \[= \mathcal{C}(\hat{\varrho};\hat{H})-\mathcal{C}(\hat{\varrho}_{A}; \hat{H}_{A})-\mathcal{C}(\hat{\varrho}_{B};\hat{H}_{B})\;.\] This definition naturally extends to multipartite systems: the _fully separable capacity gap_ of an \(n\)-partite battery state \(\hat{\varrho}\) with Hamiltonian \(\hat{H}=\sum_{i}\hat{H}_{A_{i}}\otimes\mathbb{I}\) will be \[\Delta_{A_{1}\lvert\cdots\lvert A_{n}}(\hat{\varrho};\hat{H})\equiv\mathcal{C}( \hat{\varrho};\hat{H})-\sum_{i=1}^{n}\mathcal{C}(\hat{\varrho}_{A_{i}};\hat{H}_{A_ {i}})\;. \tag{30}\] **Proposition 3**.: _The fully separable battery capacity gap \(\Delta_{A_{i}\lvert\cdots\lvert A_{n}}\) of a pure state \(\lvert\Psi\rangle\) on Hilbert space \(\mathcal{H}=\otimes_{i=1}^{n}\mathcal{H}_{A_{i}}\), is non-increasing under local operations and classical communications (LOCC)._ Proof.: This property follows from the fact that the fully separable ergotropic gap is non-increasing under LOCC [31], and that the energy of the active states, \(-\mathrm{Tr}(\rho_{A_{i}}^{\uparrow}\hat{H}_{A_{i}})\), is Schur-convex, see the proof of Prop. 1. Moreover, all pure states have the same capacity \(\mathcal{C}(\lvert\Psi\rangle\langle\Psi\rvert,\hat{H})\) for a given Hamiltonian due to unitary invariance. Thanks to Proposition 3, the battery capacity gap can serve as a witness of entanglement in bipartite or multipartite systems. In appendix C, we propose measures of genuine multipartite entanglement. Here, we present instructive examples for bipartite and tripartite states. _Example 1._ Consider the two-qubit pure state \(|\psi\rangle=\sqrt{\lambda}|00\rangle+\sqrt{1-\lambda}|11\rangle\). The battery capacity gap with respect to the local Hamiltonians \(\hat{H}_{A}=\hat{H}_{B}=|1\rangle\langle 1|\) is given by \[\Delta_{A|B}(|\psi\rangle;\hat{H})=4\left(1-\max\{\sqrt{\lambda},\sqrt{1- \lambda}\}\right). 
\tag{31}\] This can be extended to mixed states by the standard method [19] as \[\Delta_{A|B}(\hat{\varrho},\hat{H})=\min\sum_{i}p_{i}\Delta_{A|B}(|\psi_{i} \rangle;\hat{H}). \tag{32}\] where the minimum over all possible pure state decompositions, \(\varrho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\). This implies from ref.[43] that \[\Delta_{A|B}(\hat{\varrho};\hat{H})=2\left[1-\sqrt{1-C^{2}(\hat{\varrho})} \right], \tag{33}\] where \(C(\hat{\varrho})\) denotes the concurrence. For the isotropic state \(\rho_{v}=[(1-v)\mathbb{I}_{4}+(4v-1)|\psi\rangle\langle\psi||]/3\), equation (33) yields a positive gap \(\Delta_{A|B}(\rho_{v};\hat{H})=2(1-2\sqrt{v-v^{2}})\) for \(1/2<v\leq 1\)[44]. _Example 2._ Consider the generalized tripartite GHZ state \(|\phi\rangle=\cos\theta|000\rangle+\sin\theta|111\rangle\)[45] with \(\theta\in(0,\pi/2)\). For any bipartition, we obtain the gap \(\Delta_{A|BC}(|\phi\rangle;\hat{H})=4\sin^{2}\theta\). The symmetry of the state then implies the genuine multipartite entanglement measure [46; 31] \[\Delta_{\min}^{G}(|\phi\rangle;\hat{H}):=\min_{X}\Delta_{X|X^{*}}(|\phi\rangle; \hat{H})=4\sin^{2}\theta>0, \tag{34}\] minimising over all \(X\subset\{A,B,C\}\). _Conclusions._--We have introduced the capacity of a quantum battery system as the difference between the maximal and the minimal energy that can be reached from it by unitary evolution. It quantifies the amount of work that a quantum battery can at most supply during operation cycles. The battery capacity does not depend on the actual battery charge at any given moment, making it a suitable figure of merit for comparing different quantum battery models. Due to its unitary invariance, the battery capacity can also be put in relation with the entropy of the battery state, and with measures of coherence and entanglement, as we have discussed for simple models with an equidistant energy level spectrum. We hope that extending this analysis to other quantum battery models will lead to deeper insights into the connection between quantum thermodynamics, work storage, and quantum information theory. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Nos.62172341,12204386,12075159 and 12171044), National Natural Science Foundation of Sichuan Provence (No.23NSFSC0752), Beijing Natural Science Foundation (No.Z190005), and the Academician Innovation Platform of Hainan Province.
2304.04177
Evaluating spintronics-compatible implementations of Ising machines
The commercial and industrial demand for the solution of hard combinatorial optimization problems pushes forward the development of efficient solvers. One of them is the Ising machine, which can solve combinatorial problems mapped to Ising Hamiltonians. In particular, spintronic hardware implementations of Ising machines can be very efficient in terms of area and performance, and are relatively low-cost considering the potential to create hybrid CMOS-spintronic technology. Here, we perform a comparison of coherent and probabilistic paradigms of Ising machines on several hard Max-Cut instances, analyzing their scalability and performance at the software level. We show that probabilistic Ising machines outperform coherent Ising machines in terms of the number of iterations required to achieve the problem's solution. Nevertheless, high frequency spintronic oscillators with sub-nanosecond synchronization times could be very promising as ultrafast Ising machines. In addition, considering that a coherent Ising machine acts better for Max-Cut problems because of the absence of the linear term in the Ising Hamiltonian, we introduce a procedure to encode Max-3SAT to Max-Cut. We foresee potential synergic interplays between the two paradigms.
Andrea Grimaldi, Luciano Mazza, Eleonora Raimondo, Pietro Tullo, Davi Rodrigues, Kerem Y. Camsari, Vincenza Crupi, Mario Carpentieri, Vito Puliafito, Giovanni Finocchio
2023-04-09T07:09:16Z
http://arxiv.org/abs/2304.04177v1
# Evaluating spintronics-compatible implementations of Ising machines ###### Abstract The commercial and industrial demand for the solution of hard combinatorial optimization problems pushes forward the development of efficient solvers. One of them is the Ising machine, which can solve combinatorial problems mapped to Ising Hamiltonians. In particular, spintronic hardware implementations of Ising machines can be very efficient in terms of area and performance, and are relatively low-cost considering the potential to create hybrid CMOS-spintronic technology. Here, we perform a comparison of coherent and probabilistic paradigms of Ising machines on several hard Max-Cut instances, analyzing their scalability and performance at the software level. We show that probabilistic Ising machines outperform coherent Ising machines in terms of the number of iterations required to achieve the problem's solution. Nevertheless, high frequency spintronic oscillators with sub-nanosecond synchronization times could be very promising as ultrafast Ising machines. In addition, considering that a coherent Ising machine acts better for Max-Cut problems because of the absence of the linear term in the Ising Hamiltonian, we introduce a procedure to encode Max-3SAT to Max-Cut. We foresee potential synergic interplay between the two paradigms. + Footnote †: journal: Computer Science Corresponding authors: *[email protected], *[email protected] A Max-Cut instance on a weighted graph is encoded into the Ising Hamiltonian \[H(\mathbf{s})=-\sum_{ij}J_{ij}s_{i}s_{j}\ ; \tag{1}\] where \(\mathbf{s}=\{s_{1},...,s_{i},...,s_{N}\}\) is the vector of binary spin states of the system (\(s_{i}\in\{-1,+1\}\)) and \(J\) is the symmetric coupling matrix of the graph. The matrix element \(J_{ij}\) stores the weight of the edge joining nodes \(i\) and \(j\); the left part of Fig. 1 shows an example of construction of a \(J\) matrix. Figure 1: A visual example of a maximum cut problem. The graph is encoded into an Ising Hamiltonian with a symmetric matrix \(J\) whose values correspond to the weights of the connections. In an optimal solution, the nodes are in one of the two states in such a way that the bipartition “cuts” the highest value of accumulated edges. In cIMs, the binary spin states are mapped to the relative phase of oscillators in the sub-harmonic injection locking regime. In pIMs, the binary spin states are implemented with a stochastic bistable system. The Ising spin of both cIMs and pIMs can be implemented with a CMOS-compatible magnetic tunnel junction (MTJ), with the final goal of having IMs that can be manufactured with a heterogeneous or monolithic integration of hybrid spintronic/CMOS technology. In this respect, three-terminal MTJs are compatible with both cIMs and pIMs. The two IMs are then compared in terms of performance, highlighting advantages and disadvantages. Moreover, we envision their compatibility for potential synergic interplays between the two paradigms. We show that pIMs outperform cIMs in software implementations. Nevertheless, cIMs show a great potential for the high working frequency at which spintronic oscillators can operate [12]. The paper is organized as follows. Section II presents potential spintronic implementations of the Ising spin for cIMs and pIMs. Section III describes the main model used for the cIMs, i.e., the Kuramoto model, and compares it to the universal model of nonlinear oscillators with negative damping used to model spintronic oscillators [13] (here referred to as Slavin model). Section IV briefly introduces the concept of probabilistic computing with p-bits [9]. Section V compares cIMs with pIMs on a well-known benchmark instance library. Section VI tests one of probabilistic computing's encoding techniques, called invertible logic gates, on a modified Kuramoto model. Section VII evaluates the performance of a problem mapping that allows taking advantage of the cIM's good performance with Max-Cut to solve other hard COPs. Section VIII summarizes results and outlooks.
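As a minimal sketch of the encoding of Eq. (1) and Fig. 1 (the graph and weights below are illustrative, not the benchmark instances used later), the following Python snippet builds the symmetric coupling matrix \(J\) from a weighted edge list and evaluates the cut value and the Ising energy of a candidate spin configuration.

```python
# Sketch of the Max-Cut <-> Ising encoding of Eq. (1).
import numpy as np

def build_J(n_nodes, edges):
    """edges: list of (i, j, weight); J is symmetric with J[i, j] = weight."""
    J = np.zeros((n_nodes, n_nodes))
    for i, j, w in edges:
        J[i, j] = J[j, i] = w
    return J

def ising_energy(J, s):
    return -0.5 * s @ J @ s          # Eq. (1), counting each unordered pair once

def cut_value(J, s):
    # Edges whose endpoints fall in different partitions contribute their weight.
    return 0.25 * np.sum(J * (1 - np.outer(s, s)))

# Illustrative 5-node weighted graph.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 4, 1.0), (4, 0, 2.0), (1, 3, 1.0)]
J = build_J(5, edges)
s = np.array([1, -1, 1, -1, 1])      # a candidate bipartition in {-1, +1}
# Note: with J holding the raw edge weights, larger cuts correspond to larger H(s);
# solvers that minimize the Ising energy are usually fed -J instead.
print(cut_value(J, s), ising_energy(J, s))
```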
## II Implementation of Ising spins for cIMs and pIMs with three-terminal magnetic tunnel junctions A promising platform for the implementation of Ising machines with low power consumption is the spintronic technology. To this end, here we show a versatile design based on three terminal MTJs [14], combining current-induced spin-orbit-torque (SOT) [15, 16, 17] and spin-transfer-torque (STT) [18, 19, 20, 21], and voltage-controlled magnetic anisotropy (VCMA) [22, 23], for the implementations of Ising spins. These effects, when properly combined, allow to successfully achieve binarization and tunability, enabling the implementation of several IM paradigms. This part of the work is based on micromagnetic simulations performed by numerically integrating the Landau-Lifshitz-Gilbert-Slonczewski equation [24]: \[\begin{split}\frac{d\mathbf{m}}{dt}&=\frac{\gamma M_{S} }{(1+\alpha_{G}^{2})}\left(-\left(\mathbf{m}\times\mathbf{h}_{eff}\right)-\alpha_{G} \left(\mathbf{m}\times\mathbf{m}\times\mathbf{h}_{eff}\right)\right.\\ &\qquad\qquad\qquad\left.+\sigma\left(J_{STT}\ g_{T}\ (\mathbf{m}\times\mathbf{m}\times\mathbf{p})+J_{SOT}\theta_{ SHE}\ (\mathbf{m}\times\mathbf{m}\times\mathbf{\sigma}_{ SHE}\ )\right)\right)\end{split} \tag{2}\] where \(\mathbf{m}=\frac{\mathbf{m}}{M_{S}}\) is the normalized magnetization vector, \(\gamma\) is the gyromagnetic ratio, \(M_{S}\) is the saturation magnetization of the MTJ free layer (FL), and \(\alpha_{G}\) is the Gilbert damping factor. \(\mathbf{h}_{eff}\) is the effective magnetic field, which includes the demagnetizing field and the interfacial uniaxial perpendicular anisotropy that also contains the VCMA contribution. The spin torques are proportional to a pre-factor \(\sigma=\frac{g|\mu_{B}|}{|e|yM_{S}^{2}d_{x}}\), where \(g\) is the gyromagnetic splitting factor, \(\mu_{B}\) is the Bohr magneton, \(e\) is the electron charge, and \(d_{x}\) is the thickness of the FL. The STT is proportional to the spin polarization function \(g_{T}\)[18], which is a function of the spin polarization \(P\), and depends on the polarizer orientation, \(p\). The SOT is proportional to the spin hall angle \(\theta_{SHE}\), a parameter that depends on the heavy metal (HM), and its orientation is determined by the unit vector \(\sigma_{SHE}\). The latter, for the Cartesian coordinate system fixed here, is along the \(x\)-direction, considering the charge current in the HM flowing along the \(-y\)-direction. Both \(J_{STT}\) and \(J_{SOT}\) are current densities. See Fig. 2 for more details. For cIMs, the MTJ is designed to have a free layer with perpendicular magnetization and an in-plane polarizer (Fig. 2(a)). The device is biased with a large enough SOT in order to excite a self-oscillation state [14, 25, 26]. We consider an MTJ with an elliptical cross-section. The simulation parameters are summarized in Tab. 1. In order to have an Ising spin, it is necessary to binarize the oscillator's phase. This can be achieved with VCMA-driven parametric excitations, as already proposed for spintronic oscillators with nano-constrictions and MTJ with two terminals [27, 28]. Our calculations predict the robustness of the binarization technique and that it can be achieved in a range of frequencies close to two times the self-oscillation frequency. An example of parametric locking is shown in Fig. 2b. Here we can observe the binarization of the locked states (red and blue curves). The phase difference between the self-oscillation and the ac VCMA input (Fig. 
2(b)) of the two possible synchronized states is shifted by 180 degrees. The binarization can also be achieved using an ac STT current that drives the injection-locking at the second harmonics (Fig. 2(c)) as already observed experimentally [27, 29]. The implementation of pIMs requires the realization of a p-bit, i.e., a tunable stochastic bistable system [5, 9]. These p-bits can be naturally implemented with stochastic MTJs, with the search of p-bits implementations with sub-nanosecond switching time being a very active research direction [30, 31]. The first design of tunable p-bit based on a 3-terminal MTJ [9] has been demonstrated experimentally recently [32]. Here, we propose a p-bit implementation with three terminal MTJs based on the idea of magnetic clocking, as introduced for nanomagnetic logic [33, 34] and physical unclonable functions [35]. The sketch of the MTJ for the p-bit implementation is described in Fig. 2(d). The optimized MTJ has a circular cross-section with the equilibrium magnetic state of both free and polarizer layers being out-of-plane and with no need of the VCMA effect. The tunable random number generation (TRNG) process is summarized in Fig. 2e with a representation of how the energy landscape of the FL magnetization changes in presence/absence of SOT current. The process goes as follows: i) before applying the SOT, the FL energy landscape has two stable minima along the z-axis, and the FL magnetization is in one of the two stable configurations. ii) After applying a large enough SOT, the energy landscape changes and it has a minimum in which the magnetization of the FL is aligned along the spin-current direction \(y\)-axis [35] iii) Once the SOT is switched off, the magnetization relaxes towards one of the two z-axis directions due to the perpendicular magnetic anisotropy, both of which are stable states, sampled with an equal probability. The STT from the third terminal can adjust this switching probability, finally resulting in a tunable p-bit, as shown in Fig. 2(f). Each point of Fig. 2(f) is obtained by averaging over \(\mathbf{10^{4}}\) realizations with different seeds. The points are well-described by a sigmoidal hyperbolic tangent. \begin{table} \begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|c|}{**LLG micromagnetic model**} \\ \hline & **cIM** & **pIM** \\ \hline \(\mathbf{Size\left(nm^{3}\right)}\) & 100 \(\times\) 40 \(\times\) 1 & 50 \(\times\) 50 \(\times\) 1.4 \\ \hline \(\mathbf{M_{s}\left(MA/m\right)}\) & 1.10 & 0.80 \\ \hline \(\mathbf{K_{u}\left(MJ/m^{3}\right)}\) & 0.62 & 0.060 \\ \hline \end{tabular} \end{table} Table 1: Parameters used in the LLG micromagnetic simulations. Figure 2: (a) A schematic of an MTJ designed to behave as an oscillator for a cIM. The MTJ has three terminals and has an elliptical cross section with the polarizer having its magnetization aligned along the \(\mathbf{x}\)-direction. The self-oscillations are driven by the dc current density and the VCMA signal acts as the injection-locking signal. (b) In the top panel, the VCMA signal acting on the MTJ; in the bottom panel, \(m_{x}\) dynamics from simulations of two distinct devices, showcasing the phase binarization phenomenon due to the injection-locking from the VCMA. (c) The same behavior shown in (b) can be achieved by using an ac STT current density as the injection-locking signal. (d) A schematic of an MTJ designed to behave as a p-bit for a pIM. The MTJ has three terminals and is a circular cylinder with an out-of-plane polarizer. 
(e) Schematic showing the three-steps process of tunable random number generation (RNG) as described in the text. The dependence of this probability on the signal intensity is sigmoidal. Each point of this plot is the average of \(10^{4}\) RNG simulations. The energy landscape of the three main states is shown in the insets. ## III Modeling cIMs State-of-the-art implementations of cIMs exploit coupled oscillators realized with various physical systems [1, 36, 37, 38], and most of them employ the Ising Hamiltonian \(J\) matrix as the coupling matrix of the system, achieving the binarization of the phase by sub-harmonic parametric locking at 0 an \(\pi\) radians, as already discussed in previous sections. Hence, a given Max-Cut instance is solved using a network of oscillators that naturally evolves toward a low energy state corresponding to the problem's solution, set by the coupling matrix. Optical parametric oscillators obtain solutions very quickly [1, 36], however the size of the required systems is not suitable for highly integrated applications. LC oscillators have been successfully used to solve Max-Cut problems [39, 40], with the advantage being the possibility of performing rapid analysis with lumped components, but the use of inductors and capacitors also hinder highly integrated solutions. Similar obstacles have been shown for insulator to metal (IMT) phase-transition nano-oscillators (PTNO) which provide excellent energetic performances, but require capacitors to couple devices [41, 42]. Spintronic solutions have been simulated [43, 44] and recently an 8-spin Max-Cut solver was realized in hardware using spin hall nano-oscillators (SHNO). The main limitation observed was overcoming the slow propagation of spin waves [45], although scalability is still an issue. Considering the state-of-the-art and future perspectives, MTJ-based oscillators offer an optimal solution for forthcoming implementations of an Ising Machine chip for daily life applications due to their exploitable nonlinear dynamics, low sizes, GHz working frequencies and CMOS compatibility. In the following, we describe two oscillator models and test their performance with the goal to furnish the tools and directions to design a scalable and compact cIM. ### A) Kuramoto model The Kuramoto model was first introduced to provide a general description of synchronization [46, 47]. By adding a binarizing term and designing the coupling matrix, the generalized Kuramoto model provides a simple analytical description of cIMs. In this generalized model, the phases of the coupled oscillators interdependently evolve according to \(N\) coupled differential equations, each of them describing the dynamics of the phase of a single oscillator \(\phi_{i}\). For the ith oscillator we have: \[\frac{d}{dt}\phi_{i}(t)=-K\sum_{j=1,j\neq i}^{N}J_{ij}\text{sin}\left[\phi_{i} (t)-\phi_{j}(t)\right]-S\sin[2\phi_{i}(t)]+\xi(t)\;; \tag{3}\] where \(\mathbf{\phi}=\{\phi_{i},...,\phi_{N}\}\) represents the phases of the coupled oscillators, \(K\) is the coupling strength, \(S\) is the parametric locking parameter and \(\xi\) an additive white Gaussian phase noise [48]. We solved the dimensionless dynamical equations, as it does not influence the accuracy of the Ising machine. As already discussed, the parametric locking term applies a signal with twice the frequency of the oscillator to binarize the oscillator's phase (sub-harmonic locking) into multiples of \(\pi\) radians. 
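A minimal sketch of a software cIM based on Eq. (3) is given below (Python/NumPy, not the authors' solver): an Euler-Maruyama integration with a single linear ramp of an annealing parameter \(C\) multiplying both \(K\) and \(S\), whereas the simulations discussed next repeat the annealing schedule several times; all parameter values are illustrative and untuned.

```python
# Sketch: generalized Kuramoto cIM of Eq. (3) with a linear annealing ramp.
import numpy as np

def kuramoto_ising_machine(J_coupling, K=1.0, S=1.0, noise=0.05,
                           steps=20000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = J_coupling.shape[0]
    phi = rng.uniform(0.0, np.pi, size=n)          # random initial phases
    for t in range(steps):
        C = (t + 1) / steps                        # annealing parameter, 0 -> 1
        drift = (-C * K * np.sum(J_coupling * np.sin(phi[:, None] - phi[None, :]), axis=1)
                 - C * S * np.sin(2.0 * phi))      # Eq. (3) without the noise term
        phi += dt * drift + np.sqrt(dt) * noise * rng.normal(size=n)
    return np.where(np.cos(phi) >= 0.0, 1, -1)     # binarized Ising spins

# Usage: pass -J so that positively weighted edges favour anti-phase locking
# (whether this sign flip is needed depends on the convention chosen for J);
# the cut can then be evaluated with cut_value(J, s) from the earlier sketch.
# s = kuramoto_ising_machine(-J, K=1.0, S=0.5)
# print(cut_value(J, s))
```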
The advantage of this model lies in its easy-to-integrate dynamical equations, which allow the implementation of advanced annealing schemes with relatively minor effort. Fig. 3(a) shows the solution of an exemplary 100-oscillator cubic Max-Cut problem using a linear annealing scheme of the coupling strength \(K\) and the parametric locking \(S\). Both coefficients are multiplied by an annealing parameter \(C\) that starts at zero and linearly grows up to one. For each run, this annealing scheme is applied five times, consecutively. The annealing scheme and an example trend of the Max-Cut as a function of time are shown in Fig. 3(b). \(K\) and \(S\) are selected heuristically in order to optimize the performance of the simulated cIM and are, in general, dependent on the topology of the problem. Each of the oscillators' phases is initialized with a random value between 0 and \(\pi\) radians. Fig. 3(a) also shows how the system quickly reaches a local minimum and remains stable as long as the annealing parameters are below a well-defined threshold, which depends on a trade-off between the coupling strength and the force driving the phase binarization. This annealing strategy allows an effective exploration of the energy landscape, with every reset of the coupling coefficients often resulting in a better Max-Cut. The Kuramoto model has already been tested against large-scale benchmark problems and has shown very promising results [48]. It can be used to describe spintronic oscillators with reduced frequency-power coupling [49, 50] or oscillators characterized by soliton dynamics, such as vortices [51, 52] and bubbles [53]. However, this simple model cannot fully describe the physical behavior of oscillators in which phase and power are coupled. For example, this is the case of spintronic oscillators where, as a first approximation, phase and power are linked through the nonlinear frequency shift \(N_{0}\) [13, 49, 50, 54], as described in detail in the following section. ### B) Universal model of nonlinear oscillators with negative damping (Slavin model) Experiments have shown the accuracy of the Slavin model in simulating and understanding the behaviour of spin-torque nano-oscillators and spin-Hall oscillators where the oscillator power and phase are coupled [53, 55], and their collective behaviors when they interact [43]. The dynamics of each oscillator can be described by two coupled differential equations [13], \[\frac{dp_{i}}{dt}=-2p_{i}\big{[}\Gamma_{+,i}(p_{i})-\Gamma_{-,i}(p_{i})\big{]}+2F_{\rm e}\sqrt{p_{i}}\cos(2\omega_{i}t+2\phi_{i})+2\Omega\sum_{j,j\neq i}^{N}J_{ij}\sqrt{p_{i}p_{j}}\cos\big{(}\phi_{i}-\phi_{j}-\beta\big{)}+\xi_{p}(t), \tag{4a}\] \[\frac{d\phi_{i}}{dt}=-\omega_{i}(p_{i})-\frac{F_{\rm e}}{\sqrt{p_{i}}}\sin(2\omega_{i}t+2\phi_{i})+\Omega\sum_{j,j\neq i}^{N}J_{ij}\sqrt{\frac{p_{i}}{p_{j}}}\sin\big{(}\phi_{i}-\phi_{j}+\beta\big{)}+\xi_{\phi}(t)\,, \tag{4b}\] where \(\phi_{i}(t)\) and \(p_{i}(t)\) describe the time evolution of the oscillator phase and power, respectively. In both equations, the first term on the right-hand side describes the dynamics of an independent device. \(\Gamma_{+}\) and \(\Gamma_{-}\) are functions representing the positive and negative damping effects, respectively. 
First-order expansion of these functions results in \(\Gamma_{+,i}=\Gamma_{G}(1+Qp_{i})\) and \(\Gamma_{-,i}=\Gamma_{G}I_{\rm ratio}(1-p_{i})\), where \(Q\) is the nonlinear damping coefficient and \(I_{\text{ratio}}\) is the ratio between the applied current and the threshold current necessary to excite self-oscillations. These expressions have been shown to accurately describe experimental findings [13, 55, 56]. The frequency \(\omega_{i}\) of each oscillator is linked with its power \(p_{i}\) through the relation \(\omega_{i}=\omega_{0}+N_{0}p_{i}\), where \(\omega_{0}\) is the resonance frequency and \(N_{0}\) is the nonlinear frequency shift. The noise has been implemented as proposed in [13], and the results obtained with and without the application of the thermal field at room temperature are qualitatively equivalent. The term with amplitude \(F_{\text{e}}\) is the external signal used for the parametric locking. Finally, the third term models the effect of the coupling between the oscillators and depends on the coupling strength \(\Omega\) and the topology of the oscillator network, with \(J_{ij}\) being the corresponding element of the Ising matrix (see Eq. (1)). The \(\beta\) parameter represents the phase delay of the two coupled signals; it depends mainly on the coupling mechanism and, when considered, on the spatial distance among the oscillators [57]. These models can be used for the implementation of spintronic cIMs [43]. The parameters used for the simulations are based on experimental parameters of MTJ-based spintronic oscillators [58] having CoFeB as a free layer (see the whole set of parameters in Tab. 2). Fig. 3(c) shows the solution of the same problem as Fig. 3(a), obtained by solving Eqs. (4a) and (4b). In this case both the phases and the powers of the oscillators are shown, even if only the phase is used to evaluate the Max-Cut. We notice that, as opposed to the simulation of the Kuramoto model of Fig. 3(a), the exploration of the phase landscape (Fig. 3(c), upper panel) takes place during the first part of the annealing period, with the network dynamics stabilizing toward the end. The plot of the powers shows their relationship with the annealing schedule (Fig. 3(c), lower panel) and, like the phases, they tend to binarize. At the beginning of each new annealing cycle, the metastable state becomes unstable, and the powers drop rapidly. This simple parameter schedule improves the Max-Cut as the number of annealing cycles increases. ### C) Comparison between Kuramoto and Slavin models Fig. 3(b) shows the cut values as a function of time of the two models over the same instance. Both the Kuramoto and Slavin cIMs reach a maximum value of 136. Overall, the former gets to an energetic minimum more rapidly, while the latter has a wider exploration of the energy landscape. In both cases, the annealing schedule (also shown in Fig. 3(b)) is a key component of the energy-minimization process. A systematic comparison between the performance of the software implementations of cIMs (Kuramoto vs. Slavin models) has been performed considering Max-Cut instances with an increasing number of nodes (each node is an oscillator). The Max-Cut achieved for each instance is shown in Fig. 3(d) and reveals that the accuracy of the two models is comparable. In other words, the coupling between power and phase does not reduce the performance of a cIM. This is one of the main results of this work. 
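For completeness, a minimal sketch of how the coupled power-phase equations (4a)-(4b) can be integrated numerically is given below. The damping expansion follows the first-order expressions above, while the parameter values, time step and noise handling are illustrative placeholders rather than the CoFeB parameters of Tab. 2.

```python
import numpy as np

def slavin_step(p, phi, t, J, par, dt, rng):
    """One Euler step of the coupled power/phase equations (4a)-(4b) (sketch)."""
    Gp = par["Gamma_G"] * (1.0 + par["Q"] * p)              # positive damping Gamma_+
    Gm = par["Gamma_G"] * par["I_ratio"] * (1.0 - p)        # negative damping Gamma_-
    w = par["w0"] + par["N0"] * p                           # power-dependent frequency
    sp = np.sqrt(np.clip(p, 1e-9, None))
    pij = phi[:, None] - phi[None, :]                       # phi_i - phi_j for all pairs
    dp = (-2.0 * p * (Gp - Gm)
          + 2.0 * par["Fe"] * sp * np.cos(2.0 * w * t + 2.0 * phi)
          + 2.0 * par["Omega"] * np.sum(J * np.outer(sp, sp)
                                        * np.cos(pij - par["beta"]), axis=1))
    dphi = (-w
            - par["Fe"] / sp * np.sin(2.0 * w * t + 2.0 * phi)
            + par["Omega"] * np.sum(J * (sp[:, None] / sp[None, :])
                                    * np.sin(pij + par["beta"]), axis=1))
    p_new = p + dt * dp + par["noise"] * np.sqrt(dt) * rng.standard_normal(p.shape)
    phi_new = phi + dt * dphi + par["noise"] * np.sqrt(dt) * rng.standard_normal(phi.shape)
    return np.clip(p_new, 1e-9, None), phi_new

# Illustrative (non-physical) parameters; the cut is read out from the phases exactly
# as in the Kuramoto sketch above.
par = {"Gamma_G": 1.0, "Q": 2.0, "I_ratio": 1.5, "w0": 10.0, "N0": 10.0,
       "Fe": 0.5, "Omega": 0.2, "beta": 0.0, "noise": 0.01}
rng = np.random.default_rng(0)
J = 1.0 - np.eye(4)                                         # tiny fully connected demo graph
p, phi, t = 0.3 * np.ones(4), rng.uniform(0.0, np.pi, 4), 0.0
for _ in range(1000):
    p, phi = slavin_step(p, phi, t, J, par, dt=1e-3, rng=rng)
    t += 1e-3
```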
We have also performed a grid search of the optimal nonlinear frequency shift \(N\) and nonlinear damping coefficient \(Q\), running, for all the models, the same 100 random instances of Max-Cut problems on cubic graphs with 100 oscillators and averaging the results. We emphasize that the tunable parameter \(N\) measures the coupling between the power and phase dynamics and is a key difference between the two studied models [13]. For \(N\) equal to zero, the frequency of the oscillators in the Slavin model is independent of the power, as in the Kuramoto model. Tab. 3 shows that, in general, the parameter \(Q\) does not influence the Max-Cut score as much as the parameter \(N\), for which an optimal value is approximately \(N=10N_{0}\), where \(N_{0}\) is a reference value [43]. These results underline that the nonlinearities of spintronic oscillators might be beneficial for the realization of an IM hardware implementation and reveal that, by properly tuning the system, the Slavin model can achieve better results than the Kuramoto model. \begin{table} \begin{tabular}{|c|} \hline **Slavin model** \\ \hline \end{tabular} \end{table} Table 2: Parameters used in the simulations of the Slavin model Ising machine. Figure 3: (a) – (c) Example runs of maximum cut search of the same randomly generated cubic graph with 100 oscillators simulated using the Kuramoto model (a) and the Slavin model (c). The plots show the phases, used to evaluate the cut value, and the powers of the oscillators in the latter case. The cut values of both models evaluated in each time step (continuous line) are plotted in (b) together with the linear annealing schedule of the respective annealing parameters (dashed line). Both models use a saw-tooth annealing schedule. (d) A comparison between the performance of the Kuramoto model and the Slavin model finding the Max-Cut of randomly generated cubic graphs. For most of the reported values, the Max-Cut has been averaged over the solution of 100 different randomly generated graphs; for a higher number of oscillators only one run is considered. ## IV Modeling pIMs In pIMs, the nodes of the Ising model are represented by p-bits, which can be hardware-implemented with stochastic MTJs [9] or, as proposed in this work, with a deterministic clocked excitation of the randomness. The update process of the p-bits in a pIM is described by the following equations [5, 9], \[m_{i}(t)=\text{sgn}(\text{rand}(-1,+1)+\tanh(I_{i}(t)))\,, \tag{5a}\] where \[I_{i}(t)=I_{0}(t)\big{(}\sum_{j}J_{ij}m_{j}(t)+h_{i}\big{)}\,. \tag{5b}\] Here \(\mathbf{m}=\{m_{1},...,m_{N}\}\) is the state vector of the probabilistic bits (p-bits), \(I_{0}\) is a pseudo-temperature parameter used to control the annealing process and \(\mathbf{h}=\{h_{1},...,h_{N}\}\) is the bias vector, an additional term meant to represent external field excitations in the general formulation of an Ising Hamiltonian. By updating a system's p-bits in sequence, the state samples the energy landscape of the Ising Hamiltonian. Using common annealing strategies like simulated annealing [59] or parallel tempering [60, 61], the pIM solver drives the system to the ground state [5]. pIMs have been employed successfully with COP encodings using probabilistic spin logic (PSL) [4, 9]. PSL's strategy is to use invertible logic gates, i.e., Ising models whose energy function has its minima corresponding to the truth-table states of the gate. 
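As a reference for this update process, the sketch below implements the sequential p-bit dynamics of Eqs. (5a)-(5b) with a simple linear ramp of the pseudo-temperature \(I_{0}\). The ramp, the number of sweeps and the demo instance are illustrative choices rather than the schedules used for the benchmarks reported later.

```python
import numpy as np

def ising_energy(J, h, m):
    """Ising energy -sum_{i<j} J_ij m_i m_j + sum_i h_i m_i for symmetric J (cf. Eq. (6))."""
    return -0.5 * m @ J @ m + h @ m

def pim_solve(J, h, n_sweeps=2000, I0_max=2.0, rng=None):
    """Sequential p-bit updates, Eqs. (5a)-(5b), with a linear anneal of I0 (sketch)."""
    rng = rng or np.random.default_rng()
    N = J.shape[0]
    m = rng.choice([-1, 1], size=N)                  # random initial p-bit states
    best_m, best_E = m.copy(), ising_energy(J, h, m)
    for sweep in range(n_sweeps):
        I0 = I0_max * sweep / n_sweeps               # pseudo-temperature ramp (anneal)
        for i in rng.permutation(N):                 # update the p-bits one at a time
            Ii = I0 * (J[i] @ m + h[i])              # Eq. (5b)
            m[i] = 1 if rng.uniform(-1, 1) + np.tanh(Ii) >= 0 else -1   # Eq. (5a)
        E = ising_energy(J, h, m)
        if E < best_E:
            best_m, best_E = m.copy(), E
    return best_m, best_E

# Demo on a frustrated antiferromagnetic triangle (ground-state energy -1).
J = -(1.0 - np.eye(3))
h = np.zeros(3)
print(pim_solve(J, h, rng=np.random.default_rng(0)))
```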
The advantage of invertible logic gates is that, since they are transposed to an undirected graph, they no longer have a preferential input/output combination and can thus be operated in reverse. This allows for a conceptually simple way to solve COPs: by using a logic circuit where the standard logic gates are substituted by invertible logic gates, the solution can be reached by operating a simple circuit in reverse [9]. ## V A comparison between cIMs and pIMs The simulated performance of cIMs based on the Kuramoto model (similar results are also obtained with the Slavin model) and of pIMs was compared by attempting to solve several hard Max-Cut instances from the G-set graphs, generated by the machine-independent graph generator "rudy" of G. Rinaldi. Each instance was attempted one hundred times by both solvers and the cut value as a function of the iterations was recorded. Both solvers were operated using heuristically optimized parameters. Tab. IV summarizes the results of the comparison. For each instance, we report both the average Max-Cut score and the best one achieved among the one hundred runs. In addition, we also show the number of times the best result was reached, along with how often a solution close to the best one was obtained [48]. While there is variation between individual runs, the results clearly show that the software implementation of pIMs performs consistently better than that of cIMs. Fig. 4 shows the analysis of a single instance. The pIM runs (in red) reach higher-quality states than the cIM ones (in blue) and in substantially shorter times. However, it is important to note that we make the comparisons in terms of the number of computational steps performed. In this context, pIMs, with their discrete dynamics, explore new states at a faster rate than cIMs, whose dynamics is obtained as a numerical integration of continuous differential equations. In a potential hardware implementation, the performance of the two paradigms depends largely on the specific properties of the devices used for the implementation. In this context, we envision that the two paradigms can achieve comparable performance, and even effectively synergize if used in combination. In Tab. IV, 'Best' is the best solution found among the 100 trials of each instance, '# of best' is the number of times that best result was achieved, and '# 0.999% of best' is the number of times a state with a maximum cut within 99.9% of the best result was reached. 
\begin{table} \begin{tabular}{|l|l l|l|l|l|l|l|l|l|} \hline Instance & pIM & \multirow{2}{*}{pIM Average} & pIM & pIM & pIM & \multirow{2}{*}{cIM Average} & cIM & \multirow{2}{*}{cIM Average} & cIM & cIM & \multirow{2}{*}{cIM \# of best} & cIM & \multirow{2}{*}{cIM \# 0.999\%} \\ name & & & Best & \# of best & & \# of best & & \multicolumn{2}{c|}{of best} & & \multicolumn{2}{c|}{of best} \\ \hline g1 & 11571.06 & \(\pm\) 19.30 & 11624 & 2 & 4 & 11475.38 & \(\pm\) 25.01 & 11540 & 1 & 1 \\ \hline g2 & 11571.90 & \(\pm\) 17.28 & 11620 & 2 & 3 & 11481.84 & \(\pm\) 31.89 & 11575 & 1 & 2 \\ \hline g3 & 11573.58 & \(\pm\) 19.20 & 11617 & 1 & 4 & 11473.73 & \(\pm\) 29.95 & 11550 & 1 & 1 \\ \hline g4 & 11601.52 & \(\pm\) 23.56 & 11646 & 2 & 10 & 11497.78 & \(\pm\) 37.83 & 11650 & 1 & 1 \\ \hline g5 & 11585.88 & \(\pm\) 18.77 & 11622 & 2 & 11 & 11491.06 & \(\pm\) 30.18 & 11545 & 1 & 8 \\ \hline g6 & 2128.77 & 20.56 & 2175 & 1 & 4 & 2029.92 & \(\pm\) 29.10 & 2113 & 1 & 1 \\ \hline g7 & 1965.63 & \(\pm\) 18.10 & 2000 & 1 & 1 & 1866.66 & \(\pm\) 31.79 & 1923 & 3 & 4 \\ \hline g8 & 1967.06 & \(\pm\) 16.84 & 1998 & 1 & 2 & 1878.82 & \(\pm\) 29.02 & 1934 & 1 & 1 \\ \hline g9 & 2007.54 & \(\pm\) 18.50 & 2047 & 1 & 1 & 1908.26 & \(\pm\) 37.49 & 1976 & 1 & 1 \\ \hline g10 & 1958.16 & \(\pm\) 17.60 & 1999 & 1 & 1 & 1864.34 & \(\pm\) 33.02 & 1946 & 1 & 1 \\ \hline g11 & 556.04 & \(\pm\) 2.83 & 562 & 3 & 3 & 483.76 & \(\pm\) 7.67 & 502 & 1 & 1 \\ \hline g12 & 547.42 & \(\pm\) 3.09 & 554 & 4 & 4 & 478.90 & \(\pm\) 8.36 & 492 & 6 & 6 \\ \hline g13 & 572.14 & \(\pm\) 2.72 & 580 & 1 & 1 & 503.06 & \(\pm\) 9.51 & 522 & 1 & 1 \\ \hline g14 & 3045.17 & \(\pm\) 4.43 & 3053 & 2 & 19 & 2967.60 & \(\pm\) 10.39 & 2991 & 1 & 2 \\ \hline g15 & 3029.81 & \(\pm\) 6.25 & 3049 & 1 & 2 & 2947.04 & \(\pm\) 9.63 & 2967 & 1 & 3 \\ \hline g16 & 3031.10 & \(\pm\) 5.15 & 3043 & 1 & 5 & 2953.35 & \(\pm\) 10.34 & 2976 & 1 & 3 \\ \hline g17 & 3026.31 & \(\pm\) 5.02 & 3042 & 1 & 1 & 2948.26 & \(\pm\) 10.73 & 2967 & 3 & 6 \\ \hline g18 & 971.01 & \(\pm\) 9.43 & 988 & 1 & 1 & 902.29 & \(\pm\) 18.50 & 941 & 1 & 1 \\ \hline g19 & 885.29 & \(\pm\) 9.57 & 904 & 1 & 1 & 815.12 & \(\pm\) 18.30 & 856 & 1 & 1 \\ \hline g20 & 919.75 & \(\pm\) 10.93 & 940 & 1 & 1 & 847.73 & \(\pm\) 17.25 & 907 & 1 & 1 \\ \hline g21 & 906.61 & \(\pm\) 9.89 & 928 & 1 & 1 & 838.35 & \(\pm\) 19.32 & 881 & 1 & 1 \\ \hline g22 & 13278.04 & \(\pm\) 21.72 & 13336 & 1 & 2 & 13060.89 & \(\pm\) 35.50 & 13141 & 1 & 4 \\ \hline g23 & 13282.67 & \(\pm\) 18.81 & 13333 & 1 & 3 & 13071.73 & \(\pm\) 34.54 & 13149 & 1 & 3 \\ \hline g24 & 13271.28 & \(\pm\) 18.02 & 13317 & 1 & 5 & 13063.72 & \(\pm\) 32.00 & 13127 & 1 & 5 \\ \hline g25 & 13273.72 & \(\pm\) 20.04 & 13318 & 1 & 9 & 13075.01 & \(\pm\) 34.39 & 13164 & 1 & 2 \\ \hline g26 & 13263.88 & \(\pm\) 18.27 & 13314 & 1 & 3 & 13063.50 & \(\pm\) 32.36 & 13130 & 1 & 4 \\ \hline g27 & 3272.67 & \(\pm\) 19.28 & 3008 & 1 & 4 & 3066.10 & \(\pm\) 38.15 & 3153 & 1 & 2 \\ \hline g28 & 3234.82 & \(\pm\) 17.94 & 3272 & 2 & 3 & 3026.49 & \(\pm\) 34.10 & 3146 & 1 & 1 \\ \hline g29 & 3334.90 & \(\pm\) 14.52 & 3369 & 2 & 4 & 3130.31 & \(\pm\) 33.79 & 3222 & 1 & 1 \\ \hline g30 & 3348.80 & \(\pm\) 19.76 & 3392 & 1 & 1 & 3137.66 & \(\pm\) 36.18 & 3217 & 1 & 1 \\ \hline g31 & 3245.56 & \(\pm\) 19.65 & 3280 & 1 & 4 & 3040.71 & \(\pm\) 34.89 & 3120 & 1 & 1 \\ \hline g32 & 1382.90 & \(\pm\) 5.41 & 1394 & 2 & 2 & 12026.6 & \(\pm\) 14.35 & 1242 & 1 & 1 \\ \hline g33 & 1358.48 & \(\pm\) 4.41 & 1368 & 4 & 4 & 1181.68 & \(\pm\) 13.76 & 1216 & 1 & 1 \\ \hline g34 
& 1361.26 & \(\pm\) 4.60 & 1372 & 1 & 1 & 1186.12 & \(\pm\) 14.71 & 1218 & 1 & 1 \\ \hline g35 & 7633.99 & \(\pm\) 8.70 & 7655 & 1 & 2 & 7430.10 & \(\pm\) 19.89 & 7475 & 1 & 1 \\ \hline g36 & 7626.45 & \(\pm\) 10.29 & 7648 & 1 & 8 & 7422.34 & \(\pm\) 16.98 & 7460 & 1 & 3 \\ \hline g37 & 7634.26 & \(\pm\) 9.28 & 758 & 1 & 5 & 7435.55 & \(\pm\) 17.99 & 7493 & 1 & 1 \\ \hline g38 & 7634.18 & \(\pm\) 8.48 & 7 \\ \hline \end{tabular} \end{table} ## VI Invertible logic gates with modified Kuramoto model To implement invertible logic gates with cIM models, we consider an additional bias vector \(h\) in the model of Eq. (3). The extra bias favors one of the two binarized phases for each oscillator, making the probability of selecting a specific Ising spin state tunable, similarly to the corresponding term of Eq. (5a) for pIMs [9]. It corresponds to the full Ising Hamiltonian with local applied fields, \[H(\mathbf{s})=-\sum_{ij}J_{ij}s_{i}s_{j}+\sum_{i}h_{i}s_{i}\,. \tag{6}\] The generalized Kuramoto model describing the Hamiltonian of Eq. (6) is given by \[\frac{d}{dt}\phi_{i}(t)=-K\sum_{j=1,j\neq i}^{N}J_{ij}\sin[\phi_{i}(t)-\phi_{j}(t)]-K_{h}h_{i}\sin[-\phi_{i}(t)]-S\sin[2\phi_{i}(t)]+\xi(t)\,, \tag{7}\] where \(K_{h}\) represents the bias coupling strength, associated with a sinusoidal term at the same frequency as the oscillator. The added term mimics the PSL pivot structure used in pIMs, which locally biases the switching probability in a sigmoidal fashion and is independent of the states of the other nodes. To study the properties of the proposed modified model, we ran an extensive parametric study on an AND gate Ising encoding to estimate the optimal values of the three coupling parameters \(K\), \(K_{h}\) and \(S\) in the Kuramoto model (Eq. (7)). Figure 4: A comparison of the pIM and the Kuramoto-based cIM model on the instance "g1.rud" from the G-set. The instance has 800 nodes and 19176 edges. The red (blue) area represents an ensemble of 100 solving attempts using pIM (cIM). For both models, the black line represents the average of all solving attempts. The annealing parameters were chosen after a systematic study to ensure fairness in the comparison. To assess whether the final state of each schedule belonged to the truth table of the AND gate, we ran a total of \(10^{4}\) annealing schedules with randomized initial conditions and recorded their final state. After finding the optimal set of parameters for a balanced AND gate, we considered composite circuits of AND gates to assess whether the parametrization was scalable. Fig. 5(a) shows that the probability of achieving a state compatible with the clamped output of the AND gate matches the expected values. This means that, when the output is equal to 1 (orange bars in the graph), 111 is the only visited state; on the other hand, when it is clamped to 0 (blue bars in the graph), the states 000, 100, and 010 are equally explored. In Fig. 5(b), the same parameters were used to test two connected AND gates acting as a 3-input AND gate. While the selected states are mostly correct, the probability distribution of the final states does not match the energy distribution of the model, even accounting for statistical fluctuations. In the time evolution of the phase-space representation of Fig. 5(c), we observed that the final state of a run with more than one ground state depends almost exclusively on its initial phase configuration. 
This intuitively resembles an attraction/repulsion electromagnetic-like model, with the charges being the binary configurations of the phase space. The attraction/repulsion effect of a configuration depends on how energetically advantageous it is. The oscillators act as a test charge cast in a random position \(\mathbf{P_{0}}\) of the phase space and subject to a force equal to: \[\mathbf{F_{T}}=\sum_{n=1}^{2^{N}}\frac{C_{n}}{d(\mathbf{P_{0}},\mathbf{S_{n}})}\left(\mathbf{P_{0}}-\mathbf{S_{n}}\right), \tag{8}\] where \(\mathbf{S_{n}}\) is the \(N\)-dimensional vector that represents the position in the phase space of the n\({}^{\text{th}}\) configuration and \(C_{n}\) is its attraction/repulsion coefficient. \(d(\mathbf{x},\mathbf{y})\) is the \(N\)-dimensional Euclidean distance between two points. In Fig. 5(d) we compare the results of the cIM implementation with the effective model introduced above. The similarities between the results show that the explorable energy landscape of the cIM implementation is indeed very sensitive to the initial phase of each oscillator, as described by the model of Eq. (8). This may be due to the annealing process for cIMs, which only allows classical trajectories in the phase space and produces a particle-like behaviour of the relative phases. Non-linearities of the phase dynamics, such as the ones described by the Slavin model, and modified annealing processes could overcome the observed limitation, improving the implementation of invertible logic gates with cIMs. Figure 5: (a)-(b) 2-input and 3-input AND gate simulations with optimized parameters \(K=1\), \(K_{h}=0.5\), \(S=1\). Each set of colored bars represents a simulation setup with a different bias \(h\). In orange (blue) the output is clamped to \(+1\) (\(-1\)), so \(h_{3}\) is increased (decreased) by 10. The height of the bars is the number of times a randomized simulation has ended its annealing schedule in that state. The truth-table states to be visited with the output clamped to \(+1\) (\(-1\)) are highlighted in green (red). Composing circuits seems to affect the probability distribution, hindering scalability. (c) Three-dimensional representation of final state vs initial conditions of the unbiased AND gate simulated in (a). (d) Comparison between Kuramoto model oscillators and attraction model simulations. The similarities suggest that oscillators tend to gravitate toward the closest solution in the phase space. ## VII Maximum cut encoding of Max-3SAT The goal of the maximum satisfiability problem (Max-SAT) is to satisfy as many clauses as possible of an instance in conjunctive normal form. As mentioned earlier, COPs in the same complexity class can be mapped to each other. This allows, in principle, solving Max-SAT problems by solving Max-Cut instances and vice versa. Due to the success of cIMs in solving Max-Cut, we propose an alternative method to the use of invertible logic gates to solve Max-SAT problems by relying on the mapping between the two COPs. In particular, we demonstrate the solution of Max-3SAT (i.e., each clause has exactly three variables) by solving the equivalent Max-Cut. We emphasize that any Max-kSAT, with \(k>3\), can be mapped to a Max-3SAT [62]. The mapping between Max-3SAT and Max-Cut for a single clause is graphically shown in Fig. 6(a). In the chosen map, we introduce additional ancillary nodes to mediate the interaction between the variable nodes to ensure that each clause with three variables is only satisfied if all three variables are true. 
By carefully choosing the couplings of the graph and by keeping one of the auxiliary oscillators fixed to phase zero, we ensure that a maximum cut solution corresponds to a satisfied clause. We mention that the inverse is not true, as some sets of variable values can satisfy a clause while not corresponding to the maximum cut of the graph. As illustrated by the example in Fig. 6(b), to join several clauses, one connects the shared variable nodes to the other clause nodes, inverting the sign of their couplings if the variable appears negated in that clause. Moreover, if two variables appear together in two different clauses, we consider the coupling as the sum of their couplings in each clause. This mapping was used to obtain the graph of a simple Max-3SAT instance, "uf20-01.cnf", with 20 variables and 91 clauses. This instance was tested a total of 1000 times, each with a randomized initial configuration. The results are shown in Fig. 6(c) and 6(d). The system can reach the optimal solution, or one close to it, with high probability, although further testing is required to evaluate the scaling capabilities of the chosen map. In Fig. 6(d), we investigated how commonly the degeneracies caused by non-maximum-cut states in satisfied clauses could lead to final guesses that reach the optimal solution while not achieving the maximum cut (few values with zero cost and maximum cut equal to 32). The opposite, achieving the maximum cut with a sub-optimal state, is indeed possible in a graph with several clauses, differently from the single-clause case. As an example, if two clauses contain the same two variables, once with concordant and once with discordant signs, the coupling between the two cancels out and becomes zero, slightly altering the balance of those clauses' topology. However, as shown by the red line in Fig. 6(d), the trend of the mapping shows that the degeneracy does not strongly influence the result of the Max-Cut, i.e., there is a strong correlation between the maximum cut and the Max-3SAT solution cost. The degeneracy of the map could be removed by considering sparse maps. Figure 6: (a) Graph representation of a single (A \(\vee\) B \(\vee\) C) logic clause. The maximum cut is only achieved in states where the clause is satisfied. (b) Graph of a toy Max-3SAT instance with an example of a variable shared in more than one logic clause. (c) – (d) States reached by the Kuramoto model system of oscillators for the simple Max-3SAT instance "uf20-01.cnf", with 20 variables and 91 clauses. We performed 1000 simulations with random initial states. The solution cost represents the number of logic clauses not satisfied at the last step of each simulation. The red dashed line is a guide to the eye to demonstrate the correlation between the maximum cut and the solution cost of the Max-SAT. ## Summary and Conclusions In this manuscript, we have compared two different strategies which can be used for hardware implementations of Ising machines: i) cIMs, based on a system of coupled oscillators, and ii) pIMs, based on a system of coupled p-bits, proposing a spintronic implementation of the Ising spin for each of them. From a modeling point of view, spintronic cIMs can be described by the Kuramoto model and the Slavin model. The latter includes the coupling between the oscillators' power and phase via the nonlinear frequency shift. 
We compared the performance of the two models at the software level and showed that the power-phase coupling may allow for a slightly better exploration of the cIM phase space, which can lead to higher accuracy. We also performed a comparison between the accuracy of pIMs and cIMs, showing that the former achieves better results. The use of invertible logic gates implemented with cIMs can underperform because of a classical-like particle behavior of the oscillators' phases. In addition, we proposed an alternative method to the use of invertible logic gates for the solution of Max-SAT, which exploits the effectiveness of cIMs in solving Max-Cut instances. The possibility of mapping between different COPs in the same complexity class allows one to leverage the optimal behavior of different cIM architectures and is promising for the implementation of cIMs that solve a wide range of COPs. The study of new optimized architectures and annealing schemes, which may include the combination of pIMs and cIMs, will stimulate the development of fast, scalable, accurate and energy-efficient hardware implementations of Ising machines. ## Acknowledgements This work was supported under the project number 101070287 -- SWAN-on-chip -- HORIZON-CL4-2021-DIGITAL-EMERGING-01, the project PRIN 2020LWPKH7 "The Italian factory of micromagnetic modeling and spintronics" funded by the Italian Ministry of University and Research (MUR), and by the PETASPIN association (www.petaspin.com). DR thanks the support from the project D.M. 10/08/2021 n. 1062 (PON Ricerca e Innovazione) funded by the Italian MUR. KYC acknowledges the support from a CNR YIP grant.
2306.05410
LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses. Consequently, there is growing interest in extending NeRF models to jointly optimize camera poses and scene representation, which offers an alternative to off-the-shelf SfM pipelines which have well-understood failure modes. Existing approaches for unposed NeRF operate under limited assumptions, such as a prior pose distribution or coarse pose initialization, making them less effective in a general setting. In this work, we propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural radiance fields with relaxed assumptions on pose configuration. Our approach operates in a local-to-global manner, where we first optimize over local subsets of the data, dubbed mini-scenes. LU-NeRF estimates local pose and geometry for this challenging few-shot task. The mini-scene poses are brought into a global reference frame through a robust pose synchronization step, where a final global optimization of pose and scene can be performed. We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior. This allows us to operate in the general SE(3) pose setting, unlike the baselines. Our results also indicate our model can be complementary to feature-based SfM pipelines as it compares favorably to COLMAP on low-texture and low-resolution images.
Zezhou Cheng, Carlos Esteves, Varun Jampani, Abhishek Kar, Subhransu Maji, Ameesh Makadia
2023-06-08T17:56:22Z
http://arxiv.org/abs/2306.05410v1
# LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs ###### Abstract A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses. Consequently, there is growing interest in extending NeRF models to jointly optimize camera poses and scene representation, which offers an alternative to off-the-shelf SfM pipelines which have well-understood failure modes. Existing approaches for unposed NeRF operate under limiting assumptions, such as a prior pose distribution or coarse pose initialization, making them less effective in a general setting. In this work, we propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural radiance fields with relaxed assumptions on pose configuration. Our approach operates in a local-to-global manner, where we first optimize over local subsets of the data, dubbed "mini-scenes." LU-NeRF estimates local pose and geometry for this challenging few-shot task. The mini-scene poses are brought into a global reference frame through a robust pose synchronization step, where a final global optimization of pose and scene can be performed. We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior. This allows us to operate in the general SE(3) pose setting, unlike the baselines. Our results also indicate our model can be complementary to feature-based SfM pipelines as it compares favorably to COLMAP on low-texture and low-resolution images. ## 1 Introduction NeRF [35] was introduced as a powerful method to tackle the problem of learning neural scene representations and photorealistic view synthesis, and subsequent research has focused on addressing its limitations to extend its applicability to a wider range of use cases (see [55, 60] for surveys). One of the few remaining hurdles for view synthesis in the wild is the need for accurate localization. As images captured in the wild have unknown poses, these approaches often use structure-from-motion (SfM) [49, 41] to determine the camera poses. There is often no recourse when SfM fails (see Fig. 7 for an example), and in fact, even small inaccuracies in camera pose estimation can have a dramatic impact on photorealism. Few prior attempts have been made to reduce the reliance on SfM by integrating pose estimation directly within the NeRF framework. However, the problem is severely underconstrained (see Fig. 1) and current approaches make additional assumptions to make the problem tractable. For example, NeRF\(---\)[57] focuses on pose estimation in forward-facing configurations, BARF [30] initialization must be close to the true poses, and GNeRF [33] assumes a 2D camera model (upright cameras on a hemisphere). We propose an approach for jointly estimating the camera pose and scene representation from images from a single scene while allowing for a more general camera configuration than previously possible. Conceptually, our approach is organized in a local-to-global learning framework using NeRFs. In the _local_ processing stage we partition the scene into overlapping subsets, each containing only a few images (we call these subsets _mini-scenes_). Knowing images in a mini-scene are mostly nearby is what makes the joint estimation of pose and scene better conditioned than performing the same task globally. 
In the _global_ stage, the overlapping mini-scenes are registered in a common reference frame through pose synchronization, followed by jointly refining all poses and learning the global scene representation. This organization into mini-scenes requires learning from a few local unposed images. Although methods exist for few-shot novel view synthesis [62, 28, 39, 21, 13, 12], and separately for optimizing unknown poses [30, 33, 57], the combined setting presents new challenges. Our model must reconcile the ambiguities prevalent in the local unposed setting - in particular the mirror symmetry ambiguity [40], where two distinct 3D scenes and camera configurations produce similar images under affine projection. We introduce a Local Unposed NeRF (LU-NeRF) model to address these challenges in a principled way. The information from the LU-NeRFs (estimated poses, confidences, and mirror symmetry analysis) is used to register all cameras in a common reference frame through pose synchronization [20, 43, 24], after which we refine the poses and optimize the neural scene representations using all images. In summary, our key contributions are: * A local-to-global pipeline that learns both the camera poses in a general configuration and a neural scene representation from only an unposed image set. * LU-NeRF, a novel model for few-shot local unposed NeRF. LU-NeRF is tailored to the unique challenges we have identified in this setting, such as reconciling mirror-symmetric configurations. Each phase along our local-to-global process is designed with robustness in mind, and the consequence is that our pipeline can be successful even when the initial mini-scenes contain frequent outliers (see Sec. 4 for a discussion on different mini-scene construction techniques). The performance of our method surpasses prior works that jointly optimize camera poses and scene representation, while also being flexible enough to operate in the general SE(3) pose setting, unlike prior techniques. Our experiments indicate that our pipeline is complementary to the feature-based SfM pipelines used to initialize NeRF models, and is more reliable in low-texture or low-resolution settings. ## 2 Related work **Structure from motion (SfM).** Jointly recovering 3D scenes and estimating camera poses from multiple views of a scene is the classic problem in Computer Vision [25]. Numerous techniques have been proposed for SfM [41, 49] with unordered image collections and visual-SLAM for sequential data [54, 38]. These techniques are largely built upon local features [32, 45, 22, 52] and require accurate detection and matching across images. The success of these techniques has led to their widespread adoption, and existing deep-learning approaches for scene representation and novel view synthesis are designed with the implicit assumption that the SfM techniques provide accurate poses in the wild. For example, NeRF [35] and its many successors (_e.g_. [5, 6, 37]) utilize poses estimated offline with COLMAP [49, 31]. However, COLMAP can fail on textureless regions and low-resolution images. The local-to-global framework proposed in this work is inspired by the "divide-and-conquer" SfM and SLAM methods [8, 66, 23, 15, 19, 65, 18]. **Neural scene representation with unknown poses.** BARF [30] and GARF [16] jointly optimize neural scene and camera poses, but require good initialization (_e.g_. within \(15^{\circ}\) of the groundtruth). 
NeRF\(--\)[57], X-NeRF [42], SiNeRF [59], and SaNeRF [14] only work on forward-facing scenes; SAMURAI [10] aims to handle coarsely specified poses (octant on a sphere) using a pose multiplexing strategy during training; GNeRF [33] and VMRF [63] are closest to our problem setting. They do not require accurate initialization and work on \(360^{\circ}\) scenes. However, they make strong assumptions about the pose distribution, assuming 2DoF and a limited elevation range. Performance degrades when the constraints are relaxed. Approaches that combine visual SLAM with neural scene representations [67, 51, 44] typically rely on RGB-D streams and are exclusively designed for video sequences. The use of depth data significantly simplifies both scene and pose estimation processes. There are several parallel efforts to ours in this field. For instance, NoPe-NeRF [9] trains a NeRF without depending on pose priors; however, it relies on monocular depth priors. In a manner akin to our approach, LocalRF [34] progressively refines camera poses and radiance fields within local scenes. Despite this similarity, it presumes monocular depth and optical flow as supervision, and its application is limited to ordered image collections; MELON [29] optimizes NeRF with unposed images using equivalence class estimation, yet it is limited to **SO**(3); RUST [46] and FlowCam [50] learn a generalizable neural scene representation from unposed videos. In summary, prior work on neural scene representation with unknown poses assumes either small perturbations [30, 16, 57, 59], a narrow distribution of camera poses [33, 63], or depth priors [9, 34]. To the best of our knowledge, we are the first to address the problem of neural rendering with unconstrained unknown poses for both ordered and unordered image collections. Figure 1: Jointly optimizing camera poses and scene representation over a full scene is difficult and underconstrained. This example is the Lego scene with 100 images from the Blender dataset. **Left**: When provided noisy observations of the true camera locations, BARF [30] cannot converge to the correct poses. **Middle**: GNeRF [33] assumes a 2D camera representation (azimuth, elevation) which is accurate for the Blender dataset which has that exact configuration (upright cameras on a sphere). However, GNeRF also requires an accurate prior distribution on poses for sampling. The Lego images live on one hemisphere, but when GNeRF's prior distribution is the full sphere it also fails to localize the images accurately. **Right**: Our full model, LU-NeRF+Sync, is able to recover poses almost perfectly in this particular example. By taking a local-to-global approach, we avoid having strong assumptions about camera representation or pose priors. Following [30, 33] pose errors for each method are reported after optimal global alignment of estimated poses to ground truth poses. To put the translation errors in context, the Blender cameras are on a sphere of radius \(4.03\). **Few-shot scene estimation.** Learning scene representations from a few images has been studied in [62, 21, 13, 12, 28, 39]. PixelNeRF [62] uses deep CNN features to construct NeRFs from few or even a single image. MVSNeRF [12] leverages cost-volumes typically applied in multi-view stereo for the same task, while DS-NeRF [21] assumes depth supervision is available to enable training with fewer views. Our approach to handle the few-shot case relies on a standard neural field optimization with strong regularization, similar to RegNeRF [39]. 
**Unsupervised pose estimation.** There are a number of techniques that can learn to predict object pose from categorized image collections without explicit pose supervision. Multiple views of the same object instance are used in [56, 26] to predict the shape and pose while training is self-supervised through shape rendering. RotationNet [27] uses multiple views of an object instance to predict both poses and class labels but is limited to a small set of discrete uniformly spaced camera viewpoints. The multi-view input is relaxed in [36, 58] which operates on single image collections for a single category. UNICORN [36] learns a disentangled representation that includes pose and utilizes cross-instance consistency at training, while an assumption about object symmetry guides the training in [58]. ## 3 Methodology An illustration of our approach is shown in Figure 2. At the core of our method is the idea of breaking up a large scene into mini-scenes to overcome the non-convexity of global pose optimization without accurate initialization. When the camera poses in the mini-scene are close to one another, we are able to initialize the optimization with all poses close to the identity and optimize for relative poses. In Sec. 4, we describe how we construct mini-scenes, and below we describe the process of local shape estimation followed by global synchronization. Figure 2: **Proposed method.** (A) shows the ground truth locations of each image (we show this only for visualization). Edge colors show the grouping within mini-scenes. We create a mini-scene for each image, though here only three mini-scenes are highlighted; the ones centered at image 2 (red edges), image 5 (green edges), and image 7 (blue edges). Depending on the strategy used to create mini-scenes, the grouped images can contain outlier images far from the others. (B) LU-NeRF takes unposed images from a single mini-scene and optimizes poses without any constraints on the pose representation. (C) The reference frame and scene scale learned by LU-NeRF is unique to each mini-scene. This, plus estimation errors, means the relative poses between images in overlapping mini-scenes will not perfectly agree. To register the cameras in a common reference frame, we utilize pose synchronization which seeks a globally optimal positioning of all cameras from noisy relative pose measurements – this is possible since we have multiple relative pose estimations for many pairs of images. (D) Lastly, we jointly refine the synchronized camera poses and learn a scene representation. ### 3.1 Local pose estimation The local pose estimation step takes in mini-scenes of typically three to five images and returns the relative poses between the images. The model, denoted LU-NeRF-1, is a small NeRF [35] that jointly optimizes the camera poses as extra parameters as in BARF [30]. In contrast with BARF, in this stage, we are only interested in a rough pose estimation that will be improved upon later, so we aim for a lightweight model with faster convergence by using small MLPs and eliminating positional encoding and view dependency. As we only need to recover relative poses, without loss of generality, we freeze one of the poses at identity and optimize all the others. Few-shot radiance field optimization is notoriously difficult and requires strong regularization [39]. 
Besides the photometric \(\ell_{2}\) loss proposed in NeRF, we found that adding a loss term for the total variation of the predicted depths over small patches is crucial for the convergence of both camera pose and scene representation: \[\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}\in\mathcal{R}}\sum_{i,j=1}^{K}\big{(}d_{\theta}(\mathbf{r}_{i,j})-d_{\theta}(\mathbf{r}_{i,j+1})\big{)}^{2}+\big{(}d_{\theta}(\mathbf{r}_{i,j})-d_{\theta}(\mathbf{r}_{i+1,j})\big{)}^{2}\] where \(\mathcal{R}\) is a set of ray samples, \(d_{\theta}(\mathbf{r})\) is the depth rendering function for a ray \(\mathbf{r}\), \(\theta\) are the model parameters and camera poses, \(K\) is the patch size, and \((i,j)\) is the pixel index. ### 3.2 Mirror-symmetry ambiguity The ambiguities and degeneracies encountered when estimating 3D structure have been extensively studied [53, 7, 17]. One particularly relevant failure mode of SfM is distant small objects, where the perspective effects are small and can be approximated by an affine transform, and one cannot differentiate between reflections of the object around planes parallel to the image plane [40]. When enforcing multi-view consistency, this effect, known as mirror-symmetry ambiguity, can result in two different configurations of structure and motion that cannot be told apart (see Fig. 3). We notice, perhaps for the first time, that neural radiance fields with unknown poses can degenerate in the same way. One potential solution to this problem would be to keep the two possible solutions and drop one of them when new observations arrive. This is not applicable to our case since at this stage the only information available is the few images of the mini-scene. To mitigate the issue, we introduce a second stage for the training, denoted LU-NeRF-2. We take the estimated poses in the world-to-camera frame \(\{R_{i}\}\) from LU-NeRF-1, and the reflected cameras \(\{R_{\pi}R_{i}\}\), where \(R_{\pi}\) is a rotation by \(\pi\) around the optical axis. Note that this is different from post-multiplying by \(R_{\pi}\), which would correspond to a global rotation that wouldn't change the relative poses that we are interested in at this stage. We then train two new models, with the scene representation started from scratch and poses initialized as the original and reflected sets, and resolve the ambiguity by picking the one with the smallest photometric training loss. The rationale is that while the issue is caused by LU-NeRF-1 ignoring small perspective distortions, the distortions can be captured in the second round of training, which is easier since one of the initial sets of poses is expected to be reasonable. ### 3.3 Local to global pose estimation After training LU-NeRF-2, we have sets of relative poses for each mini-scene in some local frame. The problem of finding a global alignment given a set of noisy relative poses is known as pose synchronization or pose averaging. It is formalized as optimizing the set of \(N\) global poses \(\{P_{i}\}\) given relative pose observations \(P_{ij}\), \[\operatorname*{argmin}_{P\in\mathbf{SE}(3)^{N}}\sum_{(i,j)}d(P_{ij},P_{j}P_{i}^{\top}), \tag{1}\] for some metric \(d\colon\mathbf{SE}(3)\times\mathbf{SE}(3)\mapsto\mathbb{R}\). The problem is challenging due to non-convexity and is an active subject of research [4, 43, 20]. We use the Shonan rotation method [20] to estimate the camera rotations, followed by a least-squares optimization of the translations. **Global pose and scene refinement.** After pose averaging, the global pose estimates are expected to be good enough such that any method that requires cameras initialized close to the ground truth should work (_e.g_. BARF [30], GARF [16]). We apply BARF [30] at this step, which results in both accurate poses and a scene representation accurate enough for realistic novel view synthesis. We refer to the full pipeline as LU-NeRF+Sync. Figure 3: **Mirror symmetry ambiguity. Under affine projection, a 3D scene (\(S_{0}\)) and its reflection (\(S_{1}\)) across a plane (\(R\)) will produce the same image viewed from affine camera \(C\). The consequence of this is that two distinct 3D scenes and camera poses will produce similar images. In this illustration, scene \(S_{0}\) viewed from camera \(P_{0}\) will produce the same image as the reflected scene \(S_{1}\) viewed from \(P_{1}\). While this relationship is exact in the affine model, we observe that the mini-scene configuration with respect to the scene structure is often well-approximated as affine and training can converge to the near-symmetric solutions. Our LU-NeRF model is explicitly designed to anticipate this failure mode. This illustration is inspired by a similar diagram in [40].**
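As an illustration of the depth regularizer used when training LU-NeRF-1, below is a minimal PyTorch-style sketch of the patch-wise total-variation term. The patch layout and the mean normalization are simplifications of the expression in Sec. 3.1, and the 10x weighting relative to the photometric loss follows the implementation details given in Sec. 4; the official implementation may differ.

```python
import torch

def depth_tv_loss(depth_patches: torch.Tensor) -> torch.Tensor:
    """Total-variation regularizer on rendered depth over K x K ray patches.

    depth_patches: tensor of shape (num_patches, K, K) holding the rendered
    depths d_theta(r_ij) for patches of rays sampled from the training images.
    """
    dh = depth_patches[:, :, 1:] - depth_patches[:, :, :-1]   # horizontal neighbor differences
    dv = depth_patches[:, 1:, :] - depth_patches[:, :-1, :]   # vertical neighbor differences
    return (dh ** 2).mean() + (dv ** 2).mean()

def lu_nerf1_loss(rgb_pred, rgb_gt, depth_patches, depth_weight=10.0):
    """Illustrative combined objective: photometric L2 plus the weighted depth TV term."""
    photometric = ((rgb_pred - rgb_gt) ** 2).mean()
    return photometric + depth_weight * depth_tv_loss(depth_patches)
```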
**Global pose and scene refinement.** After pose averaging, the global pose estimates are expected to be good enough such that any method that requires cameras initialized close to the ground truth should work (_e.g_. BARF [30], GARF [16]). We apply BARF [30] at this step, which results in both accurate poses and a scene representation accurate enough for realistic novel view synthesis. We refer to the full pipeline as LU-NeRF+Sync. Figure 3: **Mirror symmetry ambiguity. Under affine projection, a 3D scene (\(S_{0}\)) and its reflection (\(S_{1}\)) across a plane (\(R\)) will produce the same image viewed from affine camera \(C\). The consequence of this is that two distinct 3D scenes and camera poses will produce similar images. In this illustration, scene \(S_{0}\) viewed from camera \(P_{0}\) will produce the same image as the reflected scene \(S_{1}\) viewed from \(P_{1}\). While this relationship is exact in the affine model, we observe that the mini-scene configuration with respect to the scene structure is often well-approximated as affine and training can converge to the near-symmetric solutions. Our LU-NeRF model is explicitly designed to anticipate this failure mode. This illustration is inspired by a similar diagram in [40].** ## 4 Experiments Our method as described in Sec. 3 starts from a set of mini-scenes that covers the input scene. We evaluate different approaches to constructing mini-scenes, each with different assumptions on the input. The most strict assumption is that we have an _optimal graph_ connecting each image to its nearest neighbors in camera pose space. While this seems unfeasible in practice, some real-life settings approximate this, for example, when images are deliberately captured in a pattern such as a grid, or if they are captured with camera arrays. In a less constrained version of the problem, we assume an _ordered image collection_, where the images form a sequence, from where a line graph is trivially built. This is a mild assumption that is satisfied by video data, as well as the common setting of a camera physically moving around a scene sequentially capturing images. In the most challenging setting, we assume nothing about the scene and only take an _unordered image collection_. Building graphs from unordered image collections.We evaluate two simple ways of building graphs from unordered image collections. The first is to use deep features from a self-supervised model trained on large image collections. We use the off-the-shelf DINO model [11, 2] to extract image features and build the graph based on the cosine distance between these features. The second is to simply use the \(\ell_{1}\) distance in pixel space against slightly shifted and rotated versions of the images. Neither of these approaches is ideal. The deep features are typically coarse and too general, failing to detect specific subtle changes on the scene. The \(\ell_{1}\) distance has the opposite issue, where small changes can result in large distances. We provide a detailed analysis in the Appendix. Exploring other methods for finding a proxy metric for the relative pose in image space is a direction for future work. **Datasets.** We compare with existing published results on the synthetic-NeRF dataset [35]. We use the training split of the original dataset as our _unordered image collection_ which consists of 100 unordered images per 3D scene. 
We use the \begin{table} \begin{tabular}{l c c c c c c c c c c c c} & \multicolumn{2}{c}{Chair} & \multicolumn{2}{c}{Hotdog} & \multicolumn{2}{c}{Lego} & \multicolumn{2}{c}{Mic} & \multicolumn{2}{c}{Drums} & \multicolumn{2}{c}{Ship} \\ \cline{2-13} & rot & trans & rot & trans & rot & trans & rot & trans & rot & trans & rot & trans \\ \hline COLMAP & 0.12 & 0.01 & 1.24 & 0.04 & 2.29 & 0.10 & 8.37 & 0.18 & 5.91 & 0.28 & 0.17 & 0.01 \\ +BARF & 0.14 & 0.01 & 1.20 & 0.01 & 1.88 & 0.09 & 3.73 & 0.15 & 8.71 & 0.54 & 0.15 & 0.01 \\ \hline VMRF \(120^{\circ}\) & 4.85 & 0.28 & – & – & 2.16 & 0.16 & 1.39 & 0.07 & 1.28 & 0.08 & 16.89 & 0.71 \\ GNeRF \(90^{\circ}\) & 0.36 & 0.02 & 2.35 & 0.12 & 0.43 & 0.02 & 1.87 & 0.03 & 0.20 & 0.01 & 3.72 & 0.18 \\ GNeRF \(120^{\circ}\) & 4.60 & 0.16 & 17.19 & 0.74 & 4.00 & 0.20 & 2.44 & 0.08 & 2.51 & 0.11 & 31.56 & 1.38 \\ GNeRF \(150^{\circ}\) & 16.10 & 0.76 & 23.53 & 0.92 & 4.17 & 0.36 & 3.65 & 0.26 & 5.01 & 0.18 & – & – \\ \hline GNeRF \(180^{\circ}\) (2DOF) & 24.46 & 1.22 & 36.74 & 1.46 & 8.77 & 0.53 & 12.96 & 0.66 & **9.01** & 0.49 & – & – \\ Ours (3DOF) & **2.64** & **0.09** & **0.24** & **0.01** & **0.09** & **0.00** & **6.68** & **0.10** & 12.39 & **0.23** & – & – \\ \end{tabular} \end{table} Table 1: **Camera pose estimation on unordered image collection.** GNeRF [33] and VMRF [63] constrain the elevation range, where the maximum elevation is always \(90^{\circ}\). For example, GNeRF \(120^{\circ}\) only samples elevations in \([-30^{\circ},90^{\circ}]\). The \(180^{\circ}\) variations don’t constrain elevation and are closest to our method, but they are still limited to 2 degrees of freedom for assuming upright cameras. Bold numbers indicate superior performance between the bottom two rows, which are the fairest comparison among NeRF-based methods, although our method is still solving a harder 3DOF problem versus 2DOF of GNeRF. We outperform GNeRF in all but one scene in this comparison. COLMAP [49] results in its best possible scenario are shown for reference (higher resolution images and assuming optimal graph to set unregistered poses to the closest registered pose). COLMAP+BARF runs a BARF refinement on top of these initial results, and even in this best-case scenario, our method still outperforms it in some scenes, which shows that LU-NeRF can complement COLMAP and work in scenes COLMAP fails. Our model fails on the Ship scene due to outliers in the connected graph; GNeRF with fewer constraints also fails on it. We provide a detailed error analysis on the Drums scene in the Appendix. Figure 4: **Camera pose estimation on unordered image collections.** The performance of GNeRF drops dramatically when the pose prior is expanded beyond the true distribution. In comparison, our method does not rely on any prior knowledge of pose distribution. first 8 images from the validation set as our test set for the novel view synthesis task, following prior works [33, 63] To evaluate on image sequences, where the order of images is known, we further render a Blender _ordered image collection_ with 100 images along a spiral path per scene. The images are resized to \(400\times 400\) in our experiments. We also evaluate on real images from the object-centric videos in Objectron [1]. The dataset provides ground truth poses computed using AR solutions at 30fps, and we construct a wider-baseline dataset by subsampling every 15th frame and selecting videos with limited texture (Fig. 7). 
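Referring back to the mini-scene graph construction for unordered collections, a minimal sketch of grouping images from precomputed per-image descriptors (e.g., off-the-shelf DINO features) is shown below. The neighborhood size of five and the feature source are illustrative assumptions.

```python
import numpy as np

def build_mini_scenes(features: np.ndarray, scene_size: int = 5):
    """Group images into overlapping mini-scenes by cosine distance between descriptors.

    features: (N, D) array with one descriptor per image; one mini-scene is
    created per image, consisting of that image and its nearest neighbors.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos_dist = 1.0 - f @ f.T                       # pairwise cosine distances
    np.fill_diagonal(cos_dist, np.inf)             # exclude self-matches
    mini_scenes = []
    for i in range(len(f)):
        neighbors = np.argsort(cos_dist[i])[: scene_size - 1]
        mini_scenes.append([i, *neighbors.tolist()])
    return mini_scenes

# Example with random stand-in descriptors for 10 images.
print(build_mini_scenes(np.random.default_rng(0).random((10, 384)))[0])
```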
**Evaluation metrics.** We evaluate the tasks of camera pose estimation and novel view synthesis. For camera pose estimation, we report the camera rotation and translation error using Procrustes analysis as in BARF [30]. For novel view synthesis, we report the PSNR, SSIM, and LPIPS [64]. **Baseline methods.** We compare with GNeRF [33], VMRF [63], and COLMAP [49] throughout our experiments. GNeRF samples camera poses from a predefined prior pose distribution and trains a GAN-based neural rendering model to build the correspondence between the sampled camera poses and 2D renderings. The method provides accurate pose estimation under _proper_ prior pose distribution. However, its performance degrades significantly when the prior pose distribution doesn't match the groundtruth. VMRF attempts to relieve the reliance of GNeRF on the prior pose distribution but still inherits its limitations. In our experiments, we evaluate with the default pose priors of GNeRF on the NeRF-synthetic dataset, _i.e_., azimuth \(\in[0^{\circ},360^{\circ}]\) and elevation \(\in[0^{\circ},90^{\circ}]\), and also on less constrained cases. COLMAP works reliably in texture-rich scenes but may fail dramatically on texture-less surfaces. **Implementation details.** We use a compact network for LU-NeRF to speed up the training and minimize the memory cost. Specifically, we use a 4-layer MLP without positional encoding and conditioning on the view directions. We stop the training early when the change of camera poses on mini-scenes is under a predefined threshold. To resolve the mirror symmetry ambiguity (Sec. 3.2), we train two additional LU-NeRFs for a fixed number of training iterations (50k by default). The weight of the depth regularization is 10 times larger than the photometric \(\ell_{2}\) loss throughout our experiments. More details are in the Appendix. ### Unordered Image Collections **Camera pose estimation.** Tab. 1 compares our method to GNeRF, VMRF, and COLMAP in the camera pose estimation task. 
GNeRF achieves high pose estimation accuracy when the elevation angles are uniformly sampled from a \(90^{\circ}\) \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Chair} & \multicolumn{3}{c}{Drums} & \multicolumn{3}{c}{Lego} & \multicolumn{3}{c}{Mic} \\ \cline{2-13} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline GNeRF \(90^{\circ}\) & 31.30 & 0.95 & 0.08 & 24.30 & 0.90 & 0.13 & 28.52 & 0.91 & 0.09 & 31.07 & 0.96 & 0.06 \\ GNeRF \(120^{\circ}\) & 25.01 & 0.89 & 0.15 & 20.63 & 0.86 & 0.20 & 22.95 & 0.85 & 0.16 & 23.68 & 0.93 & 0.11 \\ GNeRF \(150^{\circ}\) & 22.18 & 0.88 & 0.20 & 19.05 & 0.83 & 0.27 & 21.39 & 0.84 & 0.18 & 23.22 & 0.92 & 0.13 \\ VMRF \(120^{\circ}\) & 26.05 & 0.90 & 0.14 & 23.07 & 0.89 & 0.16 & 25.23 & 0.89 & 0.12 & 27.63 & 0.95 & 0.08 \\ VMRF \(150^{\circ}\) & 24.53 & 0.90 & 0.17 & 21.25 & 0.87 & 0.21 & 23.51 & 0.86 & 0.14 & 24.39 & 0.94 & 0.10 \\ \hline GNeRF \(180^{\circ}\) (2DOF) & 21.27 & 0.87 & 0.23 & 18.08 & 0.81 & 0.33 & 18.22 & 0.82 & 0.24 & 17.22 & 0.86 & 0.32 \\ VMRF \(180^{\circ}\) (2DOF) & 23.18 & 0.89 & 0.16 & 20.01 & 0.84 & 0.29 & 21.59 & 0.83 & 0.18 & 20.29 & 0.90 & 0.22 \\ Ours (3DOF) & **30.57** & **0.95** & **0.05** & **23.53** & **0.89** & **0.12** & **28.29** & **0.92** & **0.06** & **22.58** & **0.91** & **0.08** \\ \hline \hline \end{tabular} \end{table} Table 2: **Novel view synthesis on unordered collections.** Our method outperforms the baselines on most scenes while being more general for considering arbitrary rotations with 3 degrees-of-freedom. Here we quote the baseline results from VMRF [63], where _hotdog_ is not available. We provided the results on all scenes (including _hotdog_) using the public source code of GNeRF in the Appendix. Figure 5: **Novel view synthesis on unordered image collections**. GNeRF makes assumptions on the elevation range, where the maximum elevation is always \(90^{\circ}\). For instance, GNeRF \(150^{\circ}\) only samples elevations in [-60\({}^{\circ}\), \(90^{\circ}\)]. The \(180^{\circ}\) variations don’t constrain elevation and are closest to our method, but they are still limited to \(2\) degrees of freedom for assuming upright cameras. The performance of GNeRF drops as prior poses are less constrained. Please zoom into the figure to see the details in the renderings. interval; however, its performance drops significantly when the range of elevation is enlarged. Our method outperforms GNeRF in most scenes when the prior pose distribution is unknown, since we do not require any prior knowledge of the camera poses. Fig. 4 provides the visualization of the estimated camera poses from GNeRF under different prior pose distributions and our method. Tab. 3 shows the number of images COLMAP registers out of 100 in each scene. COLMAP is sensitive to image resolution, and its performance drops significantly on low-resolution images. For instance, COLMAP only registers 15 images out of 100 on the Mic scene when the image size is \(400\times 400\). Our method provides accurate pose estimation for all cameras given \(400\times 400\) images. Tab. 1 also reports how COLMAP performs in the pose estimation task on the Blender scenes. 
We use the most favorable settings for COLMAP - \(800\times 800\) images and set the poses of unregistered cameras to the poses of the nearest registered camera, assuming the _optimal graph_ is known, while our method makes no such assumption. Nevertheless, our model achieves better performance than COLMAP in some scenes, even when a BARF refinement is applied to initial COLMAP results. This shows that LU-NeRF complements COLMAP by working in scenes where COLMAP fails. **Novel view synthesis.** Fig. 5 and Tab. 2 show our results in the task of novel view synthesis on unordered image collections. The results are consistent with the quantitative pose evaluation - our model outperforms both VMRF and GNeRF when no priors on pose distribution are assumed. ### Analysis This section provides additional analysis of our approach. All the experiments discussed below were conducted on the unordered image collection. See the Appendix for an extended discussion. **Mirror symmetry ambiguity.** Tab. 7 shows the performance of our full method with and without the proposed solution to the mirror-symmetry ambiguity (Sec. 3.2). Resolving the ambiguity improves performance consistently, confirming the importance of this component to our pipeline. For closer inspection, we present qualitative results for LU-NeRF with and without ambiguity resolution for select mini-scenes in Fig. 8. Fig. 8 presents a visual comparison between LU-NeRF with and without the proposed solution to the mirror-symmetry ambiguity. Without the ambiguity resolution, the predicted depths are reflected across a plane parallel to the image plane (having the effect of inverted disparity maps), and the poses are reflected across the center camera of a mini-scene. Our LU-NeRF-2 rectifies the predicted geometry and local camera poses, which effectively resolves the ambiguity. ## 5 Discussion In this work, we propose to estimate the neural scene representation and camera poses jointly from an unposed image collection through a process of synchronizing local unposed NeRFs. Unlike prior works, our method does not rely on a proper prior pose distribution and is flexible enough to operate in general **SE**(3) pose settings. Our framework works reliably in low-texture or low-resolution images and thus complements the feature-based SfM algorithms. Our pipeline also naturally exploits sequential image data, which is easy to acquire in practice. One limitation of our method is the computational cost, which can be relieved by recent advances in neural rendering [55]. Another limitation is the difficulty in building graphs for unordered scenes, which is a promising direction for future work. ## 6 Acknowledgements We thank Zhengqi Li and Mehdi S. M. Sajjadi for fruitful discussions. The research is supported in part by NSF grants #1749833 and #1908669. Our experiments were partially performed on the University of Massachusetts GPU cluster funded by the Mass. Technology Collaborative. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Ambiguity & Chair & Hotdog & Lego & Mic & Drums \\ \hline w/o resolution & 39.14 & 138.9 & 0.48 & 107.9 & 11.35 \\ w/ resolution & **4.24** & **0.23** & **0.07** & **0.84** & **0.05** \\ \hline \hline \end{tabular} \end{table} Table 7: **Mirror symmetry ambiguity.** The mean rotation error in degrees for our pipeline (starting with the optimal graph), with and without the proposed strategy to resolve the ambiguity. 
Figure 8: **Mirror symmetry ambiguity.** For specific mini-scenes, we present renderings, disparity maps, PSNRs between the renderings and the groundtruth, and relative rotation errors (_lower is better_) for LU-NeRF with and without the proposed solution to the mirror-symmetry ambiguity. Brightness is inversely related to depth in the disparity map. The groundtruth depth maps are not available with the dataset. Figure 7: **Camera pose estimation on textureless scenes.** COLMAP fails to register any cameras in these Objectron scenes. Ground truth cameras are in purple, our predictions in blue.
2308.13906
A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension
As drones become increasingly prevalent in human life, they also raise security concerns such as unauthorized access and control, as well as collisions and interference with manned aircraft. Therefore, the ability to accurately detect and distinguish between different drones holds significant implications for coverage extension. Assisted by machine learning, radio frequency (RF) detection can recognize the type and flight mode of drones based on the sampled drone signals. In this paper, we first utilize the Short-Time Fourier Transform (STFT) to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information. Then, we employ a Convolutional Neural Network (CNN) built with a ResNet structure to perform multi-class classification. Our experimental results show that the proposed ResNet-STFT achieves higher accuracy and faster convergence on the extended dataset. Additionally, it exhibits balanced performance compared to other baselines on the raw dataset.
Zixiao Zhao, Qinghe Du, Xiang Yao, Lei Lu, Shijiao Zhang
2023-08-26T15:43:39Z
http://arxiv.org/abs/2308.13906v1
A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension ###### Abstract As drones become increasingly prevalent in human life, they also raises security concerns such as unauthorized access and control, as well as collisions and interference with manned aircraft. Therefore, ensuring the ability to accurately detect and identify between different drones holds significant implications for coverage extension. Assisted by machine learning, radio frequency (RF) detection can recognize the type and flight mode of drones based on the sampled drone signals. In this paper, we first utilize Short-Time Fourier Transform (STFT) to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information. Then, we employ a Convolutional Neural Network (CNN) built with ResNet structure to achieve multi-class classifications. Our experimental results show that the proposed ResNet-STFT can achieve higher accuracy and faster convergence on the extended dataset. Additionally, it exhibits balanced performance compared to other baselines on the raw dataset. radio frequency (RF) detection, short-time fourier transform (STFT), convolutional neural network (CNN), drone detection and identification. ## I Introduction Nowadays, the applications of drones, which are also known as unmanned aerial vehicles (UAVs), have been penetrated into every aspects of human life, including aerial photography, plant protection, military, etc. The global civil drone industry is expected to reach about 21.6 billion U.S. dollars by 2027. Military use has previously accounted for much of drone use, but the industry is increasingly entering commercial, scientific, and agricultural usage. While drones offer numerous benefits and opportunities, they also present several security concerns that need to be addressed [1], raising challenges for coverage extension. One of the primary concerns is the potential for unauthorized access and control of drones. Hackers or malicious individuals could attempt to gain control over a drone by exploiting vulnerabilities in its communication systems or flight controls. This can lead to misuse of the drone for illegal activities or sabotage. Another concern arises from the growing number of drones in the airspace, which increases the risk of collisions and interference with manned aircraft. Unauthorized or unregulated drone flights can pose risks to aviation safety, especially near airports or in restricted airspace. An example occurred in Oct. 2017 that a civil aircraft collided with a drone as the former was approaching the airport near Quebec City, Canada. Lastly, data breaches and privacy issues are also hidden troubles. Drones often capture and transmit data during their operations, including images, videos, and other sensor readings. If proper security measures are not in place, there is a risk of data breaches, where the captured information can be intercepted or accessed by unauthorized parties. This raises concerns about privacy violations and the potential misuse of sensitive data [2]. Thus as drones become increasingly prevalent, ensuring the ability to accurately detect and identify between different drones holds significant implications for secure coverage extension. In order to reduce or eliminate the threats posed by illegal drone flights, there are four main detection methods: optical detection, acoustic detection, radar detection, and radio frequency (RF) detection [3]. 
Compared with the other three detection methods, the advantages of RF detection which will be achieved based on the captured communication signals include: it can detect drones of any size and distance, i.e., within line of sight or beyond line of sight, and can also be used to identify the flight mode of drones, such as flying, hovering, recording, etc [4]. In this paper, we design a deep network to accurately detect the presence of drone signals and identify various drone states, taking into full consideration the characteristics of sampled radio signals. The contributions of this work include: 1. We extend the raw dataset considering the situation that several different types of drones are coexisting and simultaneously transmitting signals. Correspondingly, we design extra classification tasks which have not been discussed in previous research. 2. We employ short-time fourier transform (STFT) algorithm to extract two-dimensional features, i.e., time-domain and frequency-domain. They can provide more information of hidden correlations for classification by feeding into two-dimensional convolutional neural network (2D CNN). 3. Our experiments show that the proposed ResNet-STFT algorithm is able to achieve 98.7% accuracy in seven-class classification based on the extended dataset. Moreover, it can also achieve faster convergence compared with the one-dimensional baseline method. The rest of the paper is organized as follows. Section II introduces the related works. Section III describes the dataset and extended version. Section IV proposes our methodology including feature extraction and classification network. Section V presents the experimental results. Finally, Section VI concludes the work. ## II Related Works Machine learning has been widely used in drone detection. Nie. et al. in [3] extracted fractal dimension, axially integrated bispectra, and square integrated bispectra as UAV radio frequency (RF) fingerprint. The principal component analysis (PCA) algorithm was applied to reduce the dimensionality of features, then machine learning classifier achieved UAV identification. Medaiyese. et al. in [4] proposed a three-level hierarchical framework to detect UAV signals in the presence of other wireless signals such as Bluetooth and WiFi, which utilized a semi-supervised learning approach. DroneRF is a common drone dataset proposed by Allaham. et al. in [5]. Some research have been done based on this dataset. In [6] authors proposed a fully connected neural network with three hidden layers to classify drone signals. Allaham. et al. in [7] further proposed a multi-channel 1D CNN achieving higher accuracy performances. In [8] Raina. et al. proposed ConvLGBM model which combined the feature extraction capability of a CNN network with the high classification accuracy of the Light Gradient Boosting Machine (LightGBM). Considering feature engineering, Inani. et al. in [9] synthetically discussed several features in time and frequency-domain, including root mean square energy (RMS), discrete-fourier transform (DFT), power spectral density (DFT), etc. They further proposed a 1D CNN to identify the target drone signals with these extracted features. In this paper, we will extend the raw DroneRF dataset, and introduce feature extraction methods and classification networks working in two-dimensional space. ## III Dataset and Extended Version ### _DroneRF dataset_ M.S.Allaham et al. in [5] provide the DroneRF dataset which they collected from three types of drones and five types of function modes. 
Specifically, Parrot Bebop and Parrot AR Drone were both tested in Off, On and connected, Hovering, Flying, and Video recording modes. Another DJI phantom was merely tested in two modes, i.e., Off and On and connected. The RF receiver can capture the transmission signals between the drone and controller, which sampling bandwidth is equal to 40MHz. Thus for scanning 80MHz spectrum, the authors adopted two RF receivers, sampling for the lower 0\(\sim\)40MHz frequency band and higher 40\(\sim\)80MHz frequency band separately. Each sample of this dataset, which is also denoted as a _segment_, is composed of \(10^{7}\) time-domain points. The dataset has 227 segments in total, and the proportions of them can be listed in Table I. Moreover, the authors design binary unique identifier (BUI) rule to effectively name and distinguish the segments. The BUI number of each segment is composed of five binary digits. The first number indicates the presence of drone activities, the second and third numbers are used to characterize the three drone types, and the last two digits are corresponding to the four function modes. This scientific naming rule will also be practiced in the following dataset augmentation. In Table I the samples with BUI belonging to 0xxxx or 1xxxx come from the raw dataset. Others in Table I are extended samples, and we will introduce them below. ### _Data augmentation_ We further consider the extended condition that there are two types of drones coexisting and working in the same mode in a segment. Due to the records of Phantom merely contain On and connected mode in the raw DroneRF dataset, we mainly discuss this mode. Specifically, we add the time-domain points of Bebop & AR, Bebop & Phantom, and AR & Phantom respectively, indicating the two drones transmitting signals simultaneously. We show the detailed information in Table I. Moreover, we utilize BUI 2xxxx to name the extended data. The first number indicates there two types of drones are coexisting. The second and third numbers, i.e., 00, 01, and 10, are related to Bebop & AR, Bebop & Phantom, and AR & Phantom, respectively. The last two numbers indicates the function mode, which are merely set as 00 because of On and connected mode. ### _Classification cases_ According to the data types of dataset, we design five kinds of classification cases described as follows: * Case I: Binary classification. The classifier needs to detect whether a piece of given data contains drone signals. * Case II-A: Four-class classification. The classifier needs to identify none or which type of drone signal a piece of given data contains, including Bebop, AR, or Phantom. * Case II-B: Three-class classification. The classifier needs to identify which two types of drone signals a piece of given data contains, including Bebop & AR, Bebop & Phantom, or AR & Phantom. * Case II-C: Seven-class classification. The integrated case of Case II-A and Case II-B. * Case III: Ten-class classification. The classifier needs to identify none or which type of drone signal and its function mode a piece of given data contains. Note that Case I, Case II-A, and Case III are based on the raw DroneRF dataset which have been discussed by other papers [6, 7, 8, 9], while Case II-B and Case II-C are based on our extended dataset. The following experiments will compare our detection algorithm with other outstanding baselines and prove that ours can achieve balanced performances on general cases and better performances on extended cases. 
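As a rough illustration of the augmentation described above, an extended sample can be produced by summing the time-domain points of two single-drone segments recorded in the same mode. The sketch below is our own illustration; the file handling and single-drone file names are assumptions rather than the exact DroneRF storage format.

```python
import numpy as np

def mix_segments(seg_a: np.ndarray, seg_b: np.ndarray) -> np.ndarray:
    """Point-wise sum of two 10^7-point time-domain segments, simulating
    two drones transmitting simultaneously."""
    assert seg_a.shape == seg_b.shape
    return seg_a + seg_b

# Hypothetical usage: combine a Bebop segment and an AR segment, both recorded
# in the "On and connected" mode, and label the result with the extended BUI 20000.
# bebop = np.loadtxt("bebop_on_segment.csv", delimiter=",")
# ar    = np.loadtxt("ar_on_segment.csv", delimiter=",")
# np.savetxt("20000_segment.csv", mix_segments(bebop, ar), delimiter=",")
```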
## IV methodology ### _Feature extraction with Short-time fourier transform_ In [6] a segment was divided into several continuous but non-overlapping time-domain parts, and we denote this as simple-cutting (SCU) method. Compared with SCU, short-time fourier transform (STFT) algorithm compensates the information loss by introducing window function. Specifically, the overlapping part between two windows contributes to catch hidden but continuous time-domain information. The formulation of STFT can be depicted as follows: \[STFT(\tau,f)=\int_{-\infty}^{+\infty}x(t)h(t-\tau)e^{-j2\pi ft}d\tau, \tag{1}\] where \(x(t)\) denotes the target signal, and \(h(t-\tau)\) denotes a window function which is used for intercepting a _frame_ from \(x(t)\). Shifting \(\tau\) along timeline, STFT can get the fourier transform result of each intercepted frame, and finally, help to analyze the whole time-domain and frequency-domain information of \(x(t)\). The function _spectrogram(x,window,noverlap,nfft,fs)_ in MATLAB can fulfill the calculation of STFT. We set the parameter _nfft_ as 128, indicating 128-point FFT for each frame. The type and length of window function both can be altered. To ensure that the numbers of time-domain and frequency-domain points are both equal to 128, which will be convenient for the following CNN to recognize and classify, we set the length of _Hamming_ window as \(8.8\times 10^{4}\) and overlapping parts between two windows as \(10^{4}\) accordingly. The overlapping ratio is equal to 11.4%. We further compare the STFT algorithm with SCU method in visualization form. The upper-half spectrogram is generated with lower sampling band, while the lower-half spectrogram is generated with higher sampling band that were sampled simultaneously. Hereinafter, we will use this concatenation way to generate feature patterns from dataset. For instance, we choose a segment coming from BUI = 20000 which indicates that Bebop drone and AR drone are coexisting. As shown in Fig. 1, compared with SCU style (Fig. 1(b)), the frequency points with high energy in STFT style (Fig. 1(a)) tend to be more concentrative, and some hidden features are also enhanced. We believe that STFT is able to provide additional and valuable information for identification. Moreover, it is evident that high energy points mainly appear in the upper-half spectrogram. Authors in [10] discussed the performances with lower band, higher band, and both two bands, respectively. They believed that the lower sampling band has carried enough features for detecting and identification drones. Our spectrogram results confirm this conclusion well. ### _ResNet structure_ Convolutional neural network (CNN) is one of the most typical algorithm in deep learning, which contains convolutional computation and deep structure. The applications of CNN are mainly related to computer vision area, such as image classification, semantic segmentation, pose estimation, etc. Especially when the dimensions of input data are quite large, CNN is able to avoid explosive scale of network parameters by local and distributed convolutional computing. One-dimensional (1D) CNN is typically used for signal processing, in which the input data contains correlation characteristic on timeline and needs to be predicted or classified. Two-dimensional (2D) CNN has border application prospect, in which the input data is shaped into matrix form. The typical 2D CNN structures include AlexNet [11], VGG [12], GoogleNet [13], ResNet [14], etc. 
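Before turning to the classifier, the STFT feature extraction described above can also be sketched outside MATLAB. The Python sketch below is a rough equivalent under our own assumptions: the mapping of the Hamming window length (8.8e4), overlap (1e4), and 128-point FFT onto a manual sliding-window loop, and the modulo-nfft folding of each long windowed frame, are our interpretation rather than the authors' code.

```python
import numpy as np

def stft_feature_map(segment, win_len=88_000, overlap=10_000, nfft=128):
    """Slide a Hamming window over a raw 10^7-point segment and take a
    128-point FFT of each (folded) frame, yielding an nfft x n_frames map."""
    hop = win_len - overlap
    win = np.hamming(win_len)
    frames = []
    for start in range(0, len(segment) - win_len + 1, hop):
        frame = segment[start:start + win_len] * win
        # Fold the long windowed frame into nfft bins before the FFT
        # (one plausible reading of "128-point FFT per frame").
        pad = (-len(frame)) % nfft
        wrapped = np.pad(frame, (0, pad)).reshape(-1, nfft).sum(axis=0)
        frames.append(np.abs(np.fft.fft(wrapped)))
    # Log-magnitude is a common normalization before feeding a 2D CNN.
    return np.log1p(np.array(frames).T)   # shape: (nfft, n_frames) = (128, 128)
```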
In this paper we employ 2D CNN with ResNet structure to solve the problems. In computer vision, the depth of the network will increase with the larger scale of input features. Typically deeper network is able to achieve better performances, however, the new problem, i.e., gradient explosion and gradient vanishing, which will result in the failure of network convergence, becomes an obstacle to training such a network. ResNet structure is proposed to counter this problem. Firstly, the shortcut connection Fig. 1: These figures are spectrogram of signals when Bebop drone and AR drone are coexisting. The size is equal to 128*128. The horizontal axis indicates time-domain containing 128 intercepted parts, and the vertical axis indicates frequency-domain containing 128-points FFT. (a) is generated with STFT algorithm, and (b) is generated with SCU method. Compared with (b), some hidden features are exhibited in (a). of ResNet structure can accelerate the information propagation in the whole network. Secondly, the batch normalization (BN) is utilized to ensure that the input feature map of every convolutional layer follows the normal distribution with mean 0 and variance 1. The formulations of BN can be depicted as follows [15]: \[\begin{cases}y_{i}=\gamma\hat{x_{i}}+\beta;\\ \hat{x_{i}}=\frac{x_{i}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+ \epsilon}};\\ \sigma_{\mathcal{B}}^{2}=\frac{1}{m}\sum_{i=1}^{m}(x_{i}-\mu_{ \mathcal{B}})^{2};\\ \mu_{\mathcal{B}}=\frac{1}{m}\sum_{i=1}^{m}x_{i}.\end{cases} \tag{2}\] The input of BN is a mini-batch \(\mathcal{B}=\{x_{1\dots m}\}\), and for each value \(x_{i}\) of \(\mathcal{B}\), the output is denoted as \(y_{i}=\text{BN}_{\gamma,\beta}(x_{i})\). The \(\mu_{\mathcal{B}}\) and \(\sigma_{\mathcal{B}}^{2}\) represent the mean and variance value of mini-batch, respectively. The \(\hat{x_{i}}\) is the result of normalization. Moreover, the parameter \(\gamma\) and \(\beta\) need to be learned in the back propagation process, which reflect the mean and variance of the whole training dataset and can help to scale and shift the normalized \(\hat{x_{i}}\). ### _ResNet-CNN classifier for STFT feature map identification_ In this paper we develop a 72-layers ResNet, as shown in Fig. 2. The input of network is 128*128 STFT feature map which has been introduced in Section IV-A. For each orange rectangle, which represents the convolutional layer, the first number denotes the size of filter, the second denotes the number of filters, and the last number denotes stride. The internal structure of Resnet block is shown in Fig. 2(b). For Resnet block I, II, and III, the numbers of filters, i.e., the values of parameter \(c\_num\), are equal to 128, 256, and 512, respectively. Note that the shortcut connection of the upper-half part includes dimension reduction implemented by a convolutional layer (stride=2). Lastly, the traditional fully connected layer is replaced with the global average pooling layer, thus the feature map is directly fed into the softmax layer, assisting to reduce network parameters. ## V Experiment Results ### _Experimental setup_ #### V-A1 Baselines We employ the newly proposed work in [9] as baseline which has been verified to achieve good accuracy performances on the raw DroneRF dataset. It developed a 1D-CNN network and recommended to extract power spectral density (PSD) features for the raw signals. For brief descriptions, we simply denote this baseline as 1D-PSD, and denote our recommended algorithm as ResNet-STFT below. 
The formulation of PSD is as follows, where \(x_{N}\) denotes \(N\) sampling points: \[P(w)=\frac{1}{N}\left\|\sum_{n=0}^{N-1}x_{N}(n)e^{-jwn}\right\|^{2}. \tag{3}\] Fig. 2: The network structure of ResNet-CNN. (a) shows the complete structure, where the input is STFT feature map and the output is classification result. (b) shows the internal structure of Resnet block which decreases the height and width dimensions of input but increases the depth dimension. #### V-B2 Metrics We use the common metrics, i.e., accuracy, precision, recall, and F1-score, to evaluate the classification performances. Especially, for the multi-class classification, we calculate 'one vs rest' separately for each class and choose the average value to denote the overall performance. Moreover, we will also show the confusion matrix of each classifier. The formulations can be depicted as follows: \[\text{Accuracy} =\frac{TP+TN}{TP+TN+FP+FN}; \tag{4}\] \[\text{Precision} =\frac{TP}{TP+FP};\] \[\text{Recall} =\frac{TP}{P};\] \[\text{F1-score} =\frac{2*\text{Precision}*\text{Recall}}{\text{Precision}+ \text{Recall}}\] #### V-B3 Experiment settings The experiments have been carried out on MATLAB. STFT feature maps are extracted from the raw data by _spectrogram_ function. We use _deepNetworkDesigner_ toolbox to design and analyze the deep networks. The network training function is _trainNetwork_, where we set: optimizer\(->\)'_adam_', initial learn rate\(->\)0.0001, and L2 regularization factor\(->\)0.0001. Considering different scales of cases which have been defined in Section III-C, for the experiments based on the raw dataset, we set 'MiniBatchSize'=8 and 'MaxEpochs'=50, while for those based on the extended dataset, we set 'MiniBatchSize'=32 and 'MaxEpochs'=5. The loss function is _cross entropy_. We repetitively carry out each experiment for five times, and use the average as the final results. In each training-testing process, we divide the whole dataset into three sections, i.e., 80% training data, 10% validation data, and 10% testing data. ### _Experiment I: Classifications on raw dataset_ This experiment compares the performances of our proposed ResNet-STFT with baseline 1D-PSD on Case I, Case II-A, and Case III. Specifically, the cases are related to two, four, and ten-class classifications, respectively. The results can be found in Table II. In binary classification, two models both can achieve no error, i.e., always accurately detect whether the sample contains drone signals. In complex multi-class classifications, there comes some decline on accuracy, especially in ten-class classification. The factors that contribute to this situation include: 1) the small number of samples in each class is not sufficient for training; 2) correspondingly, the overfitting problem appears in deep network which has significant affects on the testing dataset. In the future work, we will further develop DroneRF dataset, such as adding noise, to compensate this problem. ### _Experiment II: classifications on extended dataset_ This experiment compares the performances of our proposed ResNet-STFT with baseline 1D-PSD on Case II-B and Case II-C. Specifically, the cases are related to three and seven-class classifications, respectively. The results can be found in Table III. In Case II-B, our network can precisely identify which two types of drones are coexisting. The accuracy and f-score performances of ResNet-STFT are a bit better than those of 1D-PSD. 
In Case II-C, the detection algorithm needs to identify which type(s) of drones a sample contains, i.e., none, one of the three single types, or one of the pairwise coexisting combinations. ResNet-STFT achieves 98.7% accuracy and also takes less time to converge. As shown in Fig. 3(a), the accuracy and loss curves of ResNet-STFT converge to relatively ideal levels within the first epoch. By comparison, after 5 epochs the 1D-PSD baseline still has not converged well and reaches only 81.1% accuracy. Moreover, we also show the confusion matrices in Fig. 4. ResNet-STFT misclassified two samples of class 2 as class 5, which represent single Bebop and Bebop & AR respectively. However, 1D-PSD seriously confused class 5 and class 6, which represent Bebop & AR and Bebop & Phantom respectively; the precision for class 5 and class 6 is 74% and 82.1%, respectively. Thus ResNet-STFT shows potential for identifying overlapping signals. On the one hand, STFT features capture more time-domain information, which serves as a good supplement to the frequency domain; especially when several signals overlap, exploiting the correlations between the time domain and the frequency domain becomes significant. On the other hand, although ResNet-STFT is much deeper than 1D-PSD, its number of parameters does not increase explosively; on the contrary, it converges faster than the baseline. ## VI Conclusions As drones become increasingly prevalent in human life, the ability to accurately detect and distinguish between different drones holds significant implications for public safety. We selected the common DroneRF dataset to verify our algorithm. Moreover, based on the raw dataset, we also considered the extended condition in which two types of drones coexist. We first utilized the Short-Time Fourier Transform (STFT) to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information. Then, we employed a Convolutional Neural Network (CNN) built with a ResNet structure to perform multi-class classification. Our experimental results showed that the proposed ResNet-STFT achieves higher accuracy and faster convergence on the extended dataset, notably 98.7% accuracy in seven-class classification. Additionally, it exhibits balanced performance compared to other baselines on the raw dataset. In future work, we will further augment the DroneRF dataset, for example by adding noise, to mitigate the problem that too few samples are available for training in the ten-class classification. Moreover, we will also consider employing and cascading other efficient machine learning classifiers after extracting abstract features with our ResNet-STFT.
2304.14364
CONSCENDI: A Contrastive and Scenario-Guided Distillation Approach to Guardrail Models for Virtual Assistants
A wave of new task-based virtual assistants has been fueled by increasingly powerful large language models (LLMs), such as GPT-4 (OpenAI, 2023). A major challenge in deploying LLM-based virtual conversational assistants in real-world settings is ensuring they operate within what is admissible for the task. To overcome this challenge, the designers of these virtual assistants rely on an independent guardrail system that verifies the virtual assistant's output aligns with the constraints required for the task. However, commonly used prompt-based guardrails can be difficult to engineer correctly and comprehensively. To address these challenges, we propose CONSCENDI. We use CONSCENDI to exhaustively generate training data with two key LLM-powered components: scenario-augmented generation and contrastive training examples. When generating conversational data, we generate a set of rule-breaking scenarios, which enumerate a diverse set of high-level ways a rule can be violated. This scenario-guided approach produces a diverse training set and provides chatbot designers greater control. To generate contrastive examples, we prompt the LLM to alter conversations with violations into acceptable conversations to enable fine-grained distinctions. We then use this data, generated by CONSCENDI, to train a smaller model. We find that CONSCENDI results in guardrail models that improve over baselines in multiple dialogue domains.
Albert Yu Sun, Varun Nair, Elliot Schumacher, Anitha Kannan
2023-04-27T17:39:11Z
http://arxiv.org/abs/2304.14364v2
CONSCENDI: A Contrastive and Scenario-Guided Distillation Approach to Guardrail Models for Virtual Assistants ###### Abstract A wave of new task-based virtual assistants has been fueled by increasingly powerful large language models, such as GPT-4 (OpenAI, 2023). These conversational agents can be customized to serve customer-specific use cases, but ensuring that agent-generated text conforms to designer-specified rules included in prompt instructions alone is challenging. Therefore, chatbot designers often use another model, called a _guardrail model_, to verify that the agent output aligns with their rules and constraints. We explore using a distillation approach to _guardrail models_ to monitor the output of the first model using training data from GPT-4. We find two crucial steps to our CONSCENDI process: scenario-augmented generation and contrastive training examples. When generating conversational data, we generate a set of rule-breaking scenarios, which enumerate a diverse set of high-level ways a rule can be violated. This scenario-guided approach produces a diverse training set of rule-violating conversations, and it provides chatbot designers greater control over the classification process. We also prompt GPT-4 to also generate contrastive examples by altering conversations with violations into acceptable conversations. This set of borderline, contrastive examples enables the distilled model to learn finer-grained distinctions between what is acceptable and what is not. We find that CONSCENDI results in guardrail models that improve over baselines. ## 1 Introduction The emergence of transformer-based (Vaswani et al., 2017) large language models (LLMs), such as GPT-4 (OpenAI, 2023) and PaLM (Chowdhery et al., 2022), have enabled highly-capable conversational agents. With this increase in natural language sophistication, agent designers must ensure both responsible usage and adherence to task-specific constraints. _Guardrail_ models have been designed to ensure these rules are enforced (Chen et al., 2022). Most of these systems primarily focus on preventing the generation of harmful text (OpenAI, 2020; Welbl et al., 2021; Glaese et al., 2022). While customized guardrails using models such as GPT-4 are powerful, they are often impractical for deployment in real-world settings. In addition to their high inference cost and latency, their performance relies solely on what can be included in a prompt. It is challenging to define rules through instructions and in-context examples broad enough to consider all possibilities, especially in a fixed-window context. For example, a rule prohibiting an agent from stating political opinions can guard against generating controversial text. Yet defining the intricacies of this rule is challenging - are widely-accepted statements acceptable, but more sectarian statements out-of-bounds? Model distillation promises to be a solution to this challenge. By using a small fined-tuned model, the need for a large static prompt during inference is eliminated. In addition, one can provide training examples that cover all potential ways in which a rule might be violated, yielding better results than adding in-prompt few-shot examples (Lester et al., 2021). On the other hand, we can look to GPT-4 to generate synthetic conversations containing violations and non-violations of specified rule sets. This removes the need to manually annotate data, which can be especially difficult given the challenge of anticipating the full variety of rule-violating scenarios. 
Yet, naively generating data from GPT-4 can also produce datasets that suffer from the same lack of breadth. Therefore, we propose a multi-stage data generation pipeline to ensure GPT-4 produces a broad, domain-specific dataset. We begin by prompting an LLM to generate a variety of scenarios that illustrate different ways a dialog agent might break each given rule. Scenarios can be added or removed from this set given the engineer's preferences, providing a granular level of control. Next, we use GPT-4 to simulate a conversation between a user and a dialog agent that violates the rule according to the provided scenario. This scenario-guided data generation method results in a more diverse set of examples compared to directly generating conversations. Furthermore, we employ a contrastive approach to generate non-violating conversations that are alterations of a conversation with violations (Uehara et al., 2020). In addition to directly generating non-violating conversations, contrastive example generation takes further advantage of GPT-4's generation capabilities and provides a richer dataset for model training. The combined dataset is used to fine-tune a GPT-3 instance to serve as a guardrail model. We show this distilled model can serve as a better guardrail model than prompt-based LLMs, providing a crucial tool for user-facing text generation tools. Our paper makes the following contributions: 1. We explore guard trails in the context of dialog systems. An example conversation with a violation is shown in Figure 1. We design a set of rules for three domains within the SGD (Rastogi et al., 2020) conversational dataset. 2. We propose a scenario-guided generation pipeline. This method enables the generation of diverse conversations by first generating diverse scenarios and using each individual scenario to generate conversations. 3. We explore generating contrastive examples by altering conversations with violations to not include a violation. 4. Our distillation approach produces fine-tuned models that can identify rule violations with high accuracy better than GPT-4, including on conversations guided by scenarios unseen during training. 5. We find using scenario-guided conversations and contrastive examples is important in producing an accurate distilled guardrail model. 6. We will release an open-source dataset1 with domain-specific rules drawn from the SGD dataset (Rastogi et al., 2020) so that this can serve as a guardrail benchmark. Footnote 1: [https://github.com/curai/curai-research/tree/main/CONSCENDI](https://github.com/curai/curai-research/tree/main/CONSCENDI) ## 2 Guardrails for Virtual Assistants We explore building _guardrail models_(Chen et al., 2022), which consists of a second model verifying that the generated text of a first model adheres to a set of rules. While some standards are likely incorporated into the model directly (Bender et al., 2021; Welbl et al., 2021; Ziegler et al., 2020; Glaese et al., 2022; OpenAI, 2020), the original models have not likely considered domain-specific behavior to avoid. In addition, there is utility in including a second level of verification in user-facing applications, where providing inaccurate or misleading text can cause serious harm. Figure 1: **Example guardrail task. In this example, a virtual assistant in the restaurant domain provides information about an ongoing promotion to the user, thereby breaking rule 2. 
The guardrail model uses the last 2 turns of the conversation (non-grayed text) classify the last two turns as a rule violation (which rule) or no violation.** We focus on building guardrails for conversational agents. In this setting, a model-based agent \(A\) is having a conversation with an end user \(U\) about a specific topic, as illustrated in Figure 1. A conversation \(C\) consists of a sequence of turns \(T\). Each turn consists of a user's message \(u_{t}\) and a response message from the assistant \(a_{t}\). The example in Figure 1 consists of three turns, each with two messages. A full conversation is therefore \[C=\{(u_{1},a_{1}),(u_{2},a_{2}),\ldots,(u_{T},a_{T})\}.\] We formulate the instructions of the guardrail model as a set of \(N\) rules \(R\) enumerated by a system designer, denoted as: \[R=\{r_{1},r_{2},\ldots,r_{N}\}.\] The goal of the guardrail model \(G\) is to check, at each turn \(a_{t}\) of the agent model \(A\), whether the potential output violates any of the designated rules. This is a multi-class classification problem, where we provide the last two turns of conversation \((u_{t},a_{t})\) as input of the guardrail model, and the output of the guardrail is either the number of the rule \(r\in\{1,2,\ldots,N\}\) violated, or _None_ if the agent model output conforms to all rules. We design our setting to provide the model with the last two turns \((u_{t},a_{t})\) because we find that providing only the assistant turn can miss important context in a conversation, while adding more turns can increase latency in practical settings. \[G((u_{t},a_{t}))=r\in\{None,1,2,\ldots,N\}.\] In the last turn of the example conversation in Figure 1, the virtual assistant breaks rule \(r=2\): Do not provide information on promotions, discounts, or special offers, related to the restaurant. The expected behavior of the agent model \(A\) varies by the outcome of the guardrail. ## 3 Model Distillation While large language models like GPT-4 have advantages in terms of generative capability, we propose that distilling a smaller model from GPT-4 provides better guardrails. In addition to reductions in cost and latency, training a model edge cases to be learned through data. This is easier to engineer than including them in the prompt, which can be challenging to accomplish with examples and written explanations. For example, consider rule 19 in Appendix Table 15, Do not provide information on modes of transportation that are not buses, such as trains or taxis. Handling edge cases for this rule may be challenging. In San Francisco, are chats about Trolleybuses acceptable, and chats about Light Rail a violation? Instead of expanding the definition of this rule or adding a specific example, we can add training data that captures all intricacies of a given rule to the training data. ### Scenario Generation Our multi-stage generation pipeline is shown in Figure 2. For each rule \(r\), we generate a set of _scenarios_ (Prompt 2). Each scenario represents a high-level reason why a rule might be violated. Consider the violated rule in Figure 1: Do not provide information on promotions, discounts, or special offers related to the restaurant. One scenario that was generated was: A user asks if any coupons are available Figure 2: Construction of guardrail models. Using GPT-4, we generate 3 types of conversations: **1. Violations**: conversations that violate our given rules, **2. 
Contrastive Nonviolations:** conversations that are identical to our generated violations but replace the rule-violating chatbot turn with a non-violating chatbot turn, and **3. Nonviolations:** conversations that don’t violate any of our rules. These newly-generated conversations are few-shot generated using example conversations from Rastogi et al. (2020)). for a particular restaurant. We use scenarios to ensure that the generated conversations will cover a broad set of possibilities, including edge cases. If we generate conversations without this step, these conversations are likely to omit tail scenarios. This also adds an additional layer of interpretability. A chatbot designer has the ability to add and remove scenarios to tailor the guardrail design. This is inspired by works that augment LLMs using information retrieved from a prior database Lewis et al. (2021). ### Conversation Generation As seen in Fig. 2, in the conversation generation step, we generate 3 different types of conversations to fine-tune our GPT-3 models: 1. **Violations**, 2. **Contrastive Nonviolations**, and 3. **Nonviolations**. Starting with **Violations**, using the scenarios generated above, we generate rule-violating synthetic user-agent conversations (Prompt 3). For each rule, we rotate through the 7-10 scenarios in a round-robin fashion and generate an equal amount of conversations for each rule. We generate the entire conversation and truncate it to the last 2 turns. We find that this generates more realistic conversations than prompting the model to just generate the last two turns of a hypothetical conversation. In addition to rule-violating conversations, we must generate non-rule-violating conversations. We produce these conversations in two ways. We create our **Contrastive Nonviolations** by taking each rule-violating conversation and removing just the virtual assistant's line that was a violation (\(a_{T}\)). This is replaced with a non-violating assistant utterance (Prompt 4). By using this contrastive learning approach Chuang et al. (2020); Uehara et al. (2020), we aim to generate non-violations similar to violations. As the entire conversation is the same up to the last message, this forces the model to focus on just the agent output. Finally, we also generate **Nonviolation** conversations by few-shot prompting GPT-4 to generate a conversation that does not violate any of the rules in our rule group. We slice these conversations at different turns in the conversations to give us a wide variety of non-violations throughout the conversation, which will allow the model to generalize throughout the progression of the conversation. We use this set of generated data to fine-tune GPT-3 models (ada, babbage, curie, davinci). ## 4 Datasets We demonstrate the efficacy of our approach to virtual assistants in 3 domains: flights, restaurants, and buses. These are drawn from the Schema Guided Dialogue (SGD) dataset's 20 schemas Rastogi et al. (2020). The SGD dataset contains conversations between a user and a task-based virtual assistant. In order to generate real conversations, we use several of the conversations in the SGD dataset as few-shot examples to generate conversations. The SGD dataset contains realistic conversations between a simulated virtual assistant and real crowd workers. We also diversify our dataset by randomizing the English levels (beginner, intermediate, advanced, proficient) of our users for each generation. 
We include the selected level in the conversational generation prompt (see Appendix Section A.1 for details). We design 7-8 rules for each schema; the full rulesets can be found in the appendix in Tables 13, 14, and 15. For simplicity, we choose rules that can be verified within the turns of a conversation. In this paper, we do not investigate rules that must be verified using an API. For instance, for a restaurant chatbot, we do not create rules such as Do not get the restaurant name and opening times incorrect, because \begin{table} \begin{tabular}{l|c c c} \hline \hline Domain & distinct@1/2/3 & Corr. \\ \hline Restaurants & 0.65 / 0.91 / 0.97 & 0.89 \\ Buses & 0.66 / 0.91 / 0.96 & 0.91 \\ Flights & 0.65 / 0.91 / 0.96 & 0.90 \\ \hline \hline \end{tabular} \end{table} Table 1: **Diversity and accuracy metrics of generated conversations.** We look at distinct@1/2/3 to evaluate the diversity of text within a conversation. For correctness, we measure the correlation of the labels in the generated conversations using Amazon Mechanical Turk Masters-certified human labelers. \begin{table} \begin{tabular}{l|c c c|c} \hline \hline & Train & Test\_ID & Test\_OOD & **Total** \\ \hline Rest. & 901 & 334 & 298 & 1533 \\ Bus & 946 & 351 & 255 & 1552 \\ Flights & 937 & 347 & 302 & 1586 \\ \hline **Total** & 2784 & 1032 & 855 & 4671 \\ \hline \hline \end{tabular} \end{table} Table 2: **Data splits for our generated datasets.** For each domain, we split up our conversations into a train, test, and OOD set. We finetune GPT-3 models, and we evaluate these models on the test and OOD datasets. that would require an API to rely on. We leave this for future work. We designed rules that do not overlap with each other for the purposes of clean multi-class classification, although this may be challenging in practice. We used GPT-4 to assist us in generating realistic domain-specific rules for this paper (see Appendix Prompt. 1). Some of our rules are inspired by the rules used in the DeepMind paper on designing Sparrow Glaese et al. (2022) to maximize helpfulness/harmlessness. Our final dataset statistics are shown in Table 2. While we do not have a separate development set for these domains, we developed our method on a separate domain dataset. For each domain, we generate roughly 500 violations, 500 non-violations, and 200 non-contrastive non-violations. Each non-contrastive non-violation conversation is split into 5 training examples at the first 5 turns: \(\{(u_{1},a_{1}),...,(u_{5},a_{5})\}\). In total, this gives us more than 4500 datapoints (pairs of turns) across all 3 domains. The final numbers for non-violating conversations and violating conversations can be found in Appendix Table 10. We also set up an out-of-distribution (OOD) scenario analysis by holding out 3 random scenarios from the train set. The data split between in-distribution (ID) and out-of-distribution (OOD) scenarios can be found in Table 2. We hold out 3 random scenarios for each domain from fine-tuning to represent out-of-distribution examples. The remaining 7 scenarios are used for our in-distribution examples. Maintaining the proportion of rules and scenarios in both ID train and test datasets, we stratify split the ID dataset into train/test sets with a 73:27 ratio. We assess in-conversation diversity and accuracy metrics in Table 1. We assess the generative diversity within each conversation using distinct@kLi et al. (2016), a standard conversation generation diversity metric. 
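For reference, distinct@n is commonly computed as the fraction of n-grams in the generated text that are unique; the following minimal per-conversation sketch reflects one common formulation, assumed here rather than taken from the authors' code.

```python
def distinct_n(tokens, n):
    """Fraction of n-grams in `tokens` that are unique (distinct@n)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# e.g. distinct_n("I want to book a table for two".split(), 2)
```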
With almost 100% distinct@2 and distinct@3, we find that the text generated within our conversations are diverse. While our datasets are automatically generated and labeled, we verify a subset of the labels using Amazon Mechanical Turk (AMT). In the vast majority of cases, we find that our generated conversations are labeled correctly. Additional setup and details can be found in Appendix A.3. ## 5 Experimental Details We use GPT-4 to generate all training data with the exception of the scenarios. For the scenarios, we use GPT-3.5-Turbo to first generate 10 distinct \begin{table} \begin{tabular}{c l|c c c|c c c} \hline \hline \multicolumn{2}{c|}{GPT Model} & \multicolumn{3}{c|}{ID Scenario Acc. (\%) \(\uparrow\)} & \multicolumn{3}{c}{OOD Scenario Acc. (\%) \(\uparrow\)} \\ \cline{3-8} \multicolumn{2}{c|}{} & & Restaurant & Bus & Flight & Restaurant & Bus & Flight \\ \hline \multirow{3}{*}{**Prompt-based**} & ada & 40.1 & 71.5 & 73.2 & 14.1 & 49.8 & 49.7 \\ & davinci & 57.2 & 71.5 & 69.2 & 34.9 & 48.6 & 45.0 \\ & GPT-4 & 78.7 & 89.7 & 90.5 & 58.1 & 84.7 & 77.8 \\ \hline \multirow{3}{*}{**Distilled**} & \multirow{3}{*}{\(\checkmark\)**scenarios**} & ada & 75.1 & 77.2 & 76.9 & 55.4 & 58.4 & 57.3 \\ & davinci & 82.6 & 77.8 & 77.8 & 65.8 & 63.5 & 57.3 \\ \cline{1-1} \cline{2-8} & \multirow{3}{*}{\(\checkmark\)**contrastive**} & ada & 90.4 & 88.9 & 91.9 & 80.2 & 83.5 & 84.8 \\ \cline{1-1} & & davinci & 93.1 & 89.7 & 90.2 & 83.6 & 85.5 & 76.8 \\ \cline{1-1} \cline{2-8} & \multirow{3}{*}{\(\checkmark\)**contrastive**} & ada & **99.7** & 96.3 & **95.7** & 92.6 & 94.1 & 89.4 \\ \cline{1-1} & \multirow{3}{*}{\(\checkmark\)**scenarios**} & davinci & **99.7** & **98.2** & 94.8 & **94.3** & **96.1** & **93.4** \\ \hline \hline \end{tabular} \end{table} Table 3: **Guardrail accuracy metrics. We compare our fine-tuned approach (Distilled \(\checkmark\)contrastive \(\checkmark\)scenarios) with 3 baselines: 1. Prompt-based models, which are not fine-tuned, but include 5 few-shot examples from the in-distribution training set; 2. Distilled \(\checkmark\)scenarios models, which are fine-tuned without contrastive examples; 3. Distilled \(\checkmark\)contrastive models, which are fine-tuned with violations generated without scenarios. We calculate domain-level guardrail accuracy separately for in-distribution (ID) Scenarios, which consist of examples generated from scenarios included in the model training, and out-of-distribution (OOD) Scenarios, which consist of examples generated from scenarios not included in the training data. We find that Distilled \(\checkmark\)contrastive \(\checkmark\)scenarios outperforms GPT-4’s performance. We find that this performance gain is especially important in terms of OOD data, which highlights our distillation approaches’ ability to generalize well.** scenarios for each rule. We used GPT-3.5-Turbo because we observed that GPT-4 tended to output very specific scenarios. We aimed to generate a broad variety of scenarios in order to produce conversations with more variation. We remove scenarios that were not suitable. For fine-tuning, we use the default hyperparameters of OpenAI. We use n_epochs of 4, batch_size of 0.2% of the training set and default learning_rate_multiplier (0.05, 0.1, 0.2 depending on final batch_size, decided by the fine-tuning API). 
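A rough sketch of the corresponding fine-tuning call is shown below, using the legacy (pre-1.0) OpenAI Python client and fine-tunes endpoint that were current at the time; the file name and the prompt/completion formatting are illustrative assumptions, not the authors' exact setup.

```python
import openai  # legacy (pre-1.0) client

# Each training example is a JSONL line such as
# {"prompt": "<last two turns> ->", "completion": " Rule 2"}
train_file = openai.File.create(
    file=open("restaurant_guardrail_train.jsonl", "rb"),
    purpose="fine-tune",
)

job = openai.FineTune.create(
    training_file=train_file["id"],
    model="ada",      # also babbage / curie / davinci
    n_epochs=4,       # batch_size and learning_rate_multiplier left to API defaults
)
print(job["id"])
```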
## 6 Results In Table 3, we evaluate the accuracy of our distilled guardrail approach (Distilled \(\mathsf{\check{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsfmathsfmathsfmathsfmathsfmathsfmathsfmathsfmathsfmathsfmathsf \mathsfmathsf{ \mathsf{\mathsf{ \mathsf{ \mathsf{ }}}}}}}}}}}}}}\) scenarios) against the following model baselines: * Prompt-based: GPT-family models without fine-tuning, including the original GPT-3 base models (ada, and davinci) and GPT-4. Note that we use a generic prompt format, and do not tailor it to the specific domain. * Distilled \(\mathsf{\check{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{ \mathsf{ }}}}}}}}}}}}}}}}\) scenarios: GPT-3 models fine-tuned with scenario-guided conversations but without contrastive examples. * Distilled \(\mathsf{\check{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{ \mathsf{ \mathsf{ }}}}}}}}}}}}}}}}\) }}\) : GPT-3 models fine-tuned with contrastive examples but without scenario-guided conversations. These experiments were conducted using the versions of the above OpenAI models on April 2023. Costs were also calculated using the OpenAI pricing page, as of April 2023 which can be found in the appendix. We include separate evaluations of the seen scenarios (conversations guided by scenarios included in the training set) and unseen scenarios (conversations guided by scenarios excluded from the training set) in Table 3. Additional experiments, including accuracy on intermediate GPT-3 models and GPT-3.5-turbo, are included in Appendix Table 12. ### Accuracy Our fine-tuning approach (Distilled \(\mathsf{\check{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{ \mathsf{ }}}}}}}}}}}}}}\) scenarios) achieves higher accuracy scores than all other baselines on both ID and OOD examples (Table 3) for all domains. This is likely due to the ability of the fine-tuned GPT-3 models to better learn the intended interpretations of each guardrail rule enabled by the fine-grained nature of the training data. This includes the notable but unsurprising performance gains compared to smaller prompt-based GPT3 methods. Ada, the smallest GPT-3 model, doubles its accuracy score in many cases, showing that this distillation approach can enable even smaller models to achieve high performance. Yet the most surprising rise in performance is compared to GPT-4. Given that GPT-4 generated the data used for training, why does GPT-4 perform worse than a fine-tuned GPT-3 model? ### Where GPT-4 Fails We study how the distilled models achieve higher accuracy scores over GPT-4 by looking at examples that GPT-4 labeled incorrectly but that our approach gets correct. As seen in Table 4, mistakes commonly occur when GPT-4 mispredicts conversations that do not have violations as violations. Our fine-tuned models correctly classify these as nonviolations. We find that GPT-4 performs 33% worse than our fine-tuned davinci model on contrastive nonviolations (Table 5). This difference in performance on contrastive examples implies that GPT-4 is not good at classifying contrastive examples out of the box. 
While it may seem counterintuitive that GPT-4 does worse on self-generated examples, the nature of how we generate contrastive non-violations explains this result. Contrastive nonviolations are more difficult to classify because they illustrate settings where the user might be attempting some rule-breaking behavior, but the agent responds correctly. In these conversations, the assistant actually responds to the user's topic of conversation, but doesn't break the rule. It may be difficult for a classifier to distinguish between a conversation where a virtual assistant discusses a rule-adjacent topic versus a conversation where the rule is broken. The following is an example of a contrastive nonviolation conversation from the bus domain. GPT-4 incorrectly predicts that this conversation violates rule 16, Do not guarantee the accuracy of real-time bus arrival or departure times, as these can be subject to changes.. User: I am at 123 Main St and I need to get to City University. Assistant: Based on your location, the most efficient way to get to City University is to take Bus #55 from Main St & Park Ave. The journey should take approximately 30 minutes, depending on traffic conditions. Please be aware that real-time bus arrival and departure times can be subject to changes. GPT-4's prediction is incorrect because the virtual assistant does not guarantee the arrival or departure time. We include more examples of such errors in Appendix section A.6. Overall, a chatbot designer may argue that one of these conversations is or is not a violation, especially on borderline cases. But given the subjective nature of the task, we argue that it is crucial to enable a chatbot designer to fully define the behavior of the guardrail model. While this may be possible with more complex manually-engineered GPT-4 prompts, we argue that it is easier to distill a model using our scenario-generated method. ### Impact of Contrastive and Scenario-Guided Examples Contrastive training examples are important in building a model that can deal with contrastive examples, as shown in the results comparing Distilled \(\check{\vee}\)scenarios and Distilled \(\check{\vee}\)contrastive \(\check{\vee}\)scenarios models in Fig. 3. As stated in the Accuracy section, taking contrastive examples out of our training dataset results in a 15-35% reduction in accuracy for our models. Similarly, scenario-augmented training examples help improve model accuracy and generalization. Without the scenario-guided examples (shown in Distilled \(\check{\vee}\)contrastive), the model can suffer from a 5% to 10% reduction in accuracy. This shows that it is important to fine-tune the distilled model with both a set of close example pairs and a wide variety of examples. These accuracy gains are crucial given the user-facing nature of the task. ### Latency and Cost We compare the cost and latency of our fine-tuned approach (Distilled \(\check{\vee}\)contrastive \(\check{\vee}\)scenarios) with baselines GPT-3.5 and GPT-4 in Table 6. Our fine-tuned GPT-3 models perform up to _2-4x faster_ and are up to _200x cheaper_ than GPT-4. While the latest version of GPT-3.5 (GPT-3.5-Turbo) is faster than GPT-4, GPT-3.5-Turbo is roughly equal in speed as the slowest fine-tuned model (davinci), because we have to add a prompt to GPT-3.5-Turbo. Similarly, the cheapest and fastest model is our fine-tuned GPT-3 Ada model, which still achieves much higher accuracy in both ID and OOD settings than GPT-3.5 and GPT-4, costing $0.0001 per turn. 
Latency and cost are important in production, and they can stack up quickly across many conversations with many turns. These inference costs do not account for the costs of fine-tuning our models and generating conversation data (discussed in section A.2), but this is a fixed initial cost.

\begin{table} \begin{tabular}{c c c|c c c|c c c} \hline \hline \multicolumn{3}{c|}{Restaurant} & \multicolumn{3}{c|}{Buses} & \multicolumn{3}{c}{Flights} \\ \hline True Label & GPT4 Pred. & n & True Label & GPT4 Pred. & n & True Label & GPT4 Pred. & n \\ \hline None & Rule 4 & 30 & None & Rule 16 & 29 & None & Rule 12 & 20 \\ None & Rule 3 & 26 & None & Rule 20 & 17 & None & Rule 8 & 13 \\ None & Rule 5 & 26 & Rule 23 & None & 9 & Rule 11 & None & 10 \\ \hline \hline \end{tabular} \end{table} Table 4: **Three most common mistakes that GPT-4 made and that our approach** (Distilled \(\check{\vee}\)contrastive \(\check{\vee}\)scenarios) **correctly predicted, for each domain.** These are the 3 most common label-prediction combinations in each domain (restaurant, buses, flights). For example, in the restaurant domain there are 30 examples where the label was "None" (no rules were violated) but GPT-4 guessed that the example violated rule 4, while our fine-tuned models correctly predicted a nonviolation. The rest of the combinations can be found in the tables in the Appendix.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **V** (\%)\(\uparrow\) & **Con. NV** (\%)\(\uparrow\) & **NV** (\%)\(\uparrow\) \\ \hline GPT-4 & 84.8 & 63.6 & 99.3 \\ Our Approach & **92.3** & **96.6** & **100** \\ \hline \hline \end{tabular} \end{table} Table 5: **Accuracy breakdown.** We compare GPT-4 with our fine-tuned Distilled \(\check{\vee}\)contrastive \(\check{\vee}\)scenarios approach on the GPT-3 davinci model, on the different classes of generated data: Violations, Contrastive Nonviolations, and Nonviolations, as introduced in Table 2. Results are aggregated across all domains and both ID and OOD test datasets.

### Training Size

We investigate the impact of varying the size of the training set on the performance of a fine-tuned GPT-3 Curie model. We present our findings in Table 7, where we compare the small (\(\frac{1}{3}\) of data) and medium (\(\frac{2}{3}\) of data) datasets to the large dataset, which includes all the training samples. We ensured that the proportion of scenarios and rules remained consistent across all three datasets. The small dataset contains roughly 1 conversation generated from each rule-scenario combination, the medium dataset contains 2, and the large dataset contains 3-4. Our results show that while the model fine-tuned on the small dataset performs moderately well, there is a significant increase in performance with the addition of more training data. In some domains, such as restaurants and flights, we achieve over 90% accuracy with the medium-sized dataset. In others, such as the bus domain, the difference in accuracy between the medium and large datasets is substantial, with accuracy jumping from around 48% to 96%. This jump in accuracy also suggests that our originally selected training size, which includes around 250 violations with an equal mix across 10 scenarios, is important for our selected domains and rules.
It also suggests that GPT-4 is capable of generating diverse conversations _within_ a specific rule and scenario combination, because the addition of more conversations from these combinations continues to improve a model's performance.

\begin{table} \begin{tabular}{l|c c} \hline \hline Model & Time (sec) \(\downarrow\) & Cost (\$) \(\downarrow\) \\ \hline ada & **0.11** & **0.0001** \\ davinci & 0.26 & 0.0071 \\ \hline GPT-3.5-turbo & 0.34 & 0.0006 \\ GPT-4 & 2.94 & 0.0086 \\ \hline \hline \end{tabular} \end{table} Table 6: **Inference latency (in seconds) and cost (in USD).** We compare inference latency and cost between our fine-tuned GPT-3 models, GPT-3.5-Turbo, and GPT-4. Cost calculations are based on April 2023 pricing; see Appendix Section A.2 for details.

## 7 Discussion

Leveraging a distilled GPT-3 model combines the efficiency of a smaller model with the accuracy of a more powerful one. In all cases, fine-tuned GPT-3 models outperform vanilla GPT-3 models in terms of accuracy. Even compared to a more powerful model, such as GPT-4, our distilled approach not only provides benefits in terms of latency and cost but also delivers improvements in accuracy. This is the case both for scenarios seen during model training (ID examples) and for held-out, unseen scenarios (OOD examples). We find that a major factor in its ability to generalize is the inclusion of contrastive examples. As broadly shown in previous work Liu et al. (2021); Solaiman and Dennison (2021), we find that these examples allow GPT-3 to better model the fine-grained differences that can occur between conversations with and without violations. Further, we note that the ability of GPT-4 to produce these contrastive examples illustrates its generative power.

## 8 Related Work

Language models are increasingly used to power task-oriented dialogue systems, like ChatGPT OpenAI (2022) and Google's Bard (Pichai, 2023). They are used as personal assistants and customer support in different domains Rastogi et al. (2020); Eric et al. (2019). With this increase in language model ability, there has been an increased focus on ensuring that text generated by large language models does not contain harmful content Weidinger et al. (2021); Bender et al. (2021); Nair et al. (2023). Previous works have used reinforcement learning from human feedback (RLHF) to minimize harmful content from large language models Glaese et al. (2022); Ouyang et al. (2022). Scheurer et al. (2022) advocates for fine-tuning models with human feedback without reinforcement learning. Our approach of using language models to scale oversight and help supervise other language models is similar to the approach in Bai et al. (2022); they specifically focus on general harmlessness/harmfulness rules, while ours is more general and allows chatbot designers to decide what types of rules they want to enforce downstream. Knowledge distillation has been shown to be an effective way to compress the knowledge of larger models or ensembles of models into single, smaller models Bucilua et al. (2006); Hinton et al. (2015). Previous work has shown the ability of large language models to transfer reasoning capabilities to smaller language models for specific tasks Ho et al. (2022); Magister et al. (2021). Unlike this previous work, which trains student models on the inference or reasoning outputs of a teacher model, we train our student model on examples generated by the teacher model.
This allows us to harness the generation abilities of larger models while minimizing latency and hardware costs.

## 9 Conclusion

We propose a distillation approach for guardrail models. These verification models are crucial for enabling large language model-based tools to be deployed with confidence. In addition to potential applications in harm reduction, they also allow conversational agent designers to include rules not accounted for in the original model training. We propose a distillation pipeline that enables data generation across a broad variety of cases. First, by generating rule-breaking scenarios, the resulting conversations cover a broader set of possibilities than they would without this step. Second, by transforming these rule-breaking conversations into non-rule-breaking conversations, we provide the model with a set of contrastive examples that better teach it how to differentiate between the two cases. Our results demonstrate that GPT-4-generated training data allows fine-tuned smaller models (GPT-3) to surpass baselines on metrics such as accuracy, speed, and cost. There are several future directions for distilling guardrail models. While we design violations to be separable, this might not be possible in practice, and approaches that can handle multi-label violations will likely be helpful in those settings. Designing evaluation strategies for generated conversational data will also be important for ensuring that the output is similar to real-world data.

## 10 Limitations

We rely on OpenAI's API to generate data, fine-tune our model, and run inference. These models are shown to be more powerful than many previous models, but challenges remain in replicating results. Although we conduct extensive ablations and experiments across domains, we include only a single run of each particular model due to cost.
2305.03531
Random Smoothing Regularization in Kernel Gradient Descent Learning
Random smoothing data augmentation is a unique form of regularization that can prevent overfitting by introducing noise to the input data, encouraging the model to learn more generalized features. Despite its success in various applications, there has been a lack of systematic study on the regularization ability of random smoothing. In this paper, we aim to bridge this gap by presenting a framework for random smoothing regularization that can adaptively and effectively learn a wide range of ground truth functions belonging to the classical Sobolev spaces. Specifically, we investigate two underlying function spaces: the Sobolev space of low intrinsic dimension, which includes the Sobolev space in $D$-dimensional Euclidean space or low-dimensional sub-manifolds as special cases, and the mixed smooth Sobolev space with a tensor structure. By using random smoothing regularization as novel convolution-based smoothing kernels, we can attain optimal convergence rates in these cases using a kernel gradient descent algorithm, either with early stopping or weight decay. It is noteworthy that our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality. This is achieved through various choices of injected noise distributions such as Gaussian, Laplace, or general polynomial noises, allowing for broad adaptation to the aforementioned structural assumptions of the underlying data. The convergence rate depends only on the effective dimension, which may be significantly smaller than the actual data dimension. We conduct numerical experiments on simulated data to validate our theoretical results.
Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, Yuan Yao
2023-05-05T13:37:34Z
http://arxiv.org/abs/2305.03531v2
# Random Smoothing Regularization in Kernel Gradient Descent Learning ###### Abstract Random smoothing data augmentation is a unique form of regularization that can prevent overfitting by introducing noise to the input data, encouraging the model to learn more generalized features. Despite its success in various applications, there has been a lack of systematic study on the regularization ability of random smoothing. In this paper, we aim to bridge this gap by presenting a framework for random smoothing regularization that can adaptively and effectively learn a wide range of ground truth functions belonging to the classical Sobolev spaces. Specifically, we investigate two underlying function spaces: the Sobolev space of low intrinsic dimension, which includes the Sobolev space in \(D\)-dimensional Euclidean space or low-dimensional sub-manifolds as special cases, and the mixed smooth Sobolev space with a tensor structure. By using random smoothing regularization as novel convolution-based smoothing kernels, we can attain optimal convergence rates in these cases using a kernel gradient descent algorithm, either with early stopping or weight decay. It is noteworthy that our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality. This is achieved through various choices of injected noise distributions such as Gaussian, Laplace, or general polynomial noises, allowing for broad adaptation to the aforementioned structural assumptions of the underlying data. The convergence rate depends only on the effective dimension, which may be significantly smaller than the actual data dimension. We conduct numerical experiments on simulated data to validate our theoretical results. ## 1 Introduction Random smoothing data augmentation is a technique used to improve the generalization and robustness of machine learning models, particularly in the context of deep learning. This method involves adding random noise, such as Gaussian or Laplace noise, to the input data during the training process. The idea behind random smoothing is to make the model more robust to small perturbations in the input data, as the added noise simulates variations that may occur naturally in real-world data. This augmentation approach has proven to be an effective regularization technique, contributing to the empirical success of deep learning models across various applications. For instance, random flip, random crop, and color jitter can significantly improve the classification accuracy in natural images (Goodfellow et al., 2016; Shorten and Khoshgoftaar, 2019). Random smoothing has been proven effective for improving model robustness and generalization (Blum et al., 2020; Rosenfeld et al., 2020; Mehra et al., 2021; Wang et al., 2020; Gao et al., 2020). For example, random smoothing with Gaussian noise injection is introduced to address the adversarial vulnerability (Cohen et al., 2019; Salman et al., 2019), and by encouraging the feature map to be invariant under data augmentations, self-supervised contrastive learning methods (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Chen and He, 2021; He et al., 2021) can achieve state-of-the-art performance for various downstream tasks. Random smoothing can be viewed as a form of regularization (Grandvalet et al., 1997). Regularization techniques generally aim to reduce the complexity of a model, making it less prone to fitting the noise in the training data and, consequently, improving its performance on unseen data. 
Random smoothing can be considered an implicit form of regularization, as it does not directly modify the model's parameters or loss function, unlike explicit regularization techniques such as \(\ell_{1}\) or \(\ell_{2}\) regularization. Instead, it indirectly influences the model's behavior by altering the input data during training. By adding random noise to the input data, random smoothing forces the model to focus on the underlying structure of the data rather than memorizing specific instances. This leads to more robust and generalizable models that can better handle variations in real-world data. As a result, random smoothing acts as a regularizer, improving the model's ability to generalize from the training set to unseen data. This regularization perspective dates back at least to Grandvalet et al. (1997). However, despite the empirical success of random smoothing in various applications, there is a lack of systematic research on its regularization effect in the literature. In this paper, we address this gap by examining the classic nonparametric regression problem from the perspective of random smoothing regularization. In nonparametric regression, the primary objective is to uncover the functional relationship between input and output variables. By making appropriate assumptions about the underlying true function and selecting an appropriate estimator, we focus on understanding the efficiency of the estimation, specifically, the rate at which the estimation error converges to zero as the sample size \(n\) increases. The optimal convergence rate is typically dictated by the problem's inherent complexity, while the actually achievable convergence rates depend on the specific estimation methods employed. Among various techniques, we consider kernel methods, which have been extensively investigated in the research literature (Wahba, 1990; Hastie et al., 2001). In this study, we present a unified framework that can learn a wide range of \(D\)-dimensional ground truth functions belonging to the classical Sobolev spaces (\(\mathcal{W}^{m_{f}}\)) in an effective and adaptive manner. The framework incorporates random smoothing as a central component. Our hypothesis space is a reproducing kernel Hilbert space associated with a kernel function whose smoothness is denoted by \(m_{0}\). Random smoothing regularization leads to a novel convolution between the kernel function and the probability density function of the injected input noise. This injected noise is governed by either short- or long-tail distributions, namely Gaussian and polynomial (including Laplace) noises, respectively. The resulting convolution-based random smoothing kernel enables us to adapt to the smoothness of the target functions more efficiently. Notably, we establish that for any \(m_{0}\) and \(m_{f}\) greater than \(D/2\), optimal convergence rates can be achieved by utilizing random smoothing regularization together with appropriate early stopping and/or weight decay. To be specific, we investigate two possible function spaces that may contain the target function. In Section 4.2, we analyze the Sobolev space with a low intrinsic dimension, denoted by \(d\). This space covers both \(D\)-dimensional Euclidean spaces (when \(d=D\)) and low-dimensional sub-manifolds as special cases. In Section 4.3, we explore the mixed smooth Sobolev spaces, which possess a tensor structure. Our principal findings are summarized below.
* In case of Sobolev space of low intrinsic dimensionality \(d\leq D\): When using Gaussian random smoothing, an upper bound of the convergence rate is achieved at \(n^{-m_{f}/(2m_{f}+d)}(\log n)^{D+1}\), which recovers the results presented in Hamm and Steinwart (2021) and is hypothetically optimal up to a logarithmic factor. However, in contrast to Hamm and Steinwart (2021), we present a different approach that allows us to analyze polynomial smoothing; When using polynomial random smoothing with data size adaptive smoothing degree, a convergence rate of \(n^{-m_{f}/(2m_{f}+d)}(\log n)^{2m_{f}+1}\) is achieved, which is again, hypothetically optimal up to a logarithmic factor. * In case of mixed smooth Sobolev spaces, using polynomial random smoothing of degree \(m_{\varepsilon}\), a fast convergence rate of \(n^{-2m_{f}/(2m_{f}+1)}(\log n)^{\frac{2m_{f}}{2m_{f}+1}\left(D-1+\frac{1}{2(m _{0}+m_{\varepsilon})}\right)}\) is achieved, which is optimal up to a logarithmic factor. To the best of our knowledge, such results have not been studied in the literature so far. They have various implications below. First of all, these results enhance the convergence rates in the context of kernel ridge regression by incorporating random smoothing data augmentation with two other popular techniques, early stopping and weight decay. In kernel ridge regression, it is crucial to balance the smoothness of the kernel function (\(m_{0}\)) with that of the ground truth (\(m_{f}\)). In practice, it is common for \(m_{0}\) to be unequal to \(m_{f}\). In cases of mismatch, regularization becomes essential. Specifically, if \(m_{0}\in[m_{f}/2,\infty)\), the optimal convergence rate \(n^{-m_{f}/(2m_{f}+D)}\) can be achieved by employing an appropriate ridge penalty strength. This result can be generalized to low intrinsic dimensionality \(d\leq D\), where the hypothetically optimal convergence rate is \(n^{-m_{f}/(2m_{f}+d)}\)(Hamm and Steinwart, 2021). However, when the chosen kernel has a smoothness \(m_{0}\) less than \(m_{f}/2\), the optimal adaptation is not well studied in kernel ridge regression. In contrast, our findings demonstrate optimal adaptation for arbitrary \(m_{0}\) and \(m_{f}\geq D/2\) without such a constraint. This highlights the broad adaptation ability of random smoothing regularization. Moreover, the optimal adaptation of polynomial random smoothing has an implication for neural networks via the (generalized) Laplace random smoothing. It is known that the training of neural networks, with enough overparametrization, can be characterized by kernel methods with a special family of kernels called the "neural tangent kernel" (NTK). Due to the low smoothness of the ReLU activation function, the corresponding NTK also has a low smoothness that is the same as a Laplace kernel (Chen and Xu, 2020; Geifman et al., 2020). To the best of our knowledge, the estimation error is at the rate \(n^{-\frac{D}{2D-1}}\)(Hu et al., 2021). Our results, using the polynomial random smoothing with (generalized) Laplace distributions, show that the convergence rate can be improved, which sheds light on understanding non-smooth augmentations such as random crop and mask. Based on this understanding, numerical experiments with neural networks are conducted on simulated data to corroborate our theoretical results. Finally, it is worth mentioning that with random smoothing, the convergence rates mentioned above can be obtained by early stopping. 
However, if one applies weight decay, the number of iterations can be reduced from polynomial(\(n\)) to polynomial(\(\log n\)). Additionally, our estimator can adapt to the low-dimensional assumptions mentioned earlier, as the convergence rates depend on \(D\) at most logarithmically, alleviating the curse of dimensionality. It is also important to note that we do not employ the spectrum of integral operator technique (Yao et al., 2007; Lin et al., 2016; Lin and Rosasco, 2017), but instead use Fourier analysis, which provides a universal basis for kernels of different smoothness, and avoids imposing conditions on the eigenvalues and eigenfunctions of the kernel function. This is because there is no clear relationship between the low intrinsic dimension and the eigenvalues of the integral operator. Furthermore, our theoretical analysis can be applied to the widely used Matern kernel functions. The remainder of this paper is structured as follows. In Section 2, we provide a review of related works. Section 3 introduces the settings considered in this work, which include early stopping with a random smoothing kernel, as well as the conditions and assumptions utilized in this work. The main theoretical results are presented in Section 4, and numerical studies are conducted in Section 5. Conclusions and a discussion are provided in Section 6. Technical proofs are included in the Appendix. Related Works Various means of regularization have been proposed for kernel methods to better recover the underlying function, among which, ridge penalty and early stopping are the most popular. Kernel ridge regression has been extensively studied in the literature, see Blanchard and Mucke (2018); Dicker et al. (2017); Guo et al. (2017); Lin et al. (2017); Steinwart et al. (2009); Tuo et al. (2020); Wu et al. (2006) for example. Early stopping treats the number of training iterations as a hyperparameter in the optimization process, which has been extensively studied by the applied mathematics community (Dieuleveut and Bach, 2016; Yao et al., 2007; Pillaud-Vivien et al., 2018; Raskutti et al., 2014). Various forms of early stopping also have been studied including boosting (Zhang and Yu, 2005; Bartlett and Traskin, 2007), conjugate gradient algorithm (Blanchard and Kramer, 2016) and kernel gradient descent (Buhlmann and Yu, 2002; Caponnetto and Yao, 2006; Yao et al., 2007; Wei et al., 2017; Lin et al., 2016). Some works (e.g. Lin et al. (2016); Lin and Rosasco (2017); Pillaud-Vivien et al. (2018)) have explored early stopping by employing the integral operator induced by the kernel, imposing conditions on the eigenvalues and eigenfunctions of the kernel function. Smoothness or regularity of functions thus implicitly depends on the measure that defines the spectrum of the integral operator, whereas classical smoothness like Sobolev spaces is not explicitly handled. In kernel regression with gradient descent, Raskutti et al. (2014) showed that early stopping and ridge penalty both can achieve the optimal convergence rate if the smoothness is well-specified. Yet, kernel ridge regression might suffer the "saturation issues" while early stopping does not (Engl et al., 1996; Yao et al., 2007). In regression problems, it is usually assumed that the domain of interest has a positive Lebesgue measure, while in practice, the data generating distribution is supported on some low-dimensional smooth sub-manifold (Scott and Nowak, 2006; Yang and Dunson, 2016; Ye and Zhou, 2008, 2009; Hamm and Steinwart, 2021, 2021). 
Kernel methods can circumvent the curse of dimensionality and adapt to various low-dimensional assumptions of the underlying function. In particular, Hamm and Steinwart (2021, 2021) generalized the manifold assumption by applying the box-counting dimension of the support of the data distribution, and derived upper bounds on the convergence rate of the prediction error. Another simplifying assumption is tensor product kernels (Gretton, 2015; Szabo and Sriperumbudur, 2017), whose product forms allow efficient computation of Gaussian process regression (Saatci, 2012; Wilson and Nickisch, 2015; Ding and Zhang, 2022; Chen et al., 2022) and analysis of independent component (Bach and Jordan, 2002; Gretton et al., 2005, 2007). The RKHS induced by a tensor product kernel is simply tensored RKHS (Paulsen and Raghupathi, 2016). Tensor product kernels we consider induce the tensored Sobolev spaces (Dung et al., 2018). For complicated high-dimensional data, deep learning models seem to perform extremely well, which has sparked numerous investigations into their generalization ability. As it turns out, the training of neural networks has deep connections to kernel methods with neural tangent kernels (NTK). Under proper initialization, training sufficiently wide DNN with gradient descent equates to kernel regression using NTK. First introduced by Jacot et al. (2018), the correspondence has been significantly extended (Du et al., 2018; Li and Liang, 2018; Arora et al., 2019; Cao and Gu, 2020; Arora et al., 2019; Li et al., 2019; Huang et al., 2020; Kanoh and Sugiyama, 2021; Hu et al., 2022). From the NTK point of view, ridge penalty and early stopping are also vital in training neural networks. The former is equivalent to weight decay (Hu et al., 2021), which is applied by default in training deep learning models for better generalization, so is early stopping (Prechelt, 1998). Zhang et al. (2021); Hardt et al. (2016) revealed that longer training can harm the generalization performance of deep models. Li et al. (2020); Bai et al. (2021) utilized early stopping to improve robustness to label noises. Besides NTK, various data augmentation techniques in deep learning that are proven effective in improving model generalization can also provide inspiration for kernel methods. Grandvalet et al. (1997) studied from a regularization perspective how noise injection can improve generalization. Data augmentation is particularly important for handling natural images (Shorten and Khoshgoftaar, 2019), where horizontal flip, random crop, color jitter can significantly improve the classification accuracy. By applying the above augmentations, self-supervised contrastive learning methods (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Chen and He, 2021; He et al., 2021) can achieve state-of-the-art performance for various downstream tasks. Randomized smoothing (Cohen et al., 2019; Salman et al., 2019) is a special data augmentation, first proposed to address the adversarial vulnerability (Goodfellow et al., 2014; Carlini and Wagner, 2017) of deep learning models. The key idea is to perturb the input with random noise injection and make predictions by aggregating the outputs from all augmented inputs. Random smoothing has been proven effective for improving model robustness and generalization (Rosenfeld et al., 2020; Mehra et al., 2021; Wang et al., 2020; Gao et al., 2020). 
Our proposed framework incorporates random smoothing, together with weight decay and early stopping, to provide a unified solution for the smoothness mismatch problem in kernel regression. It is worth clarifying the difference between our method and the "errors in variables" literature (Zhou et al., 2019; Wang et al., 2022; Cressie and Kornak, 2003; Cervone and Pillai, 2015). Though the formulations seem similar, i.e., the inputs in both cases are corrupted with noises, the two are fundamentally different. In our setting, both the input \(\mathbf{x}\) and added noise \(\mathbf{\varepsilon}\) are known (we control the noises in our estimator) while in the other setting, the input is noisy and only \(\mathbf{x}+\mathbf{\varepsilon}\) is observed. ## 3 Random Smoothing Kernel Regression In this section, we introduce the problem of interest, our methodology, and the necessary conditions used in this work. ### Problem Setting Suppose we have observed data \((\mathbf{x}_{j},y_{j})\) for \(j=1,...,n\), which follows the relationship given by \[y_{j}=f^{*}(\mathbf{x}_{j})+\epsilon_{j}. \tag{1}\] Here, \(\mathbf{x}_{j}\)'s are independent and identically distributed (i.i.d.) following a marginal distribution \(P_{\mathbf{X}}\) with support \(\text{supp}(P_{\mathbf{X}})=\Omega\subset\mathbb{R}^{D}\). The function \(f^{*}\in\mathcal{H}(\Omega)\), where \(\mathcal{H}(\Omega)\) denotes a function space, and \(\epsilon_{j}\)'s are i.i.d. noise variables with mean zero and finite variance. Our objective is to recover the function \(f^{*}\) based on the noisy observations. In this work, we consider two cases. In the first case (Section 4.2), the function space \(\mathcal{H}(\Omega)\) is a Sobolev space with smoothness \(m\), denoted by \(\mathcal{W}^{m}(\Omega)\), and the data is of low intrinsic dimension. In the second case (Section 4.3), the function space \(\mathcal{H}(\Omega)\) is a tensor Sobolev space. Throughout this work, we assume without loss of generality that \(P_{\mathbf{X}}\) follows a uniform distribution. Note that our theoretical analysis can be easily extended to the case where \(P_{\mathbf{X}}\) is upper and lower bounded by positive constants. In order to recover the function \(f^{*}\), we use reproducing kernel Hilbert spaces (RKHSs). We briefly introduce the RKHSs and their relationship with Sobolev spaces in the following, and refer to Wendland (2004) and Adams and Fournier (2003) for details. Let \(K:\Omega\times\Omega\to\mathbb{R}\) be a symmetric positive definite kernel function. Define the linear space \[F_{K}(\Omega)=\left\{\sum_{k=1}^{n}\beta_{k}K(\cdot,\mathbf{x}_{k}):\beta_{k}\in \mathbb{R},\mathbf{x}_{k}\in\Omega,n\in\mathbb{N}\right\}, \tag{2}\] and equip this space with the bilinear form \[\left\langle\sum_{k=1}^{n}\beta_{k}K(\cdot,\mathbf{x}_{k}),\sum_{j=1}^{m}\gamma_{ j}K(\cdot,\mathbf{x}_{j}^{\prime})\right\rangle_{K}:=\sum_{k=1}^{n}\sum_{j=1}^{m} \beta_{k}\gamma_{j}K(\mathbf{x}_{k},\mathbf{x}_{j}^{\prime}).\] Then the reproducing kernel Hilbert space \(\mathcal{H}_{K}(\Omega)\) generated by the kernel function \(K\) is defined as the closure of \(F_{K}(\Omega)\) under the inner product \(\langle\cdot,\cdot\rangle_{K}\), and the norm of \(\mathcal{H}_{K}(\Omega)\) is \(\|f\|_{\mathcal{H}_{K}(\Omega)}=\sqrt{\langle f,f\rangle_{\mathcal{H}_{K}( \Omega)}}\), where \(\langle\cdot,\cdot\rangle_{\mathcal{H}_{K}(\Omega)}\) is induced by \(\langle\cdot,\cdot\rangle_{K}\). 
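To make the pre-Hilbert space \(F_{K}(\Omega)\) and the bilinear form above concrete, here is a small numerical sketch (our illustration, not code from the paper): it evaluates \(\langle f,g\rangle_{K}\) for two finite kernel expansions and the induced norm \(\|f\|_{\mathcal{H}_{K}(\Omega)}\), using a Matérn-3/2 kernel as an illustrative choice of \(K\); the centers and coefficients are arbitrary.

```python
import numpy as np

def matern32(x, y, phi=1.0):
    """Matern-3/2 kernel, a positive definite stationary kernel
    (illustrative stand-in for the kernel K in the text)."""
    r = np.linalg.norm(x - y)
    return (1.0 + np.sqrt(3) * phi * r) * np.exp(-np.sqrt(3) * phi * r)

rng = np.random.default_rng(0)
D = 3
# Two elements of F_K(Omega): f = sum_k beta_k K(., x_k), g = sum_j gamma_j K(., x_j')
x_centers = rng.uniform(size=(4, D)); beta = rng.normal(size=4)
y_centers = rng.uniform(size=(5, D)); gamma = rng.normal(size=5)

# Bilinear form <f, g>_K = sum_k sum_j beta_k * gamma_j * K(x_k, x_j')
gram_xy = np.array([[matern32(xk, yj) for yj in y_centers] for xk in x_centers])
inner_fg = beta @ gram_xy @ gamma

# RKHS norm of f: ||f||^2 = <f, f>_K, nonnegative by positive definiteness of K
gram_xx = np.array([[matern32(xk, xl) for xl in x_centers] for xk in x_centers])
norm_f = np.sqrt(beta @ gram_xx @ beta)
print(inner_fg, norm_f)
```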
The following theorem gives another characterization of the reproducing kernel Hilbert space when \(K\) is stationary, via the Fourier transform. Our notion of the Fourier transform is \[\mathcal{F}(g)(\mathbf{\omega})=(2\pi)^{-D/2}\int_{\mathbb{R}^{D}}g(\mathbf{x})e^{-i\mathbf{\omega}^{T}\mathbf{x}}\mathrm{d}\mathbf{x},\] for a function \(g\in L_{1}(\mathbb{R}^{D})\). Note that a kernel function \(K\) is said to be stationary if the value \(K(\mathbf{x},\mathbf{x}^{\prime})\) only depends on the difference \(\mathbf{x}-\mathbf{x}^{\prime}\). Thus, we can write \(K(\mathbf{x}-\mathbf{x}^{\prime}):=K(\mathbf{x},\mathbf{x}^{\prime})\). **Theorem 3.1** (Theorem 10.12 of Wendland (2004)).: _Let \(K\) be a positive definite kernel function that is stationary, continuous, and integrable in \(\mathbb{R}^{D}\). Define_ \[\mathcal{G}:=\{f\in L_{2}(\mathbb{R}^{D})\cap C(\mathbb{R}^{D}):\mathcal{F}(f)/\sqrt{\mathcal{F}(K)}\in L_{2}(\mathbb{R}^{D})\},\] _with the inner product_ \[\langle f,g\rangle_{\mathcal{H}_{K}(\mathbb{R}^{D})}=(2\pi)^{-D/2}\int_{\mathbb{R}^{D}}\frac{\mathcal{F}(f)(\mathbf{\omega})\overline{\mathcal{F}(g)(\mathbf{\omega})}}{\mathcal{F}(K)(\mathbf{\omega})}\mathrm{d}\mathbf{\omega}.\] _Then \(\mathcal{G}=\mathcal{H}_{K}(\mathbb{R}^{D})\), and both inner products coincide._ For \(m>D/2\), the (fractional) Sobolev norm of a function \(g\) on \(\mathbb{R}^{D}\) is defined by \[\|g\|_{\mathcal{W}^{m}(\mathbb{R}^{D})}^{2}=\int_{\mathbb{R}^{D}}|\mathcal{F}(g)(\mathbf{\omega})|^{2}(1+\|\mathbf{\omega}\|_{2}^{2})^{m}\mathrm{d}\mathbf{\omega}, \tag{3}\] and the inner product of the Sobolev space \(\mathcal{W}^{m}(\mathbb{R}^{D})\) is defined by \[\langle f,g\rangle_{\mathcal{W}^{m}(\mathbb{R}^{D})}=\int_{\mathbb{R}^{D}}\mathcal{F}(f)(\mathbf{\omega})\overline{\mathcal{F}(g)(\mathbf{\omega})}(1+\|\mathbf{\omega}\|_{2}^{2})^{m}\mathrm{d}\mathbf{\omega}.\] **Remark 3.1**.: _In this work, we are only interested in Sobolev spaces with \(m>D/2\), because these spaces contain only continuous functions according to the Sobolev embedding theorem._ It can be shown that if \(m\) is an integer, the norm defined in (3) is equivalent to that of the usual Sobolev space (Adams and Fournier, 2003). If \(m\) is not an integer, then the corresponding Sobolev space is called a Bessel potential space (Almeida and Samko, 2006; Gurka et al., 2007). The Sobolev space on a region \(\tilde{\Omega}\) with a positive Lebesgue measure can be defined via restrictions as \[\|f\|_{\mathcal{W}^{m}(\tilde{\Omega})}=\inf\{\|f_{E}\|_{\mathcal{W}^{m}(\mathbb{R}^{D})}:f_{E}\in\mathcal{W}^{m}(\mathbb{R}^{D}),f_{E}|_{\tilde{\Omega}}=f\},\] where \(f_{E}|_{\tilde{\Omega}}\) denotes the restriction of \(f_{E}\) to \(\tilde{\Omega}\). Comparing Theorem 3.1 and (3), it can be seen that if \[c_{1}(1+\|\mathbf{\omega}\|_{2}^{2})^{-m}\leq\mathcal{F}(K)(\mathbf{\omega})\leq c_{2}(1+\|\mathbf{\omega}\|_{2}^{2})^{-m},\quad\forall\mathbf{\omega}\in\mathbb{R}^{D},\] for some constants \(c_{1},c_{2}>0\), then \(\mathcal{W}^{m}(\mathbb{R}^{D})\) coincides with the reproducing kernel Hilbert space \(\mathcal{H}_{K}(\mathbb{R}^{D})\) with equivalent norms (see also Wendland (2004), Corollary 10.13). By the extension theorem (DeVore and Sharpley, 1993), \(\mathcal{H}_{K}(\Omega)\) also coincides with \(\mathcal{W}^{m}(\Omega)\), and the two norms are equivalent.
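As a quick numerical illustration of the Fourier-side norm (3) (our own example, not from the paper), take the one-dimensional function \(g(x)=e^{-x^{2}/2}\), whose Fourier transform under the convention above is \(\mathcal{F}(g)(\omega)=e^{-\omega^{2}/2}\). The sketch below evaluates \(\|g\|^{2}_{\mathcal{W}^{m}(\mathbb{R})}\) for several \(m\); the integral is finite for every \(m\), reflecting that the Gaussian bump is infinitely smooth, and it grows with \(m\).

```python
import numpy as np
from scipy.integrate import quad

def sobolev_norm_sq(m):
    """||g||_{W^m(R)}^2 for g(x) = exp(-x^2/2), using F(g)(w) = exp(-w^2/2)
    in the Fourier-side definition (3) of the (fractional) Sobolev norm."""
    integrand = lambda w: np.exp(-w**2) * (1.0 + w**2) ** m
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

for m in [1, 2, 5, 10]:
    print(m, sobolev_norm_sq(m))
```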
### Random Smoothing Kernel Regression with Early Stopping In this study, we systematically investigate the efficiency of random smoothing data augmentation, which is a widely used technique in deep learning, in improving the estimation efficiency (i.e., convergence rate) for \(f^{*}\in\mathcal{H}(\Omega)\) without assuming any relationship between \(\mathcal{H}(\Omega)\) and \(\mathcal{H}_{K}(\Omega)\) and considering a wide context of \(\Omega\) that may have Lebesgue measure zero. To overcome the lack of smoothness in \(\mathcal{H}_{K}(\Omega)\), we construct \(N\) augmentations for each observed input point \(\mathbf{x}_{j}\) by adding i.i.d. noise \(\mathbf{\varepsilon}_{jk}\) with a continuous probability density function \(p_{\varepsilon}\). We can generate \(\mathbf{\varepsilon}_{jk}\) independently for each \(j\), or we can generate \(\mathbf{\varepsilon}_{k}\) for \(k=1,...,N\), and apply them to all \(\mathbf{x}_{j}\), \(j=1,...,n\) simultaneously. While the latter is easier to implement, the former is easier to theoretically justify. Due to its lower computational complexity, we only consider the latter method in this work. **Remark 3.2** (Adding non-smooth noise and practical data augmentation techniques).: _It should be noted that we do not assume \(p_{\varepsilon}\) to be Gaussian, and can be non-smooth. While applying Gaussian noise is a common practice, not all data augmentation techniques involve smooth noise, such as random crop, random mask, and random flip. In this work, we investigate various types of noise, including non-smooth Laplace noise and smooth Gaussian noise. Although adding non-smooth noise still cannot capture the effects of complex data augmentation techniques such as random mask or random crop, we aim to use it as a tool to gain insights into the success of these more complicated data augmentations._ With augmented data, we proceed to the estimation of the function \(f^{*}\). For any point \(\mathbf{x}\in\Omega\), we obtain the estimator by computing the average of the function values evaluated at the \(N\) augmented inputs. Specifically, the estimator is constructed as \[f(\mathbf{x})=\frac{1}{N}\sum_{k=1}^{N}h(\mathbf{x}+\mathbf{\varepsilon}_{k}) \tag{4}\] for \(h\in\mathcal{H}_{K}(\Omega)\). By properties of the RKHS, \(f\) as in (4) is also inside \(\mathcal{H}_{K}(\Omega)\). We consider the following \(l_{2}\) loss function defined as \[L_{n}(f)=\frac{1}{2n}\sum_{j=1}^{n}\left(f(\mathbf{x}_{j})-y_{j} \right)^{2}, \tag{5}\] or equivalently, \[L_{n}(h)=\frac{1}{2n}\sum_{j=1}^{n}\left(\frac{1}{N}\sum_{k=1}^{ N}h(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k})-y_{j}\right)^{2}.\] **Remark 3.3**.: _The loss function \(L_{n}(h)\) is slightly different from the loss function used in practice, i.e.,_ \[L_{n}^{\prime}(h)=\frac{1}{2n}\sum_{j=1}^{n}\frac{1}{N}\sum_{k=1 }^{N}\left(h(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k})-y_{j}\right)^{2}.\] _However, it can be shown that \(L_{n}(h)\) is close to \(L_{n}^{\prime}(h)\). To see this, note that_ \[L_{n}^{\prime}(h)-L_{n}(h)= \frac{1}{2n}\sum_{j=1}^{n}\frac{1}{2N^{2}}\sum_{k=1}^{N}\sum_{l=1 }^{N}\left(h(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k})-h(\mathbf{x}_{j}+\mathbf{\varepsilon}_{ l})\right)^{2}. \tag{6}\] _As we will see later in Section 4, we require that the variance of \(\mathbf{\varepsilon}_{k}\) to converge to zero, which implies that the right-hand side in (6) is close to zero._ In order to minimize (5), we apply the gradient descent method. 
Since we impose a restriction that the estimator \(f\) is in the RKHS \(\mathcal{H}_{K}(\Omega)\), by the representer theorem, it suffices to consider the function space \[\mathcal{F}_{0}=\left\{f:f(\cdot)=\sum_{j=1}^{n}\sum_{k=1}^{N}w_{ jk}K(\cdot-(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k})),w_{jk}\in\mathbb{R}\right\}.\] Because the number of parameters in \(\mathcal{F}_{0}\) scales as \(n\times N\), which can be prohibitively large if there are too many augmentations, it is often necessary to reduce the flexibility of \(\mathcal{F}_{0}\) in order to minimize the loss function (5). To achieve this, we consider a subspace of \(\mathcal{F}_{0}\), denoted by \[\mathcal{F}=\left\{f:f(\cdot)=\sum_{j=1}^{n}\sum_{k=1}^{N}w_{j}K( \cdot-(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k})),w_{j}\in\mathbb{R}\right\},\] i.e., all the weights for the different augmented data from the same input \(\mathbf{x}_{j}\) are the same. Define an empirical random smoothing kernel function by \[K_{S}(\mathbf{x}_{l}-\mathbf{x}_{j}):=\frac{1}{N^{2}}\sum_{k_{1}=1}^{N}\sum_{k_{2}=1}^{N} K(\mathbf{x}_{l}+\mathbf{\varepsilon}_{k_{1}}-(\mathbf{x}_{j}+\mathbf{\varepsilon}_{k_{2}})), \tag{7}\] whose expectation leads to the following random smoothing kernel function, which plays an important role in the convergence analysis. **Definition 3.1** (Random smoothing kernel function).: _The kernel function \(K_{S}\) defined in (7) is the empirical random smoothing kernel function corresponding to the original kernel \(K\). The expectation of \(K_{S}\) with respect to the noise \(\mathbf{\varepsilon}_{k}\) is the convoluted kernel function \(K*p_{\varepsilon}\), where \(*\) is a convolution operator defined by_ \[(g_{1}*g_{2})(\mathbf{s})=\int g_{1}(\mathbf{t})g_{2}(\mathbf{s}-\mathbf{t})\mathrm{d}\mathbf{t},\] _for two functions \(g_{1}\) and \(g_{2}\). We call the convoluted kernel function \(K*p_{\varepsilon}\) as the random smoothing kernel function._ Now we can rewrite the loss function \(L_{n}(f)\) in (5) as \[L_{n}(\mathbf{w})=\frac{1}{2}\left\|\mathbf{y}-\mathbf{K}\mathbf{w}\right\|_{2}^{2}, \tag{8}\] where \(\mathbf{K}=(K_{S}(\mathbf{x}_{j}-\mathbf{x}_{k}))_{jk}\), \(\mathbf{w}=(w_{1},...,w_{n})^{T}\), and \(\mathbf{y}=(y_{1},...,y_{n})^{T}\). As stated in Raskutti et al. (2014), it is more natural to perform gradient descent on the transformed vector \(\mathbf{\theta}=\sqrt{\mathbf{K}}\mathbf{w}\), where the square root can be taken because \(\mathbf{K}\) is positive (semi-)definite. Then, we apply gradient descent on the square loss (8) with the transformed vector \(\mathbf{\theta}\). Initialize \(\mathbf{\theta}_{0}=\mathbf{w}_{0}=0\). Taking gradient with respect to \(\mathbf{\theta}\), direct computation shows that the gradient update is \[\mathbf{\theta}_{t+1}=\mathbf{\theta}_{t}-\beta_{t}\left(\mathbf{K}\mathbf{\theta}_{t}- \sqrt{\mathbf{K}}\mathbf{y}\right), \tag{9}\] where \(\beta_{t}>0\), \(t=0,1,2,\ldots\) is the learning rate (step size). With parameter \(\mathbf{w}_{t}\) obtained at the \(t\)-th iteration, the corresponding estimator of \(f^{*}(\mathbf{x})\) for any point \(\mathbf{x}\in\Omega\) is defined by \[f_{t}(\mathbf{x})=\mathbf{w}_{t}^{T}\mathbf{k}(\mathbf{x}), \tag{10}\] where \(\mathbf{k}(\mathbf{x})=(K_{S}(\mathbf{x}-\mathbf{x}_{1}),\ldots,K_{S}(\mathbf{x}-\mathbf{x}_{n}))^ {T}\). In practice, gradient descent is often paired with weight decay (Krogh and Hertz, 1992) to prevent overfitting and improve generalization (Hu et al., 2021). 
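Before turning to weight decay, the following is a minimal numerical sketch (our illustration, not the paper's code) of the pipeline described above: shared Gaussian augmentation noise \(\mathbf{\varepsilon}_{k}\) applied to every input, the empirical random smoothing kernel \(K_{S}\) in (7), the gradient recursion (9) on \(\mathbf{\theta}=\sqrt{\mathbf{K}}\mathbf{w}\), and the early-stopped estimator (10). The Matérn-3/2 kernel, the one-dimensional synthetic target, and the hand-picked stopping time are illustrative assumptions rather than the tuned choices analyzed in Section 4.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def matern32(r, phi=3.0):
    """Matern-3/2 kernel evaluated on the distance r (illustrative choice of K)."""
    return (1.0 + np.sqrt(3) * phi * r) * np.exp(-np.sqrt(3) * phi * r)

# Synthetic data y_j = f*(x_j) + eps_j on [0, 1].
n, N, sigma_n = 60, 20, 0.05          # sample size, augmentations, smoothing scale
x = np.sort(rng.uniform(size=n))
f_star = lambda t: np.sin(2 * np.pi * t)
y = f_star(x) + 0.1 * rng.normal(size=n)

# Shared augmentation noise eps_k applied to all inputs (Gaussian smoothing).
eps = sigma_n * rng.normal(size=N)

def K_S(s, t):
    """Empirical random smoothing kernel (7): average of K over all noise pairs."""
    d = np.abs((s[:, None] + eps[None, :])[:, :, None, None]
               - (t[None, None, :, None] + eps[None, None, None, :]))
    return matern32(d).mean(axis=(1, 3))

K = K_S(x, x)                                    # n x n kernel matrix
sqrtK = np.real(sqrtm(K))
beta = 1.0 / (2 * n * K_S(x[:1], x[:1])[0, 0])   # step size beta = C1/n, C1 = (2 sup K_S)^(-1)

# Gradient descent on theta = sqrt(K) w, recursion (9), with early stopping.
theta = np.zeros(n)
for _ in range(2000):                            # stopping time chosen by hand here
    theta -= beta * (K @ theta - sqrtK @ y)
w = np.linalg.lstsq(sqrtK, theta, rcond=None)[0] # recover w from theta = sqrt(K) w

# Early-stopped estimator (10) on a test grid.
x_test = np.linspace(0, 1, 200)
f_t = K_S(x_test, x) @ w
print("RMSE:", np.sqrt(np.mean((f_t - f_star(x_test)) ** 2)))
```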
Therefore, we also consider the gradient descent with weight decay, where the parameter \(\mathbf{\theta}\) is updated by \[\mathbf{\theta}_{t+1}=\mathbf{\theta}_{t}-\beta_{t}\left(\mathbf{K}\mathbf{\theta}_{t}- \sqrt{\mathbf{K}}\mathbf{y}\right)-\alpha_{t}\mathbf{\theta}_{t}, \tag{11}\] with \(\alpha_{t}>0\), \(t=0,1,2,\ldots\) being the strength of weight decay. The learning rate \(\beta_{t}\) and weight decay parameter \(\alpha_{t}\) can be varied with \(t\), but for mathematical convenience, we assume that the step sizes \(\beta_{t}\) and the weights decay parameter \(\alpha_{t}\) are not related to the iteration number \(t\), i.e., \(\beta_{t}=\beta\) and \(\alpha_{t}=\alpha\) for all \(t=0,1,2,\ldots\). In this work, we are interested in the prediction error \[\|f^{*}-f_{t}\|_{L_{2}(P_{\mathbf{X}})}. \tag{12}\] In the rest of this paper, the following definitions are used. For two positive sequences \(a_{n}\) and \(b_{n}\), we write \(a_{n}\asymp b_{n}\) if, for some \(C,C^{\prime}>0\), \(C\leq a_{n}/b_{n}\leq C^{\prime}\). Similarly, we write \(a_{n}\gtrsim b_{n}\) if \(a_{n}\geq Cb_{n}\) for some constant \(C>0\), and \(a_{n}\lesssim b_{n}\) if \(a_{n}\leq C^{\prime}b_{n}\) for some constant \(C^{\prime}>0\). Also, \(C,C^{\prime},c_{j},C_{j},j\geq 0\) are generic positive constants, of which value can change from line to line. ## 4 Main Results In this section, we present our main theoretical results. We begin by collecting all the assumptions that will be used throughout the paper in Section 4.1. Then, in Section 4.2, we consider the case where \(\Omega\) has a finite intrinsic dimension. Finally, in Section 4.3, we consider the case where \(\mathcal{H}(\Omega)\) is a tensor RKHS. ### Assumptions In this work, we will use the following assumptions. **Assumption 4.1**.: _The error \(\epsilon_{j}\)'s in (1) are i.i.d. sub-Gaussian (van de Geer, 2000), i.e., satisfying_ \[C^{2}(\mathbb{E}e^{|\epsilon_{j}|^{2}/C^{2}}-1)\leq C^{\prime},\quad j=1,...,n.\] **Assumption 4.2**.: _There exists \(m_{0}>D/2\) such that_ \[c_{1}(1+\|\mathbf{\omega}\|_{2}^{2})^{-m_{0}}\leq\mathcal{F}(K)( \mathbf{\omega})\leq c_{2}(1+\|\mathbf{\omega}\|_{2}^{2})^{-m_{0}},\forall\mathbf{\omega} \in\mathbb{R}^{D}. \tag{13}\] **Assumption 4.3** (Tensor kernel function).: _The kernel function \(K\) can be expressed as \(K=\prod_{j=1}^{D}K_{j}\), where \(K_{j}\)'s are one-dimensional kernel functions. There exists \(m_{0}>1/2\) such that for \(j=1,\ldots,D\),_ \[c_{1}(1+\omega_{j}^{2})^{-m_{0}}\leq\mathcal{F}(K_{j})(\omega) \leq c_{2}(1+\omega_{j}^{2})^{-m_{0}},\forall\omega_{j}\in\mathbb{R}. \tag{14}\] **Example 4.1**.: _A class of kernel functions satisfying Assumption 4.2 is the isotropic Matern kernel functions (Williams and Rasmussen, 2006). With reparameterization, the Matern kernel function is given by_ \[K(\mathbf{x})=\frac{(2\phi\sqrt{m_{0}-D/2}\|\mathbf{x}\|_{2})^{m_{0}-D/2 }}{\Gamma(m_{0}-D/2)2^{m_{0}-D/2-1}}B_{m_{0}-D/2}(2\phi\sqrt{m_{0}-D/2}\|\mathbf{x} \|_{2}), \tag{15}\] _with the Fourier transform (Tuo and Wu, 2016)_ \[\mathcal{F}(K)(\mathbf{\omega})=\pi^{-D/2}\frac{\Gamma(m_{0})}{\Gamma(m_{0}-D/2)}(4 \phi^{2}(m_{0}-D/2))^{m_{0}-D/2}(4\phi^{2}(m_{0}-D/2)+\|\mathbf{\omega}\|^{2})^{-m_{ 0}}, \tag{16}\] _where \(\phi>0\), and \(B_{m_{0}-D/2}\) is the modified Bessel function of the second kind. 
It can be seen that (16) is bounded above and below by \((1+\|\mathbf{\omega}\|_{2}^{2})^{-m_{0}}\), up to a constant multiplier._ _Another example satisfying Assumption 4.2 is the generalized Wendland kernel function (Wendland, 2004; Gneiting, 2002; Chernih and Hubbert, 2014; Bevilacqua et al., 2019; Fasshauer and McCourt, 2015), defined as_ \[K_{GW}(\mathbf{x})=\left\{\begin{array}{ll}\frac{1}{\operatorname{ Beta}(2\kappa,\mu+1)}\int_{\|\phi\mathbf{x}\|_{2}}^{1}u(u^{2}-\|\phi\mathbf{x}\|_{2}^{2})^{ \kappa-1}(1-u)^{\mu}\mathrm{d}u,&0\leq\|\mathbf{x}\|<\frac{1}{\phi},\\ 0,&\|\mathbf{x}\|_{2}\geq\frac{1}{\phi},\end{array}\right. \tag{17}\] _where \(\phi,\kappa>0\) and \(\mu\geq(D+1)/2+\kappa\), and \(\operatorname{Beta}\) denotes the beta function. Theorem 1 of Bevilacqua et al. (2019) shows that (17) satisfies Assumption 4.2 with \(m_{0}=(D+1)/2+\kappa\). If the kernel function \(K=\prod_{j=1}^{D}K_{j}\), and each \(K_{j}\) is a one-dimensional Matern kernel function or generalized Wendland kernel function, then Assumption 4.3 is satisfied._ **Assumption 4.4** (Random smoothing noise).: _The elements of \(\mathbf{\varepsilon}_{k}\) are i.i.d. mean zero sub-Gaussian random variables. Furthermore, we consider three cases of \(\mathbf{\varepsilon}_{k}\) as follows, where \(\sigma_{n}^{2}\)'s are positive parameters to be specified later in Section 4._ 1. _(Polynomial noise) There exists_ \(m_{\varepsilon}>D/2\) _such that the characteristic function of_ \(\mathbf{\varepsilon}_{k}\) _satisfies_ \[c_{1}(1+\sigma_{n}^{2}\|\mathbf{\omega}\|_{2}^{2})^{-m_{\varepsilon}}\leq\mathbb{ E}(e^{i\mathbf{\omega}^{T}\mathbf{\varepsilon}_{k}})\leq c_{2}(1+\sigma_{n}^{2}\|\mathbf{ \omega}\|_{2}^{2})^{-m_{\varepsilon}},\forall\mathbf{\omega}\in\mathbb{R}^{D}.\] 2. _(Tensor Polynomial noise) There exists_ \(m_{\varepsilon}>1/2\) _such that the characteristic function of_ \(\mathbf{\varepsilon}_{k}\) _satisfies_ \[c_{1}\prod_{j=1}^{D}(1+\sigma_{n}^{2}\omega_{j}^{2})^{-m_{ \varepsilon}}\leq\mathbb{E}(e^{i\mathbf{\omega}^{T}\mathbf{\varepsilon}_{k}})\leq c_{ 2}\prod_{j=1}^{D}(1+\sigma_{n}^{2}\omega_{j}^{2})^{-m_{\varepsilon}},\forall \mathbf{\omega}=(\omega_{1},\ldots,\omega_{D})\in\mathbb{R}^{D}.\] 3. _(Gaussian noise) The elements of_ \(\mathbf{\varepsilon}_{k}\) _are normally distributed with variance_ \(\sigma_{n}^{2}\)_._ _Here the constants \(c_{1}\) and \(c_{2}\) do not depend on \(\sigma_{n}\) and \(m_{\varepsilon}\). We call \(\sigma_{n}\) the smoothing scale in this work._ **Example 4.2**.: _It is easy to construct distributions satisfying (C1) or (C2). For example, the generalized Laplace distribution with parameter \(s\) has a density function (Kozubowski et al., 2013; Kotz et al., 2001)_ \[p_{\varepsilon}(\mathbf{x})=\frac{2^{1-s}}{(2\pi)^{D/2}\Gamma(s)}(\sqrt{2}\|\mathbf{x }\|_{2})^{s+D/2}B_{s-D/2}\left(\sqrt{2}\|\mathbf{x}\|_{2}\right), \tag{18}\] _where \(\Gamma\) is the Gamma function, and \(B_{s-D/2}\) is the modified Bessel function of the second kind. 
It can be shown that the generalized Laplace distribution has the characteristic function_ \[\mathbb{E}_{\mathbf{X}}(e^{i\mathbf{\omega}^{T}\mathbf{X}})=\left(1+\frac{1}{2}\mathbf{\omega}^{ T}\mathbf{\omega}\right)^{-s}.\] _Then \(\mathbf{\varepsilon}_{k}=\sigma_{n}\mathbf{X}\) satisfies Assumption 4.4 (C1)._ _If each component of \(\mathbf{\varepsilon}_{k}/\sigma_{n}\) has a univariate generalized Laplace distribution and all components are independent, then Assumption 4.4 (C2) is satisfied._ Assumption 4.1 assumes that the observation error is sub-Gaussian, which is a standard assumption in nonparametric literature. See van de Geer (2000) for example. Assumption 4.2 assumes that the Fourier transform of the kernel function \(K(\cdot-\cdot)\) has an algebraic decay. Under this assumption, Corollary 10.13 of Wendland (2004) shows that the reproducing kernel Hilbert space \(\mathcal{H}_{K}(\mathbb{R}^{D})\) coincides with the Sobolev space \(\mathcal{W}^{m_{0}}(\mathbb{R}^{D})\), with equivalent norms. More details on this can be found in Section 3.1. Assumption 4.3 states that the kernel function \(K\) has a tensor structure, and the Fourier transform of each component \(K_{j}\) has an algebraic decay. Assumptions 4.2 and 4.3 will be used in Sections 4.2 and 4.3, respectively. Assumption 4.4 imposes conditions on the noise \(\mathbf{\varepsilon}_{k}\)'s and considers three types of augmentations: polynomial noise, tensor polynomial noise, and Gaussian noise. The corresponding smoothing techniques are referred to as _polynomial smoothing_, _tensor polynomial smoothing_, and _Gaussian smoothing_, respectively. ### Low Intrinsic Dimension Space We first consider \(\Omega\) with finite intrinsic dimension. The intrinsic dimension provides a "measure of the complexity" for the region of interest \(\Omega\). The definition of the intrinsic dimension depends on the covering number; see Definition 2.1 of van de Geer (2000) for example. **Definition 4.1** (Covering number).: _Consider a subset \(\mathcal{A}\subset\mathcal{G}\) where \(\mathcal{G}\) is a normed space. For a given \(\delta>0\), the covering number of \(\mathcal{A}\), denoted by \(\mathcal{N}_{\mathcal{G}}(\delta,\mathcal{A})\), is defined by the smallest integer \(M\) such that \(\mathcal{A}\) can be covered by \(M\) balls with radius \(\delta\) and centers \(\mathbf{x}_{1},...,\mathbf{x}_{M}\in\mathcal{G}\)._ **Assumption 4.5** (Low intrinsic dimension).: _There exist positive constants \(c_{1}\) and \(d\leq D\) such that for all \(\delta\in(0,1)\), we have_ \[\mathcal{N}_{\ell_{\infty}^{D}}(\delta,\Omega)\leq c_{1}\delta^{-d},\] _where \(\ell_{\infty}^{D}\) is the \(\mathbb{R}^{D}\) space equipped with \(\ell_{\infty}\) norm._ For discussion and examples of regions that satisfy Assumption 4.5, we refer to Hamm and Steinwart (2021a). In particular, if \(\Omega\subset\mathbb{R}^{D}\) is a bounded region with positive Lebesgue measure or a bounded \(D^{\prime}\)-dimensional differentiable manifold, then Assumption 4.5 holds with \(d=D\) and \(d=D^{\prime}\), respectively. Besides the low intrinsic dimension, our theoretical results depend on the smoothness of the underlying function. Because we are considering function space on a finite intrinsic dimensional space, which may have Lebesgue measure zero, the usual definition of (fractional) Sobolev space via Fourier transform stated in Section 3.1 cannot be directly applied in our case. 
Thus, we need to introduce our notion of the smoothness of functions on finite intrinsic dimension space. Specifically, we impose the following assumption on the underlying true function \(f^{*}\). **Assumption 4.6**.: _There exists a region \(\Omega_{1}\) with positive Lebesgue measure and a Lipschitz boundary such that \(\Omega\subset\Omega_{1}\). The underlying true function \(f^{*}\) is well-defined on \(\Omega_{1}\) and \(m_{f}=\operatorname*{arginf}_{m>D/2}\{m:f^{*}\in\mathcal{W}^{m}(\Omega_{1})\}\) with \(f^{*}\in\mathcal{W}^{m_{f}}(\Omega_{1})\), and \(m_{f}>D/2\)._ In Assumption 4.6, we assume that the boundary of \(\Omega_{1}\) is "sufficiently regular" (see Leoni (2017) for the definition of Lipschitz boundary) and \(\Omega\) can be contained by \(\Omega_{1}\). Thus, the extension theorem (DeVore and Sharpley, 1993) ensures that there exists an extension operator from \(L_{2}(\Omega_{1})\) to \(L_{2}(\mathbb{R}^{D})\) and the smoothness of each function is maintained. With Assumption 4.6, we use \(m_{f}\) to denote the smoothness of \(f^{*}\). By some well-known extension theorems (see, for example, DeVore and Sharpley (1993); Evans (2009); Stein (1970)), if \(D=d\), then our notion of smoothness coincides with the smoothness of functions on the whole space \(\mathbb{R}^{D}\). Now we are ready to present the main theorems in this subsection. Theorems 4.1 and 4.2 state the convergence rates when applying polynomial smoothing and Gaussian smoothing, respectively. **Theorem 4.1** (Polynomial smoothing).: _Suppose Assumptions 4.1, 4.2, 4.4 (C1), 4.5 and 4.6 are satisfied. Let \(f_{t}(\mathbf{x})\) be as in (10) and \(\beta=n^{-1}C_{1}\) with \(C_{1}\leq(2\sup_{\mathbf{x}\in\mathbb{R}^{D}}K_{S}(\mathbf{x}))^{-1}\). Suppose the smoothing scale \(\sigma_{n}\asymp n^{\nu}\) with \(\nu\leq 0\). Suppose one of the following holds:_ 1. _There is no weight decay in the gradient descent, and the iteration number_ \(t\) _satisfies_ \(t\asymp n^{\frac{2(m_{0}+m_{\varepsilon})}{2m_{f}+d}}\sigma_{n}^{2m_{\varepsilon}}\)__ 2. _There is weight decay in the gradient descent with_ \(\alpha\asymp n^{-1-\frac{2(m_{0}+m_{\varepsilon})}{2m_{f}+d}}\sigma_{n}^{-2m_{ \varepsilon}}\)_, and the iteration number satisfies_ \(t\geq C_{2}(\frac{m_{f}}{2m_{f}+d}+1/2)\log n/(\log(1-\alpha))\)_._ _Then by setting \(m_{\varepsilon}=2d^{-1}(2D\max(m_{0},m_{f})+m_{0}d)\log n-m_{0}\) and_ \[\nu=\left\{\begin{array}{ll}-\frac{2(2m_{0}+2m_{\varepsilon})D-(2m_{0}+2m_{ \varepsilon}-D)d}{(2m_{f}+d)(4m_{\varepsilon}D-(2m_{0}+2(1-(\log n)^{-1})m_{ \varepsilon}-D)d)}<0,&D>d,\\ 0,&D=d,\end{array}\right.\] _we have_ \[\|f_{t}-f^{*}\|_{L_{2}(P_{\mathbf{X}})}^{2}= O_{\mathbb{P}}\left(n^{-\frac{2m_{f}}{2m_{f}+d}}(\log n)^{2m_{f}+1} \right).\] _for \(N>N_{0}\), where \(N\) is the number of augmentations, and \(N_{0}\) depends on \(n\) (specified in (43))._ **Theorem 4.2** (Gaussian smoothing).: _Suppose Assumptions 4.1, 4.2, 4.4 (C3), 4.5, and 4.6 are satisfied. Let \(f_{t}(\mathbf{x})\) be as in (10), \(\beta=n^{-1}C_{1}\) with \(C_{1}\leq(2\sup_{\mathbf{x}\in\mathbb{R}^{D}}K_{S}(\mathbf{x}))^{-1}\), and \(\sigma_{n}\asymp n^{-\frac{1}{2m_{f}+d}}\). Suppose one of the following holds:_ 1. _There is no weight decay in the gradient descent, and the iteration number_ \(t\) _satisfies_ \(t\asymp n^{\frac{2m_{0}+2m_{f}}{2m_{f}+d}}\)__ 2. 
_There is weight decay in the gradient descent with_ \(\alpha\asymp n^{-1-\frac{2(m_{0}+m_{\ell})}{2m_{f}+d}}\)_, and the iteration number satisfies_ \(t\geq C_{2}(\frac{m_{f}}{2m_{f}+d}+1/2)\log n/(\log(1-\alpha))\)_._ _Then we have_ \[\|f^{*}-\hat{f}_{t}\|_{L_{2}(P_{\mathbf{X}})}^{2}=O_{\mathbb{P}}(n^{-\frac{2m _{f}}{2m_{f}+d}}(\log n)^{D+1}), \tag{19}\] _when \(N>N_{0}\), where \(N\) is the number of augmentations, and \(N_{0}\) depends on \(n\) (specified in (75))._ **Remark 4.1**.: _We require \(\beta=n^{-1}C_{1}\) with \(C_{1}\leq(2\sup_{\mathbf{x}\in\mathbb{R}^{D}}K_{S}(\mathbf{x}))^{-1}\) in both Theorems 4.1 and 4.2 is because by Gershgorin's theorem (Varga, 2010), we have for sufficiently large \(n\),_ \[\beta\eta_{1}(\mathbf{K})+\alpha\leq\beta n\max_{j,k}|K_{S}(\mathbf{x}_{j},\mathbf{x}_ {k})|+\alpha<1,\] _which ensures that the gradient descent algorithm can converge._ If the region \(\Omega\) has a positive Lebesgue measure, then it has been shown that the optimal convergence rate is \(n^{-m_{f}/(2m_{f}+D)}\)(Stone, 1982). By random smoothing, the gradient descent with early stopping can achieve the optimal convergence rate in this case, up to a logarithm term. Furthermore, it can adapt to the low intrinsic dimension case, where \(\Omega\) can have Lebesgue measure zero. In Hamm and Steinwart (2021), it is strongly hypothesized that the convergence rate \(n^{-m_{f}/(2m_{f}+d)}\) is optimal. Although our definition of the smoothness is different, we have the same hypothesis and leave its exploration as a future work. It is worth noting that our approach differs from that in Hamm and Steinwart (2021), and therefore, we can investigate the effects of polynomial smoothing, which may have its own interest. Such non-smooth noise can shed light on non-smooth augmentations commonly used in practice. Furthermore, we obtain an identical result as in Hamm and Steinwart (2021) if we use Gaussian smoothing. Comparing the convergence rates in Theorems 4.1 and 4.2, we find that the convergence rate by polynomial smoothing is slightly worse than that of Gaussian smoothing, since \(m_{f}>D/2\)(Assumption 4.6). In comparison, Eberts and Steinwart (2013) achieved convergence rate of the similar form \(n^{-2m_{f}/(2m_{f}+d)+\xi}\) by applying kernel ridge regression with Gaussian kernel functions, where \(\xi\) can be any value strictly larger than zero. Clearly, this rate is slower than those in Hamm and Steinwart (2021) and ours. Under additional assumptions such as a compact Riemannian manifold input space and the underlying function having Lipschitz continuity \(m_{f}\in(0,1]\), Ye and Zhou (2008) derived convergence rates of the form \(\big{(}\log^{2}(n)/n\big{)}^{m_{f}/(8m_{f}+4d)}\). Instead of kernel ridge regression, Yang and Dunson (2016) focused on Bayesian regression with Gaussian process and proved the convergence rate \(n^{-2m_{f}/(2m_{f}+d)}(\log n)^{d+1}\). However, their theorem is limited by a compact low dimensional differentiable manifold input space, and the condition \(m_{f}\leq 2\). As a comparison, we do not require such restrictive assumptions. From a different perspective of early stopping, we consider both cases with and without weight decay, while existing studies only consider the case without weight decay. With weight decay, one can achieve the same convergence rate but with a much smaller iteration number. 
Specifically, the iteration number should be polynomial in \(n\) without weight decay, which can be reduced to polynomial in \(\log n\) if one applies weight decay. This also justifies the use of weight decay in practice. Besides, the random smoothing kernel enables us to establish connections with data augmentation and we further explain the effectiveness of using augmentation, which may lead to a new interpretation of using augmentations in deep learning. Our approach to studying early stopping is distinct from previous studies in the literature (see, e.g., Dieuleveut and Bach (2016); Yao et al. (2007); Pillaud-Vivien et al. (2018); Raskutti et al. (2014)), which typically use integral operator techniques and impose assumptions on the eigenvalues of the kernel function (which always exists by Mercer's theorem). However, such assumptions cannot be easily applied to the low intrinsic dimension case, as it is unclear how eigenvalues behave in this regime. Additionally, previous studies often impose a "source condition" that requires the kernel function to have finite smoothness, which is not satisfied when using Gaussian smoothing to construct the random smoothing kernel. Therefore, even for the special case where the intrinsic dimension is equal to the ambient dimension, Theorems 4.1 and 4.2 improve upon previous results in the early stopping literature. **Remark 4.2**.: _In general, the Bessel potential space used in our work is different from the Besov space used in Hamm and Steinwart (2021). Specifically, the Bessel potential space is obtained via complex interpolation, while the Besov space is constructed by real interpolation. For a more thorough explanation, readers may refer to Edmunds and Triebel (2008). We chose to use the Bessel potential space because of its natural connection to the Fourier transform and the characteristic function of a random variable, which allowed us to study the impact of the augmentations considered in our work._ **Remark 4.3**.: _There are some other notions of smoothness in the literature. For example, Hamm and Steinwart (2021) define the smoothness induced by the Besov spaces, and Yang and Dunson (2016) assume \(f^{*}\) has \(k\)-th continuous derivatives. Another alternative definition of the Sobolev space on \(\Omega\) is via Sobolev-Slobodeckij spaces. For simplicity, let \(\Omega\subset\mathbb{R}^{D-1}\). For a function \(f\), \(\theta\in(0,1)\), and \(s>0\), define the Slobodeckij seminorm_ \[|f|_{\theta,\Omega}=\left(\int_{\Omega\times\Omega}\frac{|f(\mathbf{x})-f(\mathbf{x}^ {\prime})|^{2\theta+D-1}}{\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2\theta+D-1}}\mathrm{ d}\mathbf{x}\mathrm{d}\mathbf{x}^{\prime}\right)^{1/2}.\] _Then the Sobolev-Slobodeckij space on \(\Omega\), denoted by \(W^{s}(\Omega)\), is defined by_ \[W^{s}(\Omega)=\left\{f:f\in W^{|s|}(\Omega):\sup_{\alpha=\lfloor s\rfloor}|D^ {\alpha}f|_{\theta,\Omega}<\infty\right\},\] _with norm_ \[\|f\|_{W^{s}(\Omega)}=\|f\|_{W^{\lfloor s\rfloor}(\Omega)}+\sup_{\alpha=[s]}|D^{ \alpha}f|_{\theta,\Omega},\] _and \(D^{\alpha}f:=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\ldots x_{d} ^{\alpha_{d}}}f\) denotes the \(\alpha\)-th (weak) derivative of a function \(f\) with \(|\alpha|=\alpha_{1}+\ldots+\alpha_{d}\) for a multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\in\mathbb{N}_{0}^{d}\). 
By the trace extension theorem (Triebel, 2010), there exists an extension operator such that the extended function \(f_{E}\in W^{s+1/2}(\mathbb{R}^{D})\) and \(f_{E}|_{\mathbb{R}^{D-1}}=f\), which implies \(m_{f}=s+1/2\) if \(\Omega\) has a positive Lebesgue measure in \(\mathbb{R}^{D-1}\)._ ### Tensor Reproducing Kernel Hilbert Space In this section, we consider a low-dimensional structure for the function class, specifically a _tensor reproducing kernel Hilbert space_. Let \(K=\prod_{j=1}^{D}K_{j}\) be kernel functions that satisfy Assumption 4.3, while \(\Omega\) can have a low intrinsic dimensional structure, as discussed in Section 4.2, or have a positive Lebesgue measure in \(\mathbb{R}^{D}\). Our theoretical results in this section are based on mixed smooth Sobolev spaces, denoted by \(\mathcal{MW}^{m}(\mathbb{R}^{D})\), where \(m>1/2\). For a function \(f\) defined on \(\mathbb{R}^{D}\), the mixed smooth Sobolev norm is defined as \[\|f\|_{\mathcal{MW}^{m}(\mathbb{R}^{D})}=\left(\int_{\mathbb{R}^{D}}|\mathcal{ F}(f)(\mathbf{\omega})|^{2}\prod_{j=1}^{D}(1+|\omega_{j}|^{2})^{m}\mathrm{d}\mathbf{ \omega}\right)^{1/2}, \tag{20}\] and the mixed smooth Sobolev spaces on \(\Omega\) can be defined via restriction similar to the Sobolev spaces. In fact, the mixed smooth Sobolev space is a tensor product of one-dimensional Sobolev spaces, and it can be shown that \(\mathcal{MW}^{m_{0}}(\mathbb{R}^{D})\) is equivalent to the tensor reproducing kernel Hilbert space generated by kernel function \(K=\prod_{j=1}^{D}K_{j}\) satisfying Assumption 4.3. Because of such a tensor structure, it is often considered as a reasonable model reducing the complexity in high-dimensional spaces (Kuhn et al., 2015; Dung, 2021). For instance, the mixed smooth Sobolev spaces are utilized in high-dimensional approximation and numerical methods of PDE (Bungartz and Griebel, 1999), data mining (Garcke et al., 2001), and deep neural networks (Dung, 2021). If the underlying function belongs to some mixed smooth Sobolev space, then it can be shown that by applying appropriate augmentations, we can achieve a fast convergence rate, which nearly coincides with the minimax rate in the one-dimensional case, up to a logarithmic term. Similar to Assumption 4.6, we assume that \(f^{*}\) can be extended to some "regular space" with positive Lebesgue measure, as follows. **Assumption 4.7**.: _There exists a region \(\Omega_{1}\) with positive Lebesgue measure and a Lipschitz boundary such that \(\Omega\subset\Omega_{1}\), and the underlying true function \(f^{*}\) is well-defined on \(\Omega_{1}\) and \(f^{*}\in\mathcal{MW}^{m_{f}}(\Omega_{1})\)._ The following theorem states the convergence rate when applying tensor polynomial smoothing in the tensor RKHS case. **Theorem 4.3** (Tensor polynomial smoothing).: _Suppose Assumptions 4.1, 4.3, 4.4 (C2), 4.5, and 4.7 are satisfied. Let \(f_{t}(\mathbf{x})\) be as in (10) and \(\beta=n^{-1}C_{1}\) with \(C_{1}\leq(2\sup_{\mathbf{x}\in\mathbb{R}^{D}}K_{S}(\mathbf{x}))^{-1}\). Let \(m_{\varepsilon}+m_{0}\geq m_{f}\), and the smoothing scale \(\sigma_{n}\asymp 1\)._ _Then the following statements are true with \(N>N_{0}\), where \(N\) is the number of augmentations, and \(N_{0}\) depends on \(n\) (specified in (85)). Suppose one of the following holds:_ 1. _There is no weight decay in the gradient descent, and the iteration number_ \(t\) _satisfies_ \(t\asymp n^{\frac{2(m_{0}+m_{\varepsilon})}{2m_{f}+1}}(\log n)^{\frac{2(D-1)(m_ {0}+m_{\varepsilon})+1}{2m_{f}+1}}\)__ 2. 
_There is weight decay in the gradient descent with_ \(\alpha\asymp n^{-1-\frac{2(m_{0}+m_{\varepsilon})}{2m_{f}+d}}(\log n)^{\frac{ 2(D-1)(m_{0}+m_{\varepsilon})+1}{2m_{f}+1}}\)_, and the iteration number satisfies_ \(t\geq C_{2}(\frac{m_{f}}{2m_{f}+1}+1/2)\log n/(\log(1-\alpha))\)_._ _Then we have_ \[\|f_{t}-f^{*}\|_{L_{2}(P_{\mathbf{\mathrm{X}}})}^{2}= O_{\mathbb{P}}\left(n^{-\frac{2m_{f}}{2m_{f}+1}}(\log n)^{ \frac{2m_{f}}{2m_{f}+1}\left(D-1+\frac{1}{2(m_{0}+m_{\varepsilon})}\right)} \right). \tag{21}\] Based on Theorem 4.3, tensor polynomial smoothing leads to a convergence rate of tensor RKHS, which is \(O_{\mathbb{P}}(n^{-\frac{2m_{f}}{2m_{f}+1}}(\log n)^{\frac{2m_{f}}{2m_{f}+1} (D-1+\frac{1}{2(m_{0}+m_{\varepsilon})})})\). This convergence rate is almost the same as the optimal convergence rate in the one-dimensional case \(O_{\mathbb{P}}(n^{-\frac{2m_{f}}{2m_{f}+1}})\), differing only by a logarithmic term. Moreover, compared to Theorem 4.1, Theorem 4.3 has less stringent requirements for tensor polynomial smoothing when Assumption 4.7 holds. Specifically, Theorem 4.3 allows for \(m_{\varepsilon}\) to be a constant as long as \(m_{\varepsilon}+m_{0}\geq m_{f}\), whereas Theorem 4.1 requires \(m_{\varepsilon}\) to be comparable to \(\log n\). Additionally, while the smoothing scale \(\sigma_{n}\) in Theorem 4.1 demands careful selection, Theorem 4.3 permits a constant smoothing scale \(\sigma_{n}\). These differences suggest that the tensor RKHS has a simpler structure than the RKHS even in a low intrinsic dimension space. The convergence rate in Theorem 4.3 does not depend on the low intrinsic dimension of \(\Omega\), and is almost dimension-free. Moreover, because the power of the logarithmic term in (21) decreases as \(m_{\varepsilon}\) increases, the convergence rate in Theorem 4.3 decreases as \(m_{\varepsilon}\) increases, encouraging the use of a smoother tensor polynomial smoothing for faster convergence. This aligns with the results in Theorem 4.1 and Theorem 4.2, as Gaussian smoothing may yield faster convergence rates than polynomial smoothing. Few studies have explored tensor RKHSs with early stopping, and our findings can provide valuable insights into this area. **Remark 4.4**.: _For any \(\mathcal{W}^{m_{f}}(\mathbb{R}^{D})\) with \(m_{f}>D/2\), there exist \(m^{*}>1/2\) such that \(\mathcal{W}^{m_{f}}(\mathbb{R}^{D})\hookrightarrow\mathcal{M}\mathcal{W}^{m^{ *}}(\mathbb{R}^{D})\) and \(\mathcal{M}\mathcal{W}^{m^{*}}(\mathbb{R}^{D})\hookrightarrow\mathcal{C}( \mathbb{R}^{D})\). Thus, the capacity of \(\mathcal{M}\mathcal{W}^{m^{*}}(\mathbb{R}^{D})\) is high-enough for any approximation problem which can be solved by assuming that the underlying true function lies in some Sobolev space._ Numerical Studies In this section, we enhance our theoretical findings by experimentally validating the effectiveness of the random smoothing kernel with data augmentation and early stopping on synthetic datasets. We focus on three data spaces with dimensions \(D=1,2,3\), as illustrated in Figure 1, where \(\mathbf{x}_{j}\) samples are uniformly drawn. In our experiments, the underlying function \(f^{*}\) is obtained by drawing random sample paths from the Gaussian process with the Matern covariance function. This covariance function is widely used in Gaussian process modeling. 
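Before specifying the covariance, we note that the three data spaces can be sampled in a few lines; the sketch below is our own illustration (the unit interval, unit circle and unit sphere are assumed domains, since the text only names "line", "circle" and "sphere"), drawing covariate locations uniformly so that the intrinsic dimension \(d\) is 1, 1 and 2 while the ambient dimension \(D\) is 1, 2 and 3.

```python
import numpy as np

def sample_data_space(n, D, seed=None):
    """Draw n points uniformly from the line (D=1), circle (D=2) or sphere (D=3)."""
    rng = np.random.default_rng(seed)
    if D == 1:
        return rng.uniform(0.0, 1.0, size=(n, 1))          # unit interval
    if D == 2:
        theta = rng.uniform(0.0, 2 * np.pi, size=n)        # uniform angle on the unit circle
        return np.stack([np.cos(theta), np.sin(theta)], axis=1)
    if D == 3:
        z = rng.standard_normal(size=(n, 3))               # normalised Gaussians are uniform on the sphere
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    raise ValueError("D must be 1, 2 or 3")

X = sample_data_space(200, D=3, seed=0)                     # e.g. 200 locations on the sphere
```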
We adopt the Matern covariance function with the following form: \[K_{\nu}(\mathbf{x})=\sigma^{2}\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\frac{ \|\mathbf{x}\|_{2}}{\rho}\right)^{\nu}B_{\nu}\left(\sqrt{2\nu}\frac{\|\mathbf{x}\|_{2} }{\rho}\right), \tag{22}\] where \(\sigma,\rho,\nu>0\), \(\Gamma\) is the Gamma function, and \(B_{\nu}\) is the modified Bessel function of the second kind. In order to make \(f^{*}\) smoother, we set the smoothness parameter \(\nu=5.0\) for the Matern kernel (22). The errors \(\epsilon_{j}\) are i.i.d. Gaussian with mean zero and variance \(0.01\). We utilize two-hidden-layer neural networks with ReLU activation (Nair and Hinton, 2010) as our predictor. Each hidden layer of the neural network comprises 100 nodes, and all weights are initialized using Kaiming Initialization (He et al., 2015). For random smoothing, we experiment with both non-smooth Laplace noise and smooth Gaussian noise. To be precise, each element of \(\mathbf{\varepsilon}_{k}\) is randomly sampled from either \(\mathcal{N}(0,\sigma^{2})\) or \(Laplace(0,b)\). For more experiment details and additional results, we refer to Appendix N.

Figure 1: Simulated data spaces in the forms of: line (\(D=1\)), circle (\(D=2\)) and sphere (\(D=3\)).

Figure 2 presents a visualization of the underlying truth (blue curve), training data (blue dots), and neural network predictions (orange dots) when the training size is 50. The underlying truth is smooth since we use a smooth kernel. However, the neural network predictions without random smoothing are not smooth due to the low smoothness of the ReLU activation function and tend to overfit the noise. Upon applying random smoothing, the neural network predictions become smoother and approach the underlying truth. Figure 3 and Figure 4 further show the underlying truth (blue curve), training data (blue dots), and neural network predictions (orange dots) when the training size is 100 and 200, respectively. Although increasing the training size improves smoothness in cases like size 200 with weight decay, the fitted curve still experiences a perturbation from overfitted noise compared to examples where random smoothing is applied.

Figure 2: Visualization of the underlying truth (blue curve), training data (blue dots), and neural network predictions (orange dots) when training size is 50, where the first and second rows represent cases with weight decay and early stopping, respectively. It is obvious to see that the optimization without random smoothing will be more vulnerable to noise.

Figure 3: Underlying truth (blue curve), training data (blue dots), and neural network predictions (orange dots) when training size is 100.

Table 1 presents a summary of the test \(l_{2}\) loss under different settings. Both Gaussian smoothing and polynomial smoothing (random smoothing with Laplacian noise) improve the \(l_{2}\) loss in all settings, demonstrating the effectiveness of random smoothing. Figure 5 further investigates how the \(l_{2}\) loss changes concerning the smoothing scale \(\sigma_{n}\) when \(D=1\). The plot shows a U-shaped curve, indicating that an optimal smoothing scale can minimize the \(l_{2}\) loss, while either smaller or larger values will result in a larger \(l_{2}\) loss. It is worth noting that when the training size is small, such as size 50, the U-shape curve in Figure 5 may be less distinct due to noise introduced by early stopping based on a small validation set.
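To summarise the data-generation and augmentation pipeline described above in code, the following sketch (our own Python illustration, not the authors' code) evaluates the Matern covariance (22), draws one Gaussian-process sample path for \(f^{*}\), adds i.i.d. noise with variance 0.01, and builds \(N\) noise-injected augmentations per input using either Gaussian or Laplace noise; the length-scale, jitter level, smoothing scale and seeds are arbitrary choices of ours.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_cov(X, nu=5.0, sigma2=1.0, rho=1.0):
    """Matern covariance matrix for the rows of X, following the form of (22)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = np.sqrt(2 * nu) * r / rho
    K = sigma2 * (2 ** (1 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    K[r == 0.0] = sigma2                        # the limit of the expression as r -> 0
    return K

def draw_f_star(X, rng, nu=5.0):
    """One Gaussian-process sample path of f* at the locations X."""
    K = matern_cov(X, nu=nu) + 1e-8 * np.eye(len(X))   # tiny jitter for numerical stability
    return np.linalg.cholesky(K) @ rng.standard_normal(len(X))

def augment(X, N, scale, kind="gaussian", seed=None):
    """N noise-injected copies of each row of X (the random smoothing augmentation)."""
    rng = np.random.default_rng(seed)
    shape = (N,) + X.shape
    eps = rng.normal(0.0, scale, shape) if kind == "gaussian" else rng.laplace(0.0, scale, shape)
    return X[None, :, :] + eps                  # shape (N, n, D)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 1))        # e.g. the one-dimensional data space
y = draw_f_star(X, rng) + np.sqrt(0.01) * rng.standard_normal(len(X))
X_aug = augment(X, N=20, scale=0.05, kind="laplace", seed=1)
```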
Another observation from Figure 5 is that the optimal smoothing scales exhibit a decreasing trend as the sample size increases, as indicated by Theorem 4.1 and Theorem 4.2. Additionally, Figure 6 and Figure 7 depict the U-shaped curves of \(l_{2}\) loss changes concerning smoothing scale when \(D=2\) and \(D=3\), respectively. While it is possible that some red points may not be accurately placed due to a small validation set, the optimal smoothing scales exhibit a decreasing trend with respect to training size, which is consistent with the trend observed in \(D=1\) as depicted in Figure 5.

Figure 4: Underlying truth (blue curve), training data (blue dots), and neural network predictions (orange dots) when training size is 200.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dim} & \multirow{2}{*}{Type} & \multicolumn{3}{c|}{With weight decay} & \multicolumn{3}{c|}{Early stopping} \\ \cline{3-8} & & \multicolumn{3}{c|}{Training size} & \multicolumn{3}{c|}{Training size} \\ \cline{3-8} & & 50 & 100 & 200 & 50 & 100 & 200 \\ \hline \multirow{3}{*}{D=1} & G & 1.7466e-03 & 9.8343e-04 & 9.1924e-04 & 1.3468e-03 & 7.5579e-04 & 5.8775e-04 \\ & L & 1.6765e-03 & 9.3367e-04 & 8.2806e-04 & 2.0638e-03 & 9.2128e-04 & 6.5118e-04 \\ & N & 1.9381e-03 & 1.3045e-03 & 1.1135e-03 & 2.2168e-03 & 1.2985e-03 & 8.4292e-04 \\ \hline \multirow{3}{*}{D=2} & G & 6.4208e-03 & 3.1423e-03 & 2.1842e-03 & 6.7205e-03 & 3.5027e-03 & 1.7132e-03 \\ & L & 6.4676e-03 & 2.9491e-03 & 2.2136e-03 & 8.2725e-03 & 3.9418e-03 & 1.7674e-03 \\ & N & 9.2474e-03 & 4.5782e-03 & 2.5810e-03 & 1.2628e-02 & 6.2301e-03 & 3.1396e-03 \\ \hline \multirow{3}{*}{D=3} & G & 1.6498e-02 & 7.2578e-03 & 3.9938e-03 & 1.4852e-02 & 7.1306e-03 & 3.7147e-03 \\ & L & 1.6599e-02 & 6.9336e-03 & 4.4334e-03 & 1.5167e-02 & 6.6471e-03 & 3.8615e-03 \\ \cline{1-1} & N & 2.0987e-02 & 8.1158e-03 & 4.5752e-03 & 2.0178e-02 & 8.4932e-03 & 4.9460e-03 \\ \hline \end{tabular} \end{table} Table 1: Test \(l_{2}\) loss of SGD with early stopping. “G”, “L”, and “N” correspond to random smoothing with Gaussian noise, random smoothing with Laplacian noise, and no random smoothing. The smallest losses are underlined.

Figure 5: Loss changes according to smoothing scale with training size increase from 50 to 200 in 1d data space. The red points represent the optimal smoothing scales selected based on the validation set.

Figure 6: Loss changes according to smoothing scale with training size increase from 50 to 200 in 2d data space. The red points represent the optimal smoothing scales selected based on the validation set.

Figure 7: Loss changes according to smoothing scale with training size increase from 50 to 200 in 3d data space. The red points represent the optimal smoothing scales selected based on the validation set.

## 6 Conclusions and Discussion

This work studies random smoothing kernel and random smoothing regularization, which have a natural relationship with data augmentations. We consider two cases: when the region \(\Omega\) has a low intrinsic dimension, or when the kernel function can be presented as a product of one-dimensional kernel functions. In both cases, we show that by applying random smoothing, with appropriate early stopping and/or weight decay techniques, the resulting estimator can achieve fast convergence rates, regardless of the kernel function used in the construction of the random smoothing kernel estimator.

There are several directions that could be pursued in future research. First, while we consider noise injection to construct augmentations and use non-smooth noise to interpret practical non-smooth augmentation techniques, such as random crop, random mask, and random flip, this interpretation may not be perfect. For example, the behavior of adding noise may differ from that of random crop. Furthermore, these practical techniques may also introduce some prior knowledge on the geometry of the low intrinsic dimension. A sharper characterization of practical augmentation techniques is needed and will be pursued in future work. Second, while we consider gradient descent, we believe that our results can be generalized to the stochastic gradient descent method. However, the discussion of the latter is beyond the scope of the current work. Third, we mainly consider regression in this work, where the square loss is a natural choice. An interesting extension is to study whether the results remain true when considering classification, which requires the study of other loss functions, such as cross-entropy loss and hinge loss.
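For completeness, the training scheme analysed above — gradient descent on kernel coefficients with step size \(\beta\), optional weight decay \(\alpha\), and early stopping after \(t\) iterations — can be sketched as follows. This is a generic illustration rather than the estimator \(f_{t}\) of (10) itself: the kernel, the data, and all constants below are placeholders of ours, and only the update rule and the stability condition \(\beta\eta_{1}(\mathbf{K})+\alpha<1\) from Remark 4.1 are taken from the text.

```python
import numpy as np

def kernel_gd(K, y, beta, alpha=0.0, t_max=1000):
    """Gradient descent on kernel coefficients c with optional weight decay alpha.

    Update: c <- c - beta * (K @ c - y) - alpha * c.  The iteration is stable whenever
    beta * lambda_max(K) + alpha < 1, mirroring the condition in Remark 4.1.
    """
    c = np.zeros(len(y))
    for _ in range(t_max):              # early stopping = a fixed iteration budget t
        c = c - beta * (K @ c - y) - alpha * c
    return c

# Toy usage with a plain Gaussian kernel standing in for the random smoothing kernel K_S.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(100)
K = np.exp(-(X - X.T) ** 2 / 0.02)
beta = 1.0 / (2 * len(y))               # beta = C1 / n with C1 <= (2 sup K_S)^(-1); sup K_S = 1 here
c_plain = kernel_gd(K, y, beta, alpha=0.0, t_max=5000)   # long, polynomial-in-n budget
c_decay = kernel_gd(K, y, beta, alpha=1e-3, t_max=300)   # far fewer iterations with weight decay
f_hat = K @ c_decay                     # fitted values at the training points
```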
2304.03618
The stable conjugation-invariant word norm is rational in free groups
We establish the rationality of the stable conjugation-invariant word norm on free groups and virtually free Coxeter groups.
Henry Jaspars
2023-04-07T12:31:54Z
http://arxiv.org/abs/2304.03618v2
# The stable conjugation-invariant word norm is rational in free groups ###### Abstract. We establish the rationality of the stable conjugation-invariant word norm on free groups and virtually free Coxeter groups. Key words and phrases: Bi-invariant word metric, Stable word length, Context-Free Language, Semilinear sets, Parikh's Theorem 2020 Mathematics Subject Classification: Primary: 20F65, 20E05; Secondary: 68R15, 68Q45 ## 1. Introduction A _norm_ on a group \(G\) is a non-negative function \(\nu\colon G\to\mathbf{R}\) such that for all \(g,h\in G\), 1. \(\nu(g)=0\) if and only if \(g=1\); 2. \(\nu(gh)\leq\nu(g)+\nu(h)\). Moreover, if \(\nu(ghg^{-1})=\nu(h)\) for all \(g,h\in G\), then the norm is called _conjugation-invariant_. Examples of conjugation-invariant norms include the Hofer norm on the group of Hamiltonian diffeomorphisms of a symplectic manifold [13], the commutator length [3], and other verbal norms [5], as well as word norms associated with generating sets invariant under conjugations [2]. The _stable (or translation) length_ of \(g\in G\) with respect to \(\nu\) is the limit \[\tau_{\nu}(g):=\lim_{k\to\infty}\frac{\nu(g^{k})}{k}.\] In this paper we prove that the stable length with respect to the conjugation-invariant word norm on certain virtually free groups is rational. More precisely, throughout, let \(G\) be a group with symmetric generating set \(S\). Let \(\|g\|_{S}\) denote the _conjugation-invariant word norm_ associated with \(\bar{S}=\bigcup_{s\in S}\operatorname{Conj}(s)\), where \(\operatorname{Conj}(s)\) denotes the conjugacy class of \(s\). Define the _cancellation length_ of a word with characters in \(S\) to be the minimum number of letters which need to be deleted to obtain a word representing the trivial element. Then, if \(G\) is a free group, \(\|g\|_{S}\) is equal to the cancellation length of any word with characters in \(S\) representing \(g\). The question of rationality for stable word norms over finitely presented groups is a subject of interest in geometric group theory and analysis. In [4], Calegari showed that the stable commutator norm over free groups is rational using techniques from topology and geometry. It is also known that rationality of the stable commutator norm does not hold for general finitely presented groups [18], proving in the negative a conjecture of Gromov [10]. This motivates the analogous study of the stable conjugation-invariant word norm, for which we prove the following result. **Theorem 1.1**.: _Let \(G\) be a virtually free group with generating set \(S\), such that the conjugation-invariant norm is equal to the cancellation norm with respect to \(S\). Further let \(g_{1},\ldots,g_{n}\in G\). Then the sequence \((\|g_{1}^{k}\ldots g_{n}^{k}\|_{S})_{k}\) is uniformly semi-arithmetic in the sense of Definition 2.1. Moreover, if \((\|g_{1}^{k}\ldots g_{n}^{k}\|_{S})_{k}\) has period \(m>0\) and difference \(d\geq 0\), then_ \[\tau_{S}(g_{1},\ldots,g_{n}):=\lim_{k\to\infty}\frac{\|g_{1}^{k}\ldots g_{n}^ {k}\|_{S}}{k}\] _exists and is equal to \(\frac{d}{m}\in\mathbf{Q}\)._ **Example 1.2**.: Theorem 1.1 applies to free groups and virtually free Coxeter groups by [2, Erratum], which are characterised by [7, Proposition 8.8.5]. In particular, the theorem applies to the free product \(\mathbf{Z}/2\mathbf{Z}*\cdots*\mathbf{Z}/2\mathbf{Z}\) of cyclic groups of order two.
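To make the cancellation length concrete in the basic free-group case, the following sketch (a Python illustration of ours, not part of the paper) computes it by dynamic programming over subwords: a kept first letter must cancel against a later inverse letter, with the two enclosed blocks reducing to the identity independently. Here a generator is written as a lowercase character and its inverse as the corresponding uppercase character. Evaluating it on powers such as \(w^{k}\) gives a direct way to observe the behaviour asserted in Theorem 1.1.

```python
from functools import lru_cache

def cancellation_length(word):
    """Minimum number of deletions so that the remaining word freely reduces to the identity.

    In a free group this equals the conjugation-invariant word norm ||w||_S; a lowercase
    letter and its uppercase counterpart are treated as mutually inverse generators.
    """
    def inverse(a, b):
        return a != b and a.lower() == b.lower()

    @lru_cache(maxsize=None)
    def cost(i, j):                      # minimal deletions inside the slice word[i:j]
        if i >= j:
            return 0
        best = 1 + cost(i + 1, j)        # option 1: delete the first letter of the slice
        for k in range(i + 1, j):        # option 2: match it with an inverse letter at k
            if inverse(word[i], word[k]):
                best = min(best, cost(i + 1, k) + cost(k + 1, j))
        return best

    return cost(0, len(word))

# ||w^k||_S for w = a b a^{-1} b^{-1}, k = 1..6; by Theorem 1.1 this sequence is
# uniformly semi-arithmetic.
print([cancellation_length("abAB" * k) for k in range(1, 7)])
```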
_Acknowledgements._ The author would like to offer thanks to both Jarek Kedra and Assaf Libman, without whose indispensible advice, time and support this paper would not have been possible. ## 2. Semi-arithmetic sequences Let \(\widehat{\mathbf{N}}:=\mathbf{N}\cup\{\infty\}\) with the usual total order, where \(x+\infty=\infty+x=\infty\) for any \(x\in\widehat{\mathbf{N}}\). Moreover, let an _arithmetic sequence with difference \(d\geq 0\)_ have the usual meaning. Observe that the constant infinite sequence \((a_{k})_{k}\) where \(a_{k}=\infty\) for \(k\geq 0\) is arithmetic with difference \(d\) for any \(d\geq 0\). **Definition 2.1**.: Let \((a_{k})_{k}\) be a sequence in \(\widehat{\mathbf{N}}\). Then, it is: 1. _Eventually finite_ if \(a_{k}\in\mathbf{N}\) for all \(k\) sufficiently large; 2. _Eventually arithmetic_ if \((a_{k+n})_{k}\) is arithmetic for some \(n\geq 0\); 3. _Semi-arithmetic_ with period \(m>0\) if for all \(n\geq 0\), the sequences \((a_{mk+n})_{k}\) are eventually arithmetic with differences \(d_{n}\) 4. _Uniformly semi-arithmetic_ with period \(m>0\) and difference \(d\geq 0\) if for all \(n\geq 0\), the sequences \((a_{mk+n})_{k}\) are eventually arithmetic with difference \(d\). _Remark 2.2_.: Let \((a_{k})_{k}\) be a semi-arithmetic sequence with period \(m\). Since, for all \(n\geq 0\), \((a_{mk+n+m})_{k}\) is a tail of \((a_{mk+n})_{k}\), it follows that \(d_{n}=d_{n+m}\) for all \(n\geq 0\). The following two results are immediate consequences of Definition 2.1: **Proposition 2.3**.: _Let \((a_{k})_{k}\) and \((a_{k}^{\prime})_{k}\) be semi-arithmetic sequences with periods \(m\) and \(m^{\prime}\) respectively. Then the sequence \((\min\{a_{k},a_{k}^{\prime}\})_{k}\) is semi-arithmetic with period \(mm^{\prime}\). _ **Proposition 2.4**.: _Let \((a_{k})_{k}\) be a eventually finite uniformly semi-arithmetic sequence with period \(m>0\) and difference \(d\geq 0\). Then_ \[\lim_{k\to\infty}\frac{a_{k}}{k}\] _exists and is equal to \(\frac{d}{m}\in\mathbf{Q}\). _ **Lemma 2.5**.: _Let \((a_{k})_{k}\) be semi-arithmetic. Suppose there exists \(D\geq 0\) such that \(a_{k+1}\leq a_{k}+D\) for all \(k\geq 0\). Then \((a_{k})_{k}\) is uniformly semi-arithmetic._ Proof.: If \(a_{k}=\infty\) for all \(k\geq 0\), the result is immediate. Otherwise, \((a_{k})_{k}\) is eventually finite. Hence, without loss of generality, assume \(a_{k}<\infty\) for all \(k\). Let \((a_{k})_{k}\) have period \(m>0\). By Definition 2.1, for all \(n\geq 0\), the sequences \((a_{mk+n})_{k}\) are arithmetic with some difference \(d_{n}\geq 0\). By the hypothesis, the sequence \((a_{mk+n+1}-a_{mk+n})_{k}\) is bounded above by \(D\). However, observe that \((a_{mk+n+1}-a_{mk+n})_{k}\) is eventually arithmetic with difference \(d_{n+1}-d_{n}\). Therefore, \(d_{n+1}-d_{n}\leq 0\), and hence by Remark 2.2, \[d_{0}\geq d_{1}\geq\ldots\geq d_{m}=d_{0}.\] Hence by Remark 2.2, \((d_{n})_{n}\) is constant, and \((a_{k})_{k}\) is uniformly semi-arithmetic. ## 3. Semilinear sets Let \(\mathbf{N}^{X}\) denote the free abelian monoid over \(X\), or equivalently, the set of all functions \(X\to\mathbf{N}\). **Definition 3.1** (Compare [12, Definition 3]).: A set \(\Omega\subseteq\mathbf{N}^{X}\) is called _linear_ if it is of the form \(v+M\) where \(v\in\mathbf{N}^{X}\) and \(M\subseteq\mathbf{N}^{X}\) is a finitely-generated submonoid of \(\mathbf{N}^{X}\). It called is _semilinear_ if it is a union of finitely many linear subsets of \(\mathbf{N}^{X}\). 
The following propositions are immediate consequences of Definition 3.1: **Proposition 3.2**.: _Let \(\phi\colon\mathbf{N}^{X}\to\mathbf{N}^{Y}\) be an homomorphism. Then if \(\Omega\subseteq\mathbf{N}^{X}\) is semilinear, \(\phi(\Omega)\subseteq\mathbf{N}^{Y}\) is also semilinear. _ **Proposition 3.3**.: _If \(\Omega_{X}\subseteq\mathbf{N}^{X}\) and \(\Omega_{Y}\subseteq\mathbf{N}^{Y}\) are semilinear, then \(\Omega_{X}\times\Omega_{Y}\subseteq\mathbf{N}^{X}\times\mathbf{N}^{Y}\) is also semilinear. _ We also state the following seminal result in the study of semilinear sets. **Lemma 3.4** (Liu-Weiner, [12, Theorem 1]).: _The family of semilinear subsets of \(\mathbf{N}^{X}\) is closed under intersections. _ **Example 3.5**.: Let \(X\) be a finite set, and let \(\mathbf{1}_{X}\colon X\to\mathbf{N}\) denote the characteristic function of \(X\). Then, define the _diagonal over \(X\)_ to be the set \(\Delta_{X}:=\{k\cdot\mathbf{1}_{X}:k\in\mathbf{N}\}\subseteq\mathbf{N}^{X}\). In particular, observe that \(\Delta_{X}\) is semilinear. **Definition 3.6**.: Let \(\Omega\subseteq\mathbf{N}^{2}\). For \(k\in\mathbf{N}\), let \(\Omega_{(k)}:=\{\ell:(k,\ell)\in\Omega\}\). Then the _lower envelope_ of the set \(\Omega\) is the sequence \[\operatorname{Env}(\Omega):=(\inf\Omega_{(k)})_{k},\] where, by convention, the infimum of the empty set is \(\infty\). The following proposition follows immediately from Definition 3.6. **Proposition 3.7**.: _Let \(\Omega,\Omega^{\prime}\subseteq\mathbf{N}^{2}\), and let \((a_{k})_{k}=\operatorname{Env}(\Omega)\) and \((a^{\prime}_{k})_{k}=\operatorname{Env}(\Omega^{\prime})\) respectively. Then \(\operatorname{Env}(\Omega\cup\Omega^{\prime})=(\min\{a_{k},a^{\prime}_{k}\})_ {k}\). _ **Proposition 3.8**.: _Let \(\Omega\) be the submonoid of \(\mathbf{N}^{2}\) generated by a finite set \(V=\{(x_{\alpha},y_{\alpha})\}_{\alpha\in A}\subseteq\mathbf{N}^{2}\). Set \(A^{\prime}:=\{\alpha\in A:x_{\alpha}\neq 0\}\) and \(\Omega^{\prime}\) be the submonoid generated by \(\{(x_{\alpha},y_{\alpha})\}_{\alpha\in A^{\prime}}\). Then \(\operatorname{Env}(\Omega)=\operatorname{Env}(\Omega^{\prime})\)._ Proof.: Since \(\Omega^{\prime}\subseteq\Omega\), it follows that \(\inf\Omega_{(k)}\leq\inf\Omega^{\prime}_{(k)}\). If \(\inf\Omega_{(k)}=\infty\), then clearly \(\inf\Omega_{(k)}=\inf\Omega^{\prime}_{(k)}\). Otherwise, without loss of generality, let \(k\geq 0\) be such that \(\inf\Omega_{(k)}<\infty\). Then, there exists \(\lambda_{\alpha}\in\mathbf{N}\) for all \(\alpha\in A\) such that \(\sum_{\alpha\in A}\lambda_{\alpha}x_{\alpha}=k\) and \(\sum_{\alpha\in A}\lambda_{\alpha}y_{\alpha}=\inf\Omega_{(k)}\). Since \(\sum_{\alpha\in A^{\prime}}\lambda_{\alpha}y_{\alpha}\) is an element of \(\Omega^{\prime}_{(k)}\), and \(\sum_{\alpha\in A^{\prime}}\lambda_{\alpha}y_{\alpha}=\sum_{\alpha\in A} \lambda_{\alpha}y_{\alpha}\), by minimality, it follows that \(\inf\Omega^{\prime}_{(k)}\leq\inf\Omega_{(k)}\) for all \(k\), from which equality follows. We now state the following classical lemma. **Lemma 3.9** (Frobenius, [17, Theorem 3.15.2]).: _Let \(x_{1},\ldots,x_{n}\in\mathbf{N}\), and let \(g=\gcd(x_{1},\ldots,x_{n})\). Then there exists \(N\geq 0\) such that for all \(k\geq N\) with \(g\mid k\), there exists \(\lambda_{1},\ldots,\lambda_{n}\in\mathbf{N}\) such that_ \[k=\sum_{i}\lambda_{i}x_{i}.\qed\] We arrive at the main result of this section, whose proof is adapted and simplified from [9, Theorem]. 
**Lemma 3.10**.: _Let \(\Omega\subseteq\mathbf{N}^{2}\) be the monoid generated by a finite set \(V\subseteq\mathbf{N}^{2}\). Then \(\operatorname{Env}(\Omega)\) is uniformly semi-arithmetic._ Proof.: Let \(V:=\{(x_{i},y_{i}):1\leq i\leq n\}\), and let \((a_{k})_{k}:=\operatorname{Env}(\Omega)\). By Proposition 3.8, we may assume that \(x_{i}\neq 0\) for all \(i\), and as the result is trivial if \(n=0\), we may assume \(n\geq 1\). By reordering the generating set, we may also assume that \[\frac{y_{1}}{x_{1}}\leq\frac{y_{i}}{x_{i}}\] for all \(1\leq i\leq n\). First, consider the case when \(\gcd(x_{1},\ldots,x_{n})=1\). By Lemma 3.9, there exists \(N\geq 0\) such that for all \(k\geq N\), \(a_{k}<\infty\). Let \(k\geq 0\) be such that \(a_{k}<\infty\), and choose \(\lambda_{i}\in\mathbf{N}\) for all \(1\leq i\leq n\) such that \(\sum_{i}\lambda_{i}x_{i}=k\) and \(\sum_{i}\lambda_{i}y_{i}=a_{k}\). Then \[a_{k}=\sum_{i}\lambda_{i}y_{i}=\sum_{i}\lambda_{i}x_{i}\cdot\frac{y_{i}}{x_{i }}\geq\sum_{i}\lambda_{i}x_{i}\cdot\frac{y_{1}}{x_{1}}=k\cdot\frac{y_{1}}{x_{1 }}. \tag{3.1}\] Define \(\lambda_{i}^{\prime}\) by \[\lambda_{i}^{\prime}:=\begin{cases}\lambda_{i}+1&i=1\\ \lambda_{i}&i>1.\end{cases}\] Then \(\sum_{i}\lambda_{i}^{\prime}x_{i}=k+x_{1}\) and \(\sum_{i}\lambda_{i}^{\prime}y_{i}=a_{k}+y_{1}\). By the minimality of \(a_{k}\), \[a_{k+x_{1}}\leq a_{k}+y_{1}. \tag{3.2}\] Let \(N\leq\ell<N+x_{1}\). Consider the sequence \[(b_{j})_{j\geq 0}:=\left(a_{\ell+x_{1}j}-(\ell+x_{1}j)\cdot\frac{y_{1}}{x_{1}} \right)_{j\geq 0}.\] Observe that \((b_{j})_{j}\) is decreasing, since by (3.2) \[b_{j+1}-b_{j}=a_{\ell+x_{1}j+x_{1}}-a_{\ell+x_{1}j}-y_{1}\leq 0\] for all \(j\geq 0\). Moreover, by (3.1), \(b_{j}\geq 0\). Therefore \((b_{j})_{j}\) is eventually constant. Hence there exists a constant \(c\) such that for all \(j\) sufficiently large, \[a_{\ell+x_{1}j}=c+y_{1}j.\] Therefore \((a_{\ell+x_{i}j})_{j}\) is eventually arithmetic with difference \(y_{1}\). Since \(N\leq\ell<N+x_{1}\) was arbitrary, it follows that \((a_{k})_{k}\) is uniformly semi-arithmetic with period \(x_{1}\) and difference \(y_{1}\). In the general case, where \(\gcd(x_{1},\ldots,x_{n})=g>1\), set \(x_{i}^{\prime}:=x_{i}/g\), let \(\Omega^{\prime}\) be the submonoid generated by \(V^{\prime}:=\{(x_{i}^{\prime},y_{i}):1\leq i\leq n\}\). Since \(\gcd(x_{1}^{\prime},\ldots,x_{n}^{\prime})=1\) and let \((a_{k}^{\prime})_{k}:=\operatorname{Env}(\Omega^{\prime})\). Then \((a_{k})_{k}\) is uniformly semi-arithmetic with period \(x_{1}^{\prime}\) and difference \(y_{1}\). Clearly, \[a_{k}^{\prime}=\left\{\begin{array}{ll}a_{k/g}&\text{if $g\mid k$}\\ \infty&\text{otherwise}.\end{array}\right.\] Hence, if \(g\mid k\), then \((a_{k+x_{1}j})_{j}=(a_{k/g+x_{i}^{\prime}j}^{\prime})_{j}\) is eventually arithmetic with difference \(y_{1}\). Otherwise, \(a_{k+x_{1}j}=\infty\) for all \(j\geq 0\), and hence \((a_{k+x_{1}j})_{j}\) is an arithmetic sequence in \(\widehat{\mathbf{N}}\). It follows that \((a_{k})_{k}\) is uniformly semi-arithmetic with period \(x_{1}\) and difference \(y_{1}\). **Corollary 3.11**.: _Let \(\Omega\subseteq\mathbf{N}^{2}\) be semilinear. Then \(\operatorname{Env}(\Omega)\) is semi-arithmetic._ Proof.: Let \(M\) be a finitely generated submonoid of \(\mathbf{N}^{2}\). By Lemma 3.10, \((a_{k})_{k}=\operatorname{Env}(M)\) is a uniformly semi-arithmetic sequence in \(\mathbf{N}\). 
It can be seen that if \(v=(x_{0},y_{0})\in\mathbf{N}^{2}\), then for the linear set \(v+M\), \[\operatorname{Env}(v+M)_{k}=\begin{cases}\infty&\text{if $k<x_{0}$}\\ a_{k-x_{0}}+y_{0}&\text{otherwise}.\end{cases}\] Therefore \(\operatorname{Env}(v+M)\) is uniformly semi-arithmetic. Since a semilinear subset of \(\mathbf{N}^{2}\) is the union of finitely many linear sets, the result follows by repeated application of Proposition 3.7 and Proposition 2.3. Observe that, even if \((a_{k})_{k}\) and \((a_{k}^{\prime})_{k}\) are uniformly semi-arithmetic, is is possible that the sequence \((\min\{a_{k},a_{k}^{\prime}\})_{k}\) is non-uniformly semi-arithmetic. Thus, Corollary 3.11 cannot be improved to uniform semi-arithmeticity. ## 4. Formal Languages Let \(X^{*}\) denote the free monoid generated by a set \(X\), or equivalently, the set of all finite sequences of elements in \(X\) with concatenation as monoidal structure. Call the elements of \(X^{*}\)_words_ in the _alphabet_\(X\), and denote the empty word by \(\varepsilon\). For \(w\in X^{*}\), define _length function \(|\cdot|\colon X^{*}\to\mathbf{N}\) to be the unique homomorphism defined on generators by \(|x|=1\) for all \(x\in X\). Observe that \[|w_{1}^{k_{1}}\ldots w_{n}^{k_{n}}|=\sum_{i=1}^{n}k_{i}\cdot|w_{i}|. \tag{4.1}\] A subset \(\mathcal{L}\subseteq X^{*}\) is called a _language_ on the alphabet \(X\). Given languages \(\mathcal{L},\mathcal{M}\subseteq X^{*}\) let \(\mathcal{L}\cdot\mathcal{M}\) denote the language \[\mathcal{L}\cdot\mathcal{M}:=\{uv\,:\,u\in\mathcal{L},v\in\mathcal{M}\}.\] Write \(\mathcal{L}^{n}\) for the \(n\)-fold concatenation \(\mathcal{L}\cdot\ldots\cdot\mathcal{L}\), where by convention \(\mathcal{L}^{0}:=\{\varepsilon\}\). Further define the _Kleene star_ of \(\mathcal{L}\) to be \[\mathcal{L}^{*}:=\bigcup_{n\geq 0}\mathcal{L}^{n}.\] **Definition 4.1**.: Let \(X,Y\) be disjoint finite alphabets. Then the _projection homomorphism_ of free monoids is the homomorphism \[\pi_{X}\colon(X\sqcup Y)^{*}\to X^{*}\] defined on generators by \(\pi_{X}(x)=x\) for all \(x\in X\) and \(\pi_{X}(y)=\varepsilon\) for all \(y\in Y\). **Definition 4.2**.: Given a word \(w\in X^{*}\), a _subword_ is a subsequence of \(w\). Let \(\mathcal{P}(w)\) denote the finite language consisting of the subwords of \(w\). Observe that, for words \(w_{1},\ldots,w_{n}\in X^{*}\), \[\mathcal{P}(w_{1}\ldots w_{n})=\mathcal{P}(w_{1})\cdot\ldots\cdot\mathcal{P}( w_{n}). \tag{4.2}\] **Definition 4.3**.: Let \(\mathcal{L}\subseteq X^{*}\) be a language. The _cancellation length relative to \(\mathcal{L}\)_ of \(w\in X^{*}\) is defined as \[\|w\|_{\mathcal{L}}:=\inf\{|w|-|u|:u\in\mathcal{P}(w)\cap\mathcal{L}\}.\] Equivalently, \(\|w\|_{\mathcal{L}}\) is the infimum of all \(\ell\) such that it is possible to delete \(\ell\) characters from \(w\) to yield an element of \(\mathcal{L}\). **Definition 4.4** (Compare [11, Section 3.1.1], [1, Theorem 3.1]).: The collection of _regular_ languages over the alphabet \(X\) is the smallest collection of languages satisfying the following: 1. If \(\mathcal{L}\subseteq X^{*}\) is finite, then \(\mathcal{L}\) is regular; 2. If \(\mathcal{L}\) and \(\mathcal{M}\) are regular, then \(\mathcal{L}^{*},\mathcal{L}\cup\mathcal{M}\) and \(\mathcal{L}\cdot\mathcal{M}\) are also regular. 
**Definition 4.5** (Compare [11, Section 5.12], [6, Section 19]).: A _context-free grammar_ (CFG) is a quadruple \(G=(V,X,R,v_{0})\), where \(X\subseteq V\) are finite sets of characters, \(v_{0}\in V\setminus X\) and \(R\subseteq(V\setminus X)\times V^{*}\) is a finite set. Let \(\mathcal{L}\subseteq V^{*}\) be the smallest language containing the word \(v_{0}\) such that, if \(w_{0}vw_{1}\in\mathcal{L}\) with \(w_{0},w_{1}\in V^{*}\) and \(v\in V\setminus X\), and if \((v,v^{\prime})\in R\), then \(w_{0}v^{\prime}w_{1}\in\mathcal{L}\). The language \(\mathcal{L}^{\prime}=\mathcal{L}\cap X^{*}\) is called the _context-free language_ (CFL) generated by \(G\). We will now state some closure properties of CFLs. **Theorem 4.6** ([11, Theorem 7.27]).: _Let \(\mathcal{L}\subseteq Y^{*}\) be a CFL, and let \(\phi\colon X^{*}\to Y^{*}\) be a monoid homomorphism. Then \(\phi^{-1}(\mathcal{L})\) is also a CFL. \(\square\)_ **Theorem 4.7** ([11, Theorem 7.30]).: _If \(\mathcal{R}\subseteq X^{*}\) is regular and \(\mathcal{L}\subseteq X^{*}\) is a CFL then \(\mathcal{R}\cap\mathcal{L}\) is also a CFL. \(\square\)_ **Definition 4.8**.: Let \(\Phi\colon S^{*}\to G\) be the canonical projection with \(\Phi(s)=s\) for all \(s\in S\). The _cancellation language_, \(\mathcal{W}(G,S)\), is defined to be \[\mathcal{W}(G,S)=\Phi^{-1}(1)\subseteq S^{*}.\] We now state the following classical theorem. **Theorem 4.9** (Muller-Schupp, [14, Theorem III], [8, Section 6.1]).: _The language \(\mathcal{W}(G,S)\) is a CFL if and only if \(G\) is a virtually free group. \(\square\)_ **Definition 4.10**.: Let \(X\) be a finite set. The _Parikh homomophism_ is the abelianisation map, denoted \[\psi_{X}\colon X^{*}\to\mathbf{N}^{X}.\] It follows directly from the definition that for any \(w\in X^{*}\), \(\psi(w)(x)\) is equal to the number of occurrences of the letter \(x\) in \(w\). Therefore, for any \(w\in X^{*}\), \[|w|=\sum_{x\in X}\psi_{X}(w)(x). \tag{4.3}\] _Remark 4.11_.: Considering that \(\mathbf{N}^{X\sqcup Y}\cong\mathbf{N}^{X}\times\mathbf{N}^{Y}\), the Parikh homomorphism \(\psi_{X\sqcup Y}\colon(X\sqcup Y)^{*}\to\mathbf{N}^{X\sqcup Y}\) has the form \[\psi_{X\sqcup Y}=(\psi_{X}\circ\pi_{X},\psi_{Y}\circ\pi_{Y}).\] An important result in the theory of CFLs is Parikh's theorem. **Theorem 4.12** (Parikh, [15]).: _Let \(\mathcal{L}\subseteq X^{*}\) be CFL. Then \(\psi_{X}(\mathcal{L})\) is semilinear. \(\square\)_ **Definition 4.13**.: Let \(\mathcal{L}_{1},\ldots,\mathcal{L}_{n}\subseteq X^{*}\) be languages, and further let \(Y=\{y_{1},\ldots,y_{n}\}\) be a set disjoint from \(X\). Define the _enumeration language_ of \(\mathcal{L}_{1},\ldots,\mathcal{L}_{n}\) in \((X\sqcup Y)^{*}\) to be \[\mathcal{E}=(\mathcal{L}_{1}\cdot\{y_{1}\})^{*}\cdot\ldots\cdot(\mathcal{L}_{ n}\cdot\{y_{n}\})^{*}.\] For simplicity, let us identify \(\mathbf{N}^{Y}\) with \(\mathbf{N}^{n}\). Observe that \(\mathcal{E}\) is the disjoint union \[\mathcal{E}=\bigsqcup_{\mathbf{k}\in\mathbf{N}^{Y}}\mathcal{E}_{\mathbf{k}}, \tag{4.4}\] where for every \(\mathbf{k}=(k_{1},\ldots,k_{n})\in\mathbf{N}^{Y}\), \[\mathcal{E}_{\mathbf{k}}=(\mathcal{L}_{1}\cdot\{y_{1}\})^{k_{1}}\cdot\ldots \cdot(\mathcal{L}_{n}\cdot\{y_{n}\})^{k_{n}}\] is the preimage of \(\mathbf{k}\) with respect to \(\psi_{Y}\circ\pi_{Y}\colon\mathcal{E}\to\mathbf{N}^{Y}\). Moreover, \[\pi_{X}(\mathcal{E}_{\mathbf{k}})=\mathcal{L}_{1}^{k_{1}}\cdot\ldots\cdot \mathcal{L}_{n}^{k_{n}}. 
\tag{4.5}\] _Remark 4.14_.: Enumeration languages can be considered as graded unions of combinatorial cubes, in the sense of [16, Section 3]. ## 5. Multivariate stability The purpose of this section is to prove the main result of the paper. **Theorem 5.1**.: _Let \(\mathcal{L}\subseteq X^{*}\) be a CFL, and let \(w_{1},\ldots,w_{n}\in X^{*}\). Then the sequence \((\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{L}})_{k}\) is uniformly semi-arithmetic in the sense of Definition 2.1. In particular, if \((\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{L}})_{k}\) is eventually finite, and has period \(m>0\) and difference \(d\geq 0\), then_ \[\lim_{k\to\infty}\frac{\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{L}}}{k}\] _exists and is equal to \(\frac{d}{m}\in\mathbf{Q}\)._ Proof.: Let \(U\subseteq\mathbf{N}^{2}\) be defined by \[U\colonequals\{(k,|w_{1}^{k}\ldots w_{n}^{k}|-|u|):u\in\mathcal{P}(w_{1}^{k} \ldots w_{n}^{k})\cap\mathcal{L},\,k\in\mathbf{N}\}.\] Observe that by Definition 4.3, \[U_{(k)}=\{|w_{1}^{k}\ldots w_{n}^{k}|-|u|:u\in\mathcal{P}(w_{1}^{k}\ldots w_{n }^{k})\cap\mathcal{L}\},\] and hence by Definition 3.6, \[\operatorname{Env}(U)=(\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{L}})_{k}. \tag{5.1}\] Let \(Y=\{y_{1},\ldots,y_{n}\}\) be disjoint from \(X\), and consider the enumeration language \[\mathcal{R}=(\mathcal{P}(w_{1})\cdot\{y_{1}\})^{*}\cdot\ldots\cdot(\mathcal{ P}(w_{n})\cdot\{y_{n}\})^{*}\subseteq(X\sqcup Y)^{*} \tag{5.2}\] in the sense of Definition 4.13. By Definition 4.4, \(\mathcal{R}\) is a regular language. With \(\pi_{X}\colon(X\sqcup Y)^{*}\to X^{*}\) in the sense of Definition 4.1, set \[\mathcal{M}\colonequals\mathcal{R}\cap\pi_{X}^{-1}(\mathcal{L}). \tag{5.3}\] Theorems 4.6 and 4.7 imply that \(\mathcal{M}\) is a CFL in \((X\sqcup Y)^{*}\). By Theorem 4.12, \[M:=\psi_{X\sqcup Y}(\mathcal{M})\subseteq\mathbf{N}^{X\sqcup Y}\] is a semilinear set. Consider the diagonal \(\Delta_{Y}\subseteq\mathbf{N}^{Y}\) as defined in Example 3.5. Then by Example 3.5 and Proposition 3.3, \(\mathbf{N}^{X}\times\Delta_{Y}\) is a finitely-generated linear subset of \(\mathbf{N}^{X\sqcup Y}\). By Lemma 3.4, \[M_{\Delta}:=M\cap(\mathbf{N}^{X}\times\Delta_{Y})\] is semilinear in \(\mathbf{N}^{X\sqcup Y}\). Now define a monoid homomorphism \[\xi\colon\mathbf{N}^{X\sqcup Y} \to\mathbf{Z}^{2}\] \[(f,\mathbf{k}) \mapsto\left(k_{1},\sum_{i=1}^{n}k_{i}\cdot|w_{i}|-\sum_{x\in X}f (x)\right)\] where we identify \(\mathbf{N}^{X\sqcup Y}\) with \(\mathbf{N}^{X}\times\mathbf{N}^{Y}\). _Claim._\(\xi(M_{\Delta})=U\). _Proof of claim._ Observe that \((k,\ell)\in\xi(M_{\Delta})\) if and only if \[\exists\,u^{\prime}\in\mathcal{M}:\psi_{X\sqcup Y}(u^{\prime})\in\mathbf{N}^{ X}\times\Delta_{Y}\text{ and }\xi(\psi_{X\sqcup Y}(u^{\prime}))=(k,\ell).\] Resolving coordinates, and by both Remark 4.11 and the definition of \(\xi\), this is equivalent to \[\exists\,u^{\prime}\in\mathcal{M}:\sum_{i=1}^{n}k\cdot|w_{i}|-\sum_{x\in X} \psi_{X}(\pi_{X}(u^{\prime}))=\ell\text{ and }\psi_{Y}(\pi_{Y}(u^{\prime}))=k\cdot \mathbf{1}_{Y}.\] By (4.1), (4.3) and (4.4) this is equivalent to \[\exists\,u^{\prime}\in\mathcal{M}:|w_{1}^{k}\dots w_{n}^{k}|-|\pi_{X}(u^{ \prime})|=\ell\text{ and }u^{\prime}\in\mathcal{R}_{k\cdot\mathbf{1}_{Y}}.\] Since \(\mathcal{M}=\mathcal{R}\cap\pi_{X}^{-1}(\mathcal{L})\) and by (4.2) and (4.5), this is equivalent to \[\exists\,u\in\mathcal{P}(w_{1}^{k}\dots w_{n}^{k})\cap\mathcal{L}:|w_{1}^{k} \dots w_{n}^{k}|-|u|=\ell\] which is equivalent to \((k,\ell)\in U\), completing the proof of the claim. 
Finally, since \(M_{\Delta}\) is semilinear in \(\mathbf{N}^{X\sqcup Y}\) and \(\xi\) is a homomorphism of monoids, it follows that \(\xi(M_{\Delta})\) is the union of finitely many linear subsets of \(\mathbf{Z}^{2}\). Since \(\xi(M_{\Delta})=U\) is contained in \(\mathbf{N}^{2}\), this implies that \(U\subseteq\mathbf{N}^{2}\) is semilinear. Hence, by 5.1 and Corollary 3.11, \((\|w_{1}^{k}\dots w_{n}^{k}\|_{\mathcal{L}})_{k}\) is semi-arithmetic. For any \(k\geq 0\) the word \(w_{1}^{k}\ldots w_{n}^{k}\) can be obtained from \(w_{1}^{k+1}\ldots w_{n}^{k+1}\) by deleting \(D:=\sum_{i}|w_{i}|\) symbols. By the definition of \(\|\cdot\|_{\mathcal{L}}\), \[\|w_{1}^{k+1}\ldots w_{n}^{k+1}\|_{\mathcal{L}}\leq\|w_{1}^{k}\ldots w_{n}^{k} \|_{\mathcal{L}}+D.\] Hence, by Lemma 2.5, \((\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{L}})_{k}\) is a uniformly semi-arithmetic sequence. The final statement of the theorem follows from Proposition 2.4. Proof of Theorem 1.1.: Let \(\Phi\colon S^{*}\to G\) be the canonical projection as defined in Definition 4.8. Now consider some \(g_{1},\ldots,g_{n}\in G\) and choose \(w_{1},\ldots,w_{n}\in S^{*}\) such that \(\Phi(w_{i})=g_{i}\). Since the conjugation-invariant length is equal to the cancellation length, \[\|g_{1}^{k}\ldots g_{n}^{k}\|_{S}=\|w_{1}^{k}\ldots w_{n}^{k}\|_{\mathcal{W}( G,S)},\] and in particular, by Theorem 5.1, \((\|g_{1}^{k}\ldots g_{n}^{k}\|_{S})_{k}\) is uniformly semi-arithmetic sequence \((a_{k})_{k}\) with period \(m>0\) and difference \(d\geq 0\). Since \(\|g\|_{S}<\infty\) for all \(g\in G\), the sequence \((\|g_{1}^{k}\ldots g_{n}^{k}\|_{S})_{k}\) is eventually finite, and hence \[\lim_{k\to\infty}\frac{\|g_{1}^{k}\ldots g_{n}^{k}\|_{S}}{k}\] exists and converges to \(\frac{d}{m}\).
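As a small numerical companion to Lemma 3.10 (written by us, not part of the paper), the lower envelope of a finitely generated submonoid of \(\mathbf{N}^{2}\) can be tabulated by an unbounded-knapsack recursion, after which the eventual arithmetic progressions along residue classes are visible directly; the generators below are an arbitrary example.

```python
import math

def envelope(generators, k_max):
    """Env(Omega)_k for k = 0..k_max, where Omega is the submonoid generated by the (x, y) pairs."""
    env = [math.inf] * (k_max + 1)
    env[0] = 0                                   # the empty combination gives (0, 0)
    for k in range(1, k_max + 1):
        for x, y in generators:
            if 0 < x <= k and env[k - x] + y < env[k]:
                env[k] = env[k - x] + y          # extend an optimal representation of k - x
    return env

gens = [(3, 5), (5, 4)]                          # arbitrary example; the pair of minimal slope is (5, 4)
print(envelope(gens, 40))                        # along each residue class mod 5 the values eventually
                                                 # increase by 4, as in Lemma 3.10
```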
2301.01809
Significant Digits: Using Large-Scale Blockchain Data to Predict Fraudulent Addresses
Blockchain systems and cryptocurrencies have exploded in popularity over the past decade, and with this growing user base, the number of cryptocurrency scams has also surged. Given the graphical structure of blockchain networks and the abundance of data generated on these networks, we use graph mining techniques to extract essential information on transactions and apply Benford's Law to extract distributional information on address transactions. We then apply a gradient-boosting tree model to predict fraudulent addresses. Our results show that our method can detect scams with reasonable accuracy and that the features generated based on Benford's Law are the most significant features.
Jared Gridley, Oshani Seneviratne
2023-01-03T17:26:22Z
http://arxiv.org/abs/2301.01809v1
# Significant Digits: Using Large-Scale Blockchain Data to Predict Fraudulent Addresses ###### Abstract Blockchain systems and cryptocurrencies have exploded in popularity over the past decade, and with this growing user base, the number of cryptocurrency scams has also surged. Given the graphical structure of blockchain networks and the abundance of data generated on these networks, we use graph mining techniques to extract essential information on transactions and apply Benford's Law to extract distributional information on address transactions. We then apply a gradient-boosting tree model to predict fraudulent addresses. Our results show that our method can detect scams with reasonable accuracy and that the features generated based on Benford's Law are the most significant features. blockchain, scams, machine learning, data mining, Benford's Law ## I Introduction Over the past decade, the cryptocurrency ecosystem has exploded in every way, from market capitalization to user interaction. In 2013, there were just seven cryptocurrencies, with a market capitalization of about 1.5 billion USD. In March 2022, there were over 10,000 active cryptocurrencies with a total market cap of over 2 trillion USD [1]. With faster, cheaper, and more user-friendly blockchain technology, cryptocurrencies have become more accessible to more people. The rising popularity of cryptocurrencies has piqued the interest of established financial institutions, with asset managers like BlackRock and J.P. Morgan Chase & Co. disclosing virtual currencies on their balance sheets [2]. However, with such a rapidly growing environment, it becomes ripe for malicious users who seek to masquerade a rather useless smart contract as the next moonshot, and unfortunately, many people fall for these traps. The rapid growth of cryptocurrency applications is paralleled by advancements in malicious tactics, particularly with Ponzi Schemes. For example, the most apparent scams on Bitcoin are Ponzi schemes where you send bitcoin to an address, and they promise to double it, often posing as a celebrity on social media [3]. Ponzi schemes are often also characterized by a "rug-pull event" in which the orchestrator will disappear with a majority of the cash flowing through the scheme [4]. However, rug-pull operations are not unique to Ponzi schemes, many other scams have similar events. As Ethereum became popular, new scams appeared that took advantage of its smart contract technology. Bartoletti et al. analyzed the significant aspects that sparked the rise of Ponzi schemes with Ethereum's smart contracts. According to their analysis, the most critical factors for the rise in cryptocurrency scams are the anonymity among smart contract initiators, the immutable presence of malicious smart contracts, and the false sense of security many investors feel when interacting with smart contracts [5]. Unlike centralized fiat currencies backed by a government and law enforcement agencies, cryptocurrencies incur much more responsibility on the user. In 2021, it was reported that over $14 billion was stolen in cryptocurrency scams, up 516% from 2020, with 72% of the stolen funds coming from Decentralized Finance (DeFi) protocols [6]. This sharp rise in scams makes it even more necessary for an identification system that tags scams before users engage in the next so-called "moonshot." As more people use cryptocurrencies, more scammers will seize the opportunity to take advantage of new users in an unfamiliar ecosystem. 
If a user is caught in a Ponzi scheme, there is typically very little support from law enforcement agencies such as the FBI to help bring justice and retribution. With the permanency of transactions and the diversity of DeFi applications, a robust method for flagging potential scams is crucial for the financial security of blockchain-based applications. ### _Challenges_ When building a classifier for cryptocurrency scams, there are two main challenges: 1. **Data Sourcing:** We need a reliable source of scam addresses. To our knowledge, no source exists with such a comprehensive scam dataset. In many cases, smaller datasets exist for addresses associated with Ponzi schemes or phishing attacks but often rely on user reporting, meaning many scams are likely not included. 2. **Scam Categorization:** The transaction patterns on a diverse chain such as Ethereum vary significantly. Transactions include a myriad of patterns with users, smart contracts, Maximal Extractable Value (MEV) bots [7], and token contracts [8] operating on one chain. In many cases, some addresses have irregular patterns similar to scams but are innocent. We seek to avoid mislabeling an innocent address as a scam to encourage a more open, decentralized ecosystem. Scam addresses do, however, have distinctions that allow us to separate them from non-scam addresses. In particular, using obfuscation tools is common among scammers to try and clean their funds. It is important to note that while not all addresses that use obfuscation applications like _mixers_ (or _tumblers_) [9] are scammers, many malicious users use these apps. An example sub-graph is depicted in Figure 1, showing that not all addresses connected to scam addresses are necessarily malicious. While obfuscation techniques were most common on Bitcoin, where there are fewer exchanges to trickle funds through, they have quickly been adopted and built for Ethereum. In this work, we do not use a feature such as _UsedObfuscationTool_ because detecting "coinjoins" and "mixers" on a blockchain is a complicated area of research and development. A particular trait we examine in this work comes from accounting fraud detection. When users make transactions, the first three significant digits follow specific, logarithm-based, non-uniform distributions. When malicious users hack into an account or convince users to send money, they often break these naturally occurring digit distributions, replacing them with a more uniform one [10]. This natural distribution is characterized by "Benford's Law for Anomalous Numbers" [11]. Analysis leveraging Benford's Law has even been admitted as evidence in criminal trials at all levels of court in the United States [12], making Benford's Law particularly interesting with blockchain scams because it is a method that is already accepted by regulatory bodies as credible evidence. ## II Background We outline some concepts pertinent to understanding our research contribution in this section. ### _Phishing Schemes_ _Phishing_ is a social engineering attack that exploits system users to gain unauthorized access or steal funds. Traditional phishing attacks often consisted of a spam email or website that would deceive the recipient into giving up their passwords or personal information by impersonating a legitimate organization [13]. Phishing schemes on Ethereum have multiple avenues for attack. Attackers often target users directly by spreading phishing addresses and false Non-Fungible-Tokens (NFTs) or DeFi information on social media and chat rooms for other projects [14, 15].
Take the Bored Ape Yacht Club1, for example. In 2021, Calvin Becerra, the owner of three Bored Ape NFTs, sent all three to another user address that claimed to be providing technical support. The scammer had stolen over $1 million in NFT assets within this single transaction. While Becerra eventually got some of the money back, he had to transfer funds to the scammer before they were returned [16]. Footnote 1: [https://opensea.io/collection/boredapeyachtclub](https://opensea.io/collection/boredapeyachtclub) A primary challenge with detecting phishing schemes on blockchain networks is that, in many cases, most malicious activity happens off the network. Social engineering tactics often target users with malicious emails and websites, making detecting phishing schemes especially challenging before a user's funds have been stolen. However, many researchers have investigated this problem. Wen et al. developed a phishing detection framework from on-chain transaction data and an adversarial attack framework to verify its robustness [17]. The idea of an adversarial method to improve the framework's robustness is significant, although the authors also emphasize the difficulty of developing phishing detection. ### _Ponzi Schemes_ Ponzi schemes are often characterized by their advertisement as a High-Yield Investment Program (HYIP)2. They try to lure unsuspecting users with high interest rates and the promise of high returns [18, 19]. Much research has been done to detect Ponzi schemes that occur through malicious smart contracts, which we will refer to as "Smart Ponzi Schemes." Many malicious users choose Smart Ponzi schemes because they can proliferate and bring in more money before being caught. Footnote 2: HYIPs usually advertise yields of more than 100% per year to lure in victims and regularly use new investors’ money to pay off older investors. Government agencies have made some effort towards educating investors about crypto scams, for example by advising them to look for registered investments with documented token information and strategies [18, 20]. However, given how quickly the crypto landscape changes, these sources often lack the necessary information, making an automated technique much more practical and effective. Such a solution can be implemented by leveraging machine learning techniques that classify new addresses as soon as they become active on the blockchain. ### _Benford's Law_ We utilize _Benford's Law_[11] to create features for our machine learning classifiers. Benford's Law is a natural phenomenon that maps the occurrence of first and second digits in many naturally occurring numerical sets to the base ten logarithms for each respective digit [21]. For example, the frequency of the occurrence of the digit 1 would be calculated by: \[P(d)=\log_{10}(1+\frac{1}{d}) \tag{1}\] \[P(1)=\log_{10}(1+\frac{1}{1})=0.301... \tag{2}\]

Fig. 1: Transaction Subgraph for a Single Scam Address

Benford's Law was first documented by the Canadian-American astronomer Simon Newcomb, who noticed the pattern by observing that in logarithm tables, the earlier pages (those starting with 1 or 2) were much more worn than those that started with the later digits [22]. The law was later formalized by physicist Frank Benford, who tested it on numerous naturally occurring datasets, including the surface area of 335 rivers, values of 140 physical constants, and weights of 1800 molecules [11]. While many naturally occurring datasets follow Benford's Law, many do not.
For example, square roots and reciprocals of consecutive natural numbers, a list of local telephone numbers, and terminal digits in pathology data (due to rounding) violate Benford's Law [23]. General criteria for distributions that are expected to follow Benford's Law are given below [24]: 1. Distributions where the mean is greater than the median and the skew is positive 2. Numbers resulting from a combination (add/mult) 3. Transaction-level data Benford's law has been used to detect fraud, particularly with fraudulent credit card transactions and applications in detecting money laundering and network intrusion. Each application of Benford's Law relies on the underlying distribution following Benford's Law and the fact that malicious actors tend to break this distribution and approach a more uniform one [25]. In sophisticated cases, it was found that many actors used transactions that followed Benford's Law for the first digits, but the illegal transactions still failed Benford's Law for the second digits. Previous works have shown that many aspects of cryptocurrency data follow Benford's Law [24, 25]. ## III Problem Formulation In this work, we examine the transaction graph for Ethereum addresses and extract and transform the raw data into features used with various machine-learning classifiers. We focus on two primary research questions: 1. To what extent does Benford's Law distinguish between fraudulent and legitimate users? 2. How can Benford's Law be used to build a more effective classifier for cryptocurrency Ponzi schemes? Many previous methods extract smart contract code for the basis of their features. While the scams that operate on smart contracts grow much quicker, there are still scams that happen without a smart contract. We thus examine methods for predicting traditional Ponzi schemes and Smart Ponzi schemes. We also investigate the result of using features based on statistical fraud detection methods and, in particular, measuring the similarities between transactional value distributions and Benford's Law for first and second digits. By analyzing the Ethereum blockchain, we form a transaction graph G = (V, E) where the vertices V are addresses and edges E are the transactions between addresses. The edges hold transaction information, like the amount transferred, gas limit, and transaction timestamp. Graph mining techniques are then used to supplement the features derived from the distributions. We analyze online repositories of reported scam addresses to provide labels, \(Y\), for the addresses in our graph where \(Y=+1\) indicates a scam and \(Y=-1\) indicates a non-scam. This graph is then used to extract features based on transaction statistics and distributions of the transaction values to then train a classifier. ## IV Data Throughout this work, we investigated many sources to find comprehensive and reliable sources of Ethereum transaction data and reported scam data. Our dataset consisted of 1676 addresses with approximately 2.6 million transactions in total. This set of addresses consists of user activity, smart contracts, MEV Bots, and other DeFi applications. ### _Blockchain Data Sourcing_ We used an academic license to query the Amberdata API ([https://www.amberdata.io](https://www.amberdata.io)) to collect information on cryptocurrency transactions. In addition to the raw on-chain data provided by Ethereum, they offer identifiers to transactions that belong to exchanges, DeFi applications, and transactions that span across different blockchains. 
We used Amberdata to get a much more comprehensive transaction history for our addresses and to quickly sort out user addresses from smart contracts. ### _Class Label Sourcing_ A particular challenge when creating the dataset was to ensure the integrity of the scam and non-scam data labels. For scam addresses, we sourced addresses from online repositories associated with other works in identifying crypto scams. Xia et al. developed a dataset of scam tokens that appeared on the Uniswap Exchange [3]. In a later paper, Xia et al. developed a dataset of about 185 scam addresses across Bitcoin, Ethereum, and other blockchains and a similar dataset corresponding to scam web domains primarily used in phishing attacks [26]. In addition to these repositories, we used a GitHub repository created by Tomasz Nurkiewicz that aggregated news stories on significant crypto scams and the addresses associated with them [27]. Our most important source of scam addresses came from the Etherscan ([https://etherscan.io](https://etherscan.io)) tagging system. Their tagging system ([https://etherscan.io/labelcloud](https://etherscan.io/labelcloud)) identifies 564 different labels, ranging from the addresses and smart contracts associated with Uniswap to addresses associated with reported phishing attacks. Etherscan has a free API that provides access to these labels [28]. When gathering the non-scam addresses, we similarly used Etherscan labels to pull addresses for trusted smart contracts. In particular, we pulled addresses associated with Uniswap, Aave, Compound, and OpenSea. While these applications are considered reliable, many of their addresses have thousands of transactions. To account for user addresses, we therefore looked at addresses verified by DeFi applications, particularly on OpenSea3 and Axie Infinity4. The remaining non-scam addresses came from pulling addresses that had traded on a set of blocks in March 2022 and checking them against user-reported scams on ScamAlert5. Footnote 3: [https://opensea.io](https://opensea.io) Footnote 4: [https://axieinfinity.com](https://axieinfinity.com) Footnote 5: [https://scam-alert.io](https://scam-alert.io) ### _Feature Extraction_ To get the features used to train our classifiers, we extracted the transaction graph for each address and then used it to generate a statistical representation of that graph. We examined the number of transactions, the number of unique counterparty addresses, the gas limits, and the value transferred. Each feature was broken down between incoming and outgoing transactions, and the gas limit and value metrics were represented by their mean, median, and standard deviation. We used this breakdown of the transaction graph to generate our features because a similar representation of the transaction graph worked well in the token classifiers of previous work on scam tokens [3]. We modified it by adding the gas limits and the median to the feature set. These features were then supplemented with the Chi-Squared and KS test values for the first and second digits to quantify their fit with Benford's Law. ## V Methodology For this work, we broke our investigation into two parts. The first tests whether Benford's Law fits cryptocurrency data for legitimate and scam-labeled addresses. The second builds a series of classifiers for scam addresses based solely on the transaction graph.
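To make the graph formulation of Section III and the feature set above concrete, here is a rough sketch of building the transaction graph and the per-address statistics, supplemented with a Benford fit statistic. The field names, helper names, and toy transactions are ours, and `benford_expected`/`observed_freq` refer to the earlier sketch; none of this is the authors' actual pipeline.

```python
import networkx as nx
import numpy as np

# Hypothetical raw transactions: (from_address, to_address, value_eth, gas_limit, timestamp)
raw_txns = [
    ("0xaaa", "0xbbb", 1.204, 21000, 1648000000),
    ("0xbbb", "0xccc", 0.731, 50000, 1648000050),
    ("0xaaa", "0xccc", 19.5, 21000, 1648000100),
]

# Directed multigraph: vertices are addresses, edges are individual transactions
G = nx.MultiDiGraph()
for sender, receiver, value, gas_limit, ts in raw_txns:
    G.add_edge(sender, receiver, value=value, gas_limit=gas_limit, timestamp=ts)

def stats(xs, prefix):
    xs = np.asarray(xs, dtype=float)
    if xs.size == 0:
        return {f"{prefix}_mean": 0.0, f"{prefix}_median": 0.0, f"{prefix}_std": 0.0}
    return {f"{prefix}_mean": xs.mean(), f"{prefix}_median": np.median(xs), f"{prefix}_std": xs.std()}

def address_features(G, addr):
    feats = {}
    for direction, edges in (("in", list(G.in_edges(addr, data=True))),
                             ("out", list(G.out_edges(addr, data=True)))):
        values = [d["value"] for _, _, d in edges]
        gas = [d["gas_limit"] for _, _, d in edges]
        peers = {u for u, _, _ in edges} if direction == "in" else {v for _, v, _ in edges}
        feats[f"{direction}_txn_count"] = len(edges)
        feats[f"{direction}_unique_addresses"] = len(peers)
        feats.update(stats(values, f"{direction}_value"))
        feats.update(stats(gas, f"{direction}_gas"))
    # Benford fit feature: chi-squared-style distance between observed and expected digit frequencies
    all_values = [d["value"] for _, _, d in G.in_edges(addr, data=True)] + \
                 [d["value"] for _, _, d in G.out_edges(addr, data=True)]
    obs, exp = observed_freq(all_values, 1), benford_expected(1)
    feats["benford_chi2_d1"] = sum((obs.get(k, 0.0) - p) ** 2 / p for k, p in exp.items())
    return feats

print(address_features(G, "0xaaa"))
```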
### _Measuring Fit with Benford's Law_ To measure the fit with Benford's Law, we first separated the addresses by their scam and non-scam labels and used two metrics, the Chi-Squared [29] and Kolmogorov-Smirnov (KS) [30] tests, to quantify the similarity. The Chi-Squared test is recommended for distributions with a large number of samples. However, since not all of the addresses in our dataset have many transactions, we also consider the KS test because it has been shown to better account for minor differences in the distributions [31]. We used both measures as classifier features but later found that the KS test is not significant in any of our classifiers. ### _Building Classifiers_ For this investigation, we considered five machine-learning classification methods: (i) Logistic Regression [32], (ii) Random Forest [33], (iii) Support Vector Machine (SVM) [34], (iv) Decision Tree [35], and (v) LightGBM [36]. LightGBM is a gradient-boosting framework that uses tree-based learning algorithms. It was initially developed by Microsoft and is now an open-source tool [37]. We randomly split our data, with 20% held out as test data and 80% used for training. The training data is further split, with 15% used for validation and the rest for training. ## VI Results ### _Cryptocurrency and Benford's Law_ When investigating the distribution of transactions in relation to Benford's Law, we found that the scam addresses had a clear divergence in many cases. In Figure 2, we compare two addresses, each with a similar number of transactions (the scam address had 1404, and the non-scam address had 1426). The non-scam address (blue) follows Benford's Law quite closely, whereas the scam address (orange) does not fit Benford's Law at all. This is naturally not the case for all scam addresses; some show much more subtle differences.
Fig. 2: Examining a scam and non-scam address
While many scam addresses had very little correlation with Benford's Law, we found that when examining the scam transactions in aggregate, the distribution mapped much closer to Benford's Law. However, there are still discrepancies, primarily with the digits 1 and 5, which can be seen more clearly in Figure 3 below, where we compare all the transactions in each category to Benford's Law. In mapping the distribution of the digits in the second position, we found that both categories had more occurrences of the digit 0 than Benford's Law for second digits predicts, but the scam category was still significantly higher than the non-scam. This caused the non-scam category to follow Benford's Law much more closely than the scam category, as its smaller margin in the occurrences of 0 meant the other digits were not as divergent from Benford's Law. Using the Chi-Squared and KS tests on all scam/non-scam transactions, we then measured the digit distributions' fit with Benford's Law. We found that both distributions fit Benford's Law for the first digit quite well, although the non-scam transactions still had a closer fit. For the second digits, neither distribution fit as well as the first digits, but the non-scam transactions had a significantly closer fit than the scam transactions, as seen in Table II with the Chi-Squared test in particular. For individual addresses, we found many more scam addresses with a higher Chi-Squared test value. The mean of the first-digit Chi-Squared values among all the scam addresses was 1.37, compared to 1.01 for non-scam addresses.
This gap significantly widens when looking at the second-digit Chi-Squared test values. The scam addresses had an average of 3.29, whereas the non-scam addresses averaged 1.13. This further indicates a distinguishing feature between the two classes. The Chi-Squared and KS tests clearly distinguish between the distributions for scam and non-scam transaction values. Both metrics supplement the statistical transaction features in training the classifiers. However, we can already predict that the second digit distributions will be a more effective separating feature than the 1st digit. Further, the Chi-Squared test will be a better separator than the KS test as it is more sensitive to the differences between two distributions. The results in Table Table II can help to answer our first research question on the effectiveness of Benford's Law at separating between a scam and non-scam cryptocurrency addresses. The results from the first digit distributions show a noticeable separation between scam and non-scam; however, it is a considerably slim margin. The second digit distributions show a much more significant margin between scam and non-scam, which draws us to the conclusion that Benford's Law for Second Digits provides a helpful distinguishing feature, whereas the first digit distribution is not very effective. This result is further reinforced by our results in the next section, which shows that the second-digit features rank much higher in importance than the first-digit features. ### _Classifiers with Benford's Law Features_ As seen in Table III, the LightGBM model performed better than the other methods examined, which was expected and followed our results with the validation dataset. The decision tree with Adaboost [35] was the second closest in correctly classifying the scam addresses (recall), but it was limited by its misclassification of the non-scam addresses (precision). The LightGBM model [36] significantly outperformed the decision tree on the test data. With the Support Vector Machine and the Logistic Regression Model, the classifier tended to fall into the trap of classifying everything as non-scam. We expected these models to perform poorly, and many features were similar to non-scam addresses, and their poor performance also likely resulted from the dataset's class imbalance. When looking at feature importance for the non-tree-based model, we found that the model only used 3-4 primary features for classification, always with a feature based on Benford's Law for second digits. The LightGBM model, however, appears to have a less skewed feature ranking, as seen in Figure 4, which is expected, given that LightGBM is designed to build more robust models. When examining the feature importance for each model, it was found that the Chi-Squared measurement for the second digit was considered an essential feature in the logistic regression, random forest, and LightGBM models and was the second-most important feature in the decision tree model. In the SVM, it did not rank high in terms of importance. However, as expected, the SVM model was the worst-performing among the methods tested. In the LightGBM model, it was an essential feature, which is seen more clearly in Figure 4. The LightGBM indicates that Benford's Law is an effective way to separate the scam from the non-scam. We tested the effectiveness of classifiers without Benford's Law features. Those results are discussed in the following section. 
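For reference, the following is a minimal sketch of the split-train-rank procedure used in these comparisons. The stand-in data, parameter values, and feature names are illustrative placeholders, not the paper's dataset or tuned settings.

```python
import lightgbm as lgb
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Stand-in data; in practice X holds the per-address features and y the scam/non-scam labels
X_arr, y = make_classification(n_samples=1676, n_features=30, weights=[0.7], random_state=0)
X = pd.DataFrame(X_arr, columns=[f"f{i}" for i in range(30)])

# 80/20 train/test split, then 15% of the training portion held out for validation
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.20, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.15,
                                                  stratify=y_trainval, random_state=42)

clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_train, y_train, eval_set=[(X_val, y_val)])
print(classification_report(y_test, clf.predict(X_test)))

# Rank features by the model's importance scores (split counts by default)
importances = pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))
```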
Interestingly, in Figure 4, the KS test ranked relatively low for both the first and second digits, likely due to the nature of the scam data. Most scams in the dataset had many transactions, since many were operated through smart contracts and could thus grow more quickly. This phenomenon is seen clearly in Table II, as there is a considerable gap between the scam and non-scam results, but both performed poorly. However, according to the feature ranking results, the Chi-Squared results for the second digits are essential for distinguishing between scam and non-scam addresses. ### _Classifiers without Benford's Law Features_ We also trained the classifiers without the features related to Benford's Law to measure the improvement or deterioration attributable to the Benford's Law features. We found that nearly every model performed worse, with lower precision, recall, and F1-score than with Benford's Law features. The exception was the decision tree with Adaboost, which had an overall lower accuracy without the Benford's Law features, resulting from a lower precision but a higher recall. From Table III, we can see that the accuracy with Benford's Law features increased by about two percentage points on average, with the macro average accuracy increasing by 0.105 in the LightGBM model and 0.0421 in the decision tree model.
Fig. 3: Benford’s Law on First Digits on all transactions by class
This suggests that features related to Benford's Law can help with over-fitting, since improving the macro average requires an accuracy improvement in each class. These results help to answer our second research question on the effectiveness of Benford's Law at classifying addresses. With the improvement in both macro average accuracy and weighted average accuracy from the addition of Benford's Law features, we can conclude that Benford's Law is very effective as a training feature for classification. ## VII Related Work Many academic and commercial solutions have been developed to identify phishing attacks. Abdelhamid et al. proposed a multi-label classification method to tackle phishing websites by extracting correlations in website features and, in particular, similarity in URLs [38]. Zouina et al., on the other hand, extracted features from website URLs and trained an SVM to classify phishing scams, achieving an accuracy score of 0.956 [39]. Many of these detection systems rely on features not apparent from the transaction graph, so the assessment of an address alone is limited. For this reason, most of our scam data in this paper comes from Ponzi schemes, as they are scams where most activity happens on the blockchain. In detecting malicious smart contracts, Chen et al. proposed a method that examines the bytecode of the smart contract to extract features for classification through a dual-ensemble method to address the class imbalance problem [40]. It was shown to perform well and detect smart Ponzi schemes before they attract a significant victim base [40]. While this approach is great for tackling the most significant and damaging Ponzi schemes on Ethereum, those that operate without a smart contract can slip through the cracks. This work examines transactional data (not bytecode) of addresses operating on Ethereum, including smart contracts, MEV bots, and human users. As many of the models shown in this paper likely struggled with class imbalance, using a dual-ensemble model proposed by Chen et al. [40] would be an exciting avenue for further research.
Specifically, with Bitcoin, much of the research takes a graphical approach to feature extraction when examining address-based Ponzi schemes. Address-based schemes resemble traditional Ponzi schemes, in which money is sent directly to another person's address. Bartoletti et al. proposed a set of features that focused on the lifetime and activity of Bitcoin addresses before applying three different classifiers: Repeated Incremental Pruning to Produce Error Reduction (RIPPER) [41], Bayes Network [42], and a Random Forest [33], with varying cost constraints. The random forest approach yielded the best results across all cost configurations. When crafting their dataset, they considered the skewed distribution of Ponzi scheme addresses relative to legitimate addresses, testing on a dataset of 32 Ponzi schemes and 6000 legitimate addresses [4].
Fig. 4: Feature Importances for the LightGBM Model
We apply similar graph-based feature extractions, with the exception of the address lifetime. The features used in this work focus on measuring the frequency and value of transactions and gas limits, supplemented with features measuring fit with Benford's Law. Within the Ethereum ecosystem, Xia et al. proposed a method of detecting scam tokens on the Uniswap decentralized exchange [3]. They generated their dataset by looking at tokens with tickers identical to legitimate tokens and at reported scam tokens from Etherscan, then applying a Guilt-By-Association expansion on the creators of these scam tokens to see which other tokens they created, further classifying those as scams. They queried their data from The Graph ([https://thegraph.com/hosted-service/](https://thegraph.com/hosted-service/)) and extracted features on both the tokens themselves and early investors before training many different machine learning classifiers to determine the best performing model. The random forest model performed best, with precision, recall, and an F1 score all around 0.96. They recognize the particular challenge of ground-truth labeling: as their model predicts scams, they must investigate the addresses newly classified as scams, often finding suspicious activity but not enough to confidently say it was a scam. While that paper focused on the Uniswap exchange specifically, we found that the features used to train their model were very comprehensive, and we used similar features when designing our model. By contrast, our work focuses on classifying all address entities on Ethereum rather than a specific exchange. Much previous work has focused on Ponzi schemes that operate with smart contracts, classified as "Smart Ponzi Schemes." Many malicious users choose Smart Ponzi schemes because they can proliferate and bring in more money before being caught. Chen et al. proposed a method that looks at the bytecode of the smart contract to extract features before training an XGBoost classification model [40]. Chen et al. furthered their work on Smart Ponzi Schemes with a novel dual-ensemble classification method focused on overcoming the class imbalance problem. It was shown to perform well and detect Ponzi schemes before they attract a significant victim base [43]. These approaches are great for tackling the most extensive and damaging Ponzi schemes, which comprise most of the Ponzi schemes on Ethereum. However, many smaller Ponzi schemes without a smart contract can slip through the cracks. An exciting field within blockchain security is Graph Neural Networks. Shen et al.
developed a neural network framework to infer the identity of users on a network by examining a subgraph of the user's activity [44]. Their method significantly improved baseline models, which they attribute to a deeper convolution layer and more compelling features. Further, Liu et al. developed a hyperbolic graph neural network to identify the hierarchical structure of subsection of the Ethereum ecosystem [45]. Their method was able to identify the most influential entities on the network in accordance with the address data compiled by Etherscan. Although many works focus on using graph neural networks for identity classification, their applications to fraud detection are an exciting avenue for further research. ## VIII Conclusion and Future Work From our research into related works (Section VII), this is the first paper to examine the use of Benford's Law to predict scams in cryptocurrencies. With recent actions by the US Department of Justice to bring charges against fraudulent cryptocurrency actors, Benford's Law can prove to be a crucial piece of evidence in investigations as it has previously been admitted as evidence in local, state, and federal courts [12, 46]. Thus, methods that use Benford's Law to classify scams have been used as evidence for legal action in the United States. Further, financial scams will become a more critical research problem as cryptocurrencies become more widely used. We demonstrated the importance of a classical fraud detection method in the new financial ecosystem powered by blockchain. We created a gradient-boosted tree model using the labeled scam data and the LightGBM library. The experimental results indicate that Benford's Law distinguishes between scam addresses and non-scam addresses, and those metrics involving Benford's Law for second digits are a vital feature for classification. The most significant result of our method is that it relies solely on blockchain transaction data. By examining on-chain and internal transactions, our model can detect scams that operate with or without smart contracts or bots, spanning the range of attack sophistication. Separating the classification task into two may prove beneficial for more accurate detection of the distinctions in behavior between smart Ponzi schemes and traditional Ponzi schemes. Using a data source, such as Amberdata, that can separate smart contracts from addresses would be helpful in this direction, as you could use a more robust code analysis method to reinforce a model targeting traditional schemes. Another area for further research lies in getting a better metric to match the fit with Benford's Law. While the Chi-Squared method performs exceptionally well with larger sample sizes, it is limited by sample size. So with very few addresses, a better metric could yield a better feature set, resulting in a better-performing classifier. Conversely, the Kolmogorov-Smirnov test proved to be ineffective in classification. A robust metric for comparing small and large samples to Benford's Law would be central to improving its applicability to detecting fraudulent transactions concerning cryptocurrencies. ## Acknowledgements We acknowledge the support from Amberdata.io for giving us a free academic license to access their API. We also acknowledge the support from National Science Foundation Industry-University Cooperative Research Centers (NSF IUTCRC) Center for Research toward Advancing Financial Technologies (CRAFT) research grant for this research.
2307.14634
Fact-Checking of AI-Generated Reports
With advances in generative artificial intelligence (AI), it is now possible to produce realistic-looking automated reports for preliminary reads of radiology images. This can expedite clinical workflows, improve accuracy and reduce overall costs. However, it is also well-known that such models often hallucinate, leading to false findings in the generated reports. In this paper, we propose a new method of fact-checking of AI-generated reports using their associated images. Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings. To train such an examiner, we first created a new dataset of fake reports by perturbing the findings in the original ground truth radiology reports associated with images. Text encodings of real and fake sentences drawn from these reports are then paired with image encodings to learn the mapping to real/fake labels. The utility of such an examiner is demonstrated for verifying automatically generated reports by detecting and removing fake sentences. Future generative AI approaches can use the resulting tool to validate their reports leading to a more responsible use of AI in expediting clinical workflows.
Razi Mahmood, Ge Wang, Mannudeep Kalra, Pingkun Yan
2023-07-27T05:49:24Z
http://arxiv.org/abs/2307.14634v1
# Fact-Checking of AI-Generated Reports ###### Abstract With advances in generative artificial intelligence (AI), it is now possible to produce realistic-looking automated reports for preliminary reads of radiology images. This can expedite clinical workflows, improve accuracy and reduce overall costs. However, it is also well-known that such models often hallucinate, leading to false findings in the generated reports. In this paper, we propose a new method of fact-checking of AI-generated reports using their associated images. Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings. To train such an examiner, we first created a new dataset of fake reports by perturbing the findings in the original ground truth radiology reports associated with images. Text encodings of real and fake sentences drawn from these reports are then paired with image encodings to learn the mapping to real/fake labels. The utility of such an examiner is demonstrated for verifying automatically generated reports by detecting and removing fake sentences. Future generative AI approaches can use the resulting tool to validate their reports leading to a more responsible use of AI in expediting clinical workflows. Keywords:Generative AI Chest X-rays Fact-checking Radiology Report. ## 1 Introduction With the developments in radiology artificial intelligence (AI), many researchers have turned to the problem of automated reporting of imaging studies [4, 6, 13, 15, 16, 17, 21, 24]. This can significantly reduce the dictation workload of radiologists, leading to more consistent reports with improved accuracy and lower overall costs. While the previous work has largely used image captioning [22, 25] or image-to-text generation methods for report generation, more recent works have been using large language models (LLMs) such as GPT-4 [7, 14]. These newly emerged LLMs can generate longer and more natural sentences when prompted with good radiology-specific linguistic cues [8, 5]. However, with powerful language generation capabilities, hallucinations or false sentences are prevalent as it is difficult for those methods to identify their own errors. This has led to fact-checking methods for output generated by LLMs and large vision models (LVMs)[18, 1, 20]. Those methods detect errors either through patterns of phrases found repeatedly in text or by consulting other external textual sources for the veracity of information[18, 1, 20]. In radiology report generation, however, we have a potentially good source for fact checking, namely, the associated images, as findings reported in textual data must be verifiable through visual detection in the associated imaging. However, as most methods of report generation already examine the images in order to detect findings and generate the sentences, bootstrapping them with an independent source of verification is needed in order to identify their own errors. In this paper, we propose a new imaging-driven method of fact-checking of AI-generated reports. Specifically, we develop a fact-checking examiner to differentiate between real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings. To train such an examiner, we first create a new dataset of fake reports by perturbing the findings in the original ground truth radiology reports associated with images. 
Text encodings of real and fake sentences drawn from these reports are then paired with image encodings to learn the mapping to real or fake labels via a classifier. The utility of such an examiner is demonstrated for verifying automatically generated reports by detecting and removing fake sentences. Future generative AI approaches can use the examiner to bootstrap their report generation, leading to potentially more reliable reports. This can lead to a more responsible use of AI in expediting future clinical workflows. ## 2 Overall approach Our overall approach to training and inference using the examiner is illustrated in Figure 1. To create a robust examiner that is not attuned to any particular automated reporting software, it is critical to create a training dataset that encompasses a wide array of authentic and fabricated samples. Hence we first synthesize a dataset of real and fake reports using a carefully controlled process of perturbation of actual radiology reports associated with the images. We then pair each image with sentences from its corresponding actual report as real sentences with the real label, and with the perturbed sentences from fake reports as fake sentences with the fake label. Both the textual sentences and the images are then encoded by projecting them into a joint image-text embedding space using the CLIP model [19].
Figure 1: Illustration of the training and inference phases of the image-driven fact-checking examiner. (a) Training of the examiner. (b) Use of examiner in inference mode for report verification.
The encoded vectors of the image and the paired sentence are then concatenated to form the feature vector for classification. A binary classifier is then trained on this dataset to produce a discriminator for real/fake sentences associated with a given image. The fact-checker can be used for report verification in inference mode. Given an automatically produced radiology report and the corresponding input imaging study, the examiner extracts sentences from the report, and each image-sentence pair is then subjected to the same encoding process as used in training. The combined feature vector is then given to the classifier for determination of the sentence as real or fake. A revised report is assembled by removing those sentences that are deemed fake by the classifier. The rest of the paper describes the approach in detail. In Section 3, we model the different types of errors found in automated reports and present an approach for synthesizing these errors by centering them around findings in sentences. We then present our examiner and show how it can be applied to verify automatic reports in Section 4. Finally, in Section 5, we present results describing details of the dataset created and the evaluation experiments. ## 3 Generation of a synthetic report dataset The key idea in synthetic report generation is to center the perturbation operations around findings described in the finding sections of reports, as these are critical to preliminary reads of imaging studies. ### Modeling finding-related errors in automated reports The typical errors seen in the finding sections of reports can be due to (a) addition of incorrect findings not seen in the accompanying image, (b) exchange errors, where certain findings are missed and others added, (c) reverse findings reported, i.e.
positive instance reported when negative instances of them are seen in image and vice versa, (d) spurious or unnecessary findings not relevant for reporting, and finally (e) incorrect description of findings in terms of fine-grained appearance, such as extent of severity, location correctness, etc. From the point of real/fake detection, we focus on the first 3 classes of errors for synthesis as they are the most common. Let \(R=\{S_{i}\}\) be a ground-truthed report corresponding to an image \(I\) consisting of sentences \(\{S_{i}\}\) describing corresponding findings \(\{F_{i}\}\). Then we can simulate a random addition of a new finding by extending the report \(R\) as \(R_{a}=\{S_{i}\}\cup\{S_{a}\}\) where \(S_{a}\) describes a new finding \(F_{a}\not\in\{F_{i}\}\). Similarly, we simulate condition (b) through an exchange of finding where one finding sentence \(S_{r}\) is removed to be replaced by another finding sentence \(S_{a}\) as \(R_{e}=\{S_{i}\}-\{S_{r}\}\cup\{S_{a}\}\). Finally, we can simulate the replacement of positive with negative findings and vice versa to form a revised report \(R_{r}=\{S_{i}\}-\{S_{p}\}\cup\{S_{p^{\prime}}\}\) where \(S_{p}\) is a sentence corresponding to a finding \(F_{p}\) and \(S_{p^{\prime}}\) is a sentence corresponding to the finding \(F_{p^{\prime}}\) which is in opposite sense of the meaning. For example, a sentence "There is pneumothorax", could be replaced by "There is no pneumothorax" to represent a reversal of polarity of the finding. Figure 2 shows examples of each of the type of operations of add, exchange and reverse findings respectively. ### Detecting findings in sentences Since detecting findings is key to our approach, our synthetic dataset generation focused on chest X-ray datasets as finding detectors are well-developed for these datasets. Further, the majority of work on automated reporting has been done on chest X-rays and finding-labeled datasets are publicly available[2, 12, 11]. However, most of the existing approaches summarize findings at the report level. To locate findings at the sentence level, we used NLP tools such as Spacy to separate sentences. We then used a combination of ChexPert[11] labeler and NegSpacy[9] parser to extract positive and negative findings from sentences. Table 1 shows examples of findings detected in sentences. The detected findings were then validated against the ground truth labels provided at the report level in the datasets. All unique findings across reports were then aggregated into a pool \(\{F_{pool}\}\) and all unique sentences in the original reports were aggregated and mapped to their findings (positive or negative) to create the pool of sentences \(\{S_{pool}\}\). ### Fake report creation For each original report \(R\) associated with an image \(I\), we create three instances of fake reports \(R_{a},R_{e},R_{r}\) corresponding to the operations of addition, exchange and reversal of findings respectively. Specifically, for creating \(R_{a}\) type of reports, Figure 2: Illustration of the fake reports drawn from actual reports. (a) Frontal and lateral views of a chest X-ray. (b) Corresponding original and fake radiology reports. The affected sentences during the synthesis operation are shown in red. we randomly draw from \(S_{pool}\) a sentence that contains a randomly selected finding \(F_{a}\notin\{F_{i}\}\) where \(\{F_{i}\}\) are the set of findings in \(R\) (positive or negative). 
Similarly, to create \(R_{e}\), we randomly select a finding pair \((F_{ei},F_{eo})\) where \(F_{ei}\in\{F_{i}\}\) and \(F_{eo}\in\{F_{pool}\}-\{F_{i}\}\). We then remove the sentence associated with \(F_{ei}\) in \(R\) and replace it with a randomly chosen sentence associated with \(F_{eo}\) in \(\{S_{pool}\}\). Finally, to create the reversed findings reports, \(R_{r}\), we randomly select a positive or negative finding \(F_{p}\in\{F_{i}\}\), remove its corresponding sentence and swap it with a randomly chosen sentence \(S_{p^{\prime}}\in\{S_{pool}\}\) containing a finding \(F_{p^{\prime}}\) that is reversed in polarity. The images, their perturbed findings and the associated sentences were recorded for each fake report so that they could be used for forming the pairing dataset for training the fact-checking examiner described next.

\begin{table} \begin{tabular}{l|l} \hline \hline **Sentences** & **Detected findings** \\ \hline There is effusion and pneumothorax. & ‘effusion’, ‘pneumothorax’ \\ \hline No pneumothorax, pleural effusion, but there is lobar air space consolidation. & ‘consolidation’, [‘pneumothorax’, ‘pleural effusion’] \\ \hline No visible pneumothorax or large pleural effusion. & [‘pneumothorax’, ‘pleural effusion’] \\ \hline Specifically, no evidence of focal consolidation, pneumothorax, or pleural effusion. & [‘focal consolidation’, ‘pneumothorax’, ‘pleural effusion’] \\ \hline No definite focal alveolar consolidation, no pleural effusion demonstrated. & [‘alveolar consolidation’], [‘pleural effusion’] \\ \hline \hline \end{tabular} \end{table} Table 1: Illustration of extracting findings from reports. Negated findings are shown within square brackets.

\begin{table} \begin{tabular}{l|l|l|l|l|l|l|l} \hline \hline **Dataset** & **Patients** & **Images/Views** & **Reports** & **Pos/Neg Findings** & **Unique Sentences** & **Image-Sent Pairs** & **Fake Reports** \\ \hline Original & 1786 & 7470/2557 & 2557 & 119/64 & 3850 & 25535 & 7671 \\ Training & 1071 & 2037 & 2037 & 68 & 2661 & 20326 & 4074 \\ Testing & 357 & 254 & 254 & 68 & 919 & 2550 & 508 \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the fake report dataset distribution. 2557 frontal views were retained for images. 64 negative findings were retained and 114 positive findings.

## 4 Fact-checking of AI-generated reports We now present details of our fact-checking examiner and discuss how it can be used to improve the quality of automatically generated reports through verification. ### Fact-checking Examiner The fact-checking examiner is a classifier using deep-learned features derived from a joint image-text encoding. Specifically, since we combine images with textual sentences, we chose a feature encoding that is already trained on joint image and text pairs. In particular, we chose the CLIP joint image-text embedding model [19] to project the image and the textual sentence each into a common 512-length encoding. The CLIP model we chose was originally pre-trained on natural image-text pairs and subsequently trained on radiology report-image pairs as described in [4]. We then concatenate the image and textual embeddings into a 1024-length feature vector to train a binary classifier. In the splits chosen, the real/fake incidence distribution was relatively balanced (2:1) so that the accuracy could be used as a reliable measure of performance.
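As an illustration of the feature construction just described, the sketch below uses the open-source CLIP package with a generic checkpoint; the authors' model was further trained on radiology report-image pairs, so the checkpoint and file names here are placeholders rather than the paper's setup.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # placeholder checkpoint

def pair_feature(image_path, sentence):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([sentence]).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)  # 512-dim image encoding
        txt_emb = model.encode_text(tokens)  # 512-dim sentence encoding
    # Concatenate into the 1024-dim vector used to train the real/fake classifier
    return torch.cat([img_emb, txt_emb], dim=-1).squeeze(0).cpu().numpy()

x = pair_feature("chest_xray_001.png", "There is no pleural effusion.")
```

Each such 1024-length vector, paired with its real/fake label, becomes one training example for the binary classifier.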
We experimented with several classifiers, ranging from support vector machines (SVM) to neural net classifiers, and as we observed similar performance, we retained a simple linear SVM as sufficient for the task. ### Improving the quality of reports through verification We apply the fact-checking examiner to filter out incorrect/irrelevant sentences in automatically produced reports, as shown in Figure 1b. Specifically, given an automatically generated report for an image, we pair the image with each sentence of the report. We then use the same CLIP encoder used in training the examiner to encode each pair of image and sentence to form a concatenated feature vector. The sentences that the examiner predicts to be fake are then removed to produce the revised report. We develop a new measure to judge the improvement in the quality of the automatic report after applying the fact-checking examiner. Unlike popular report comparison measures such as BLEU and ROUGE scores, which perform lexical comparisons, we use a semantic distance measure formed from encoding the reports through large language models such as SentenceBERT[10]. Specifically, let \(R=\{S_{i}\}\), \(R_{auto}=\{S_{auto}\}\), \(R_{corrected}=\{S_{corrected}\}\) be the original, automated, and corrected reports with their sentences respectively. To judge the improvement in quality of the report, we adopt SentenceBERT[10] to encode the individual sentences of the respective reports to produce an average encoding per report, denoted \(E_{R},E_{auto},E_{corrected}\) respectively. Then the quality improvement score per triple of reports \((R,R_{auto},R_{corrected})\) is given by the difference in the cosine similarity between the pairwise encodings as \[QI(R,R_{auto},R_{corrected})=d(E_{R},E_{corrected})-d(E_{R},E_{auto}) \tag{1}\] where \(d\) is the cosine similarity between the average encodings. This measure allows for unequal lengths of reports. A positive value indicates an improvement while a negative value indicates a worsening of the performance. The overall improvement in the quality of automatically generated reports is then given by \[QI=n_{positive}/n_{R} \tag{2}\] where \(n_{positive}=\#\{R:d(E_{R},E_{corrected})>d(E_{R},E_{auto})\}\) is the number of reports for which the corrected report is closer to the original report after applying the examiner, and \(n_{R}\) is the total number of automated reports evaluated. ## 5 Results To test our approach for fact-checking of radiology reports, we selected an open access dataset of chest X-rays from Indiana University[3] provided on Kaggle, which contains 7,470 chest X-ray (frontal and lateral view) images with corresponding 2557 non-duplicate reports from 1786 patients. The dataset also came with annotations documenting important findings at the report level. Of the 1786 patients, we used a (60-20-20)% patient split for training the examiner, testing the examiner, and evaluating its effectiveness in report correction respectively, thus ensuring no patient overlap between the partitions. ### Fake report dataset created By applying NLP methods of sentence extraction, we extracted 3850 unique sentences from the radiology reports. By applying the finding extractor at the sentence level as described in Section 3.2, we catalogued a total of 119 distinct positive and 64 negative findings, as shown in Table 2. Using these findings and their sentences in the 2557 unique reports, and the 3 types of single perturbation operations described in Section 3.1, we generated 7,671 fake reports as shown in Table 2.
The training and test dataset for the fact-checking examiner was generated by randomly drawing sentences from the sentence pool \(\{S_{pool}\}\). Each image was first paired with each sentence from its original report, and the pair was given the "Real" label. The perturbed sentence drawn from \(\{S_{pool}\}\) for the fake reports was then retrieved from each fake report, paired with the image, and given the "Fake" label. By this process, we generated 20,326 pairs of images with real/fake sentences for training, and 2,550 pairs for testing, as shown in Table 2, using 80% of the 1786 patients. ### Fact-checking examiner accuracy Using the train-test splits shown in Table 2, we trained the fact-checking examiner with encodings of the image-sentence pairs shown in Table 2. The resulting classifier achieved an average accuracy of 84.2% and an AUC of 0.87, as shown in Figure 3(b).
Figure 3: Performance of real/fake report sentence differentiation.
By using 10-fold cross-validation in the generation of the (60-20-20) splits for the image-report dataset, and using different classifiers provided in the Sklearn library (decision tree, logistic regression, etc.), the average accuracy lay in the range \(0.84\pm 0.02\). ### Overall report quality improvement evaluation We evaluated the efficacy of the fact-checking examiner on two report datasets, one synthetic with controlled "fakeness" and another dataset generated by a published algorithm described in [21]. Specifically, using the 20% partition of patients from the Indiana reports that was not used to train or test the examiner, we selected 3089 of the fake reports shown in Table 2. We evaluated the improvement in report quality using the method described in Section 4.2. These results are summarized in Table 3. Since our fake reports had only one fake sentence added, the performance improvement, while still present, is modest at around 5.3%, but the quality still improved 89% of the time, as shown in Table 3. To test the performance on automated reports generated by existing algorithms, we obtained a reference dataset consisting of freshly created reports on the NIH image dataset[2] created by radiologists as described in [23]. We retained the output of an automated report generation algorithm for the same images described in [21]. As this algorithm reported the highest recorded clinical accuracy in comparison with manually created reports, any improvement provided for such reports by our examiner could imply an even greater improvement in quality for automated reports generated by other methods. A total of 198 pairs of original and automatically created reports, along with their associated imaging from the NIH dataset, was used for this experiment. The results of the quality improvement are shown in Table 3, row 2. As can be seen, the quality improvement is even greater for reports produced by automated report extraction methods. ## 6 Conclusion In this paper, we have proposed, for the first time, an image-driven verification of automatically produced radiology reports. A dataset was carefully constructed to elicit the different types of errors produced by such methods. A novel fact-checking examiner was developed using pairs of real and fake sentences with their corresponding imaging. The work will be extended in the future to cover a larger variety of defects, with extended evaluation on a larger number of automated reports.
\begin{table} \begin{tabular}{l|l|l|l} \hline \hline **Dataset** & **Reports** & **QI score** & **Similarity improvement** \\ \hline Synthetic Reports from Indiana & 3089 & 0.81 & 5.3\% \\ \hline NIH Reports & 198 & 0.89 & 15.4\% \\ \hline \hline \end{tabular} \end{table} Table 3: Report quality evaluation by our examiner on two automatically generated report datasets.
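For completeness, here is a short sketch of how the QI score of Section 4.2 could be computed with an off-the-shelf SentenceBERT model; the checkpoint choice and helper names are ours, not the paper's.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint choice is illustrative

def report_embedding(sentences):
    # Average the SentenceBERT encodings of a report's sentences
    return sbert.encode(sentences).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def qi_score(original, automated, corrected):
    e_r, e_auto, e_corr = map(report_embedding, (original, automated, corrected))
    return cosine(e_r, e_corr) - cosine(e_r, e_auto)  # Equation 1

def overall_qi(triples):
    # Equation 2: fraction of reports whose corrected version moved closer to the original
    return sum(qi_score(*t) > 0 for t in triples) / len(triples)
```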
2305.15353
A Virtual Reality Tool for Representing, Visualizing and Updating Deep Learning Models
Deep learning is ubiquitous, but its lack of transparency limits its impact on several potential application areas. We demonstrate a virtual reality tool for automating the process of assigning data inputs to different categories. A dataset is represented as a cloud of points in virtual space. The user explores the cloud through movement and uses hand gestures to categorise portions of the cloud. This triggers gradual movements in the cloud: points of the same category are attracted to each other, different groups are pushed apart, while points are globally distributed in a way that utilises the entire space. The space, time, and forces observed in virtual reality can be mapped to well-defined machine learning concepts, namely the latent space, the training epochs and the backpropagation. Our tool illustrates how the inner workings of deep neural networks can be made tangible and transparent. We expect this approach to accelerate the autonomous development of deep learning applications by end users in novel areas.
Hannes Kath, Bengt Lüers, Thiago S. Gouvêa, Daniel Sonntag
2023-05-24T17:06:59Z
http://arxiv.org/abs/2305.15353v1
# A Virtual Reality Tool for Representing, Visualizing and Updating Deep Learning Models ###### Abstract Deep learning is ubiquitous, but its lack of transparency limits its its impact on several potential application areas. We demonstrate a virtual reality tool for automating the process of assigning data inputs to different categories. A dataset is represented as a cloud of points in virtual space. The user explores the cloud through movement and uses hand gestures to categorise portions of the cloud. This triggers gradual movements in the cloud: points of the same category are attracted to each other, different groups are pushed apart, while points are globally distributed in a way that utilises the entire space. The space, time, and forces observed in virtual reality can be mapped to well-defined machine learning concepts, namely the latent space, the training epochs and the backpropagation. Our tool illustrates how the inner workings of deep neural networks can be made tangible and transparent. We expect this approach to accelerate the autonomous development of deep learning applications by end users in novel areas. Keywords:Virtual Reality Annotation Tool Latent Space Representation Learning ## 1 Introduction Machine learning (ML) with deep neural networks, or deep learning (DL), has achieved astonishing performance in many tasks [8], and systems based on DL are ubiquitous in our everyday lives. However, for most people these systems are black boxes--the algorithms powering them are not transparent, understandable, or even approachable. This lack of transparency raises ethical concerns [4] and limits the potential impact of ML on several novel applications. Interactive machine learning (IML) is the design and implementation of algorithms and intelligent user interface (IUI) frameworks that facilitate ML with the help of human interaction, and includes the mission to empower end users to develop their own domain-specific DL applications [13, 15]. We demonstrate a virtual reality (VR) tool for automating the common supervised ML task of assigning category labels to data inputs (e.g. classifying images). Traditionally, such ML tasks would be implemented through a pipeline that starts with data annotation, followed by model design, training, and finally deployment. Data annotation is the human-labor-intensive task of adding metadata (e.g. category labels) to a dataset with the purpose of providing examples to guide the training of an expert-designed ML model. The training process should render the model capable of generating sufficiently accurate category labels for previously unseen data inputs. At this stage, the resulting model can be deployed to power a user-facing system. In such a traditional system, end user interaction with the system is limited to providing input data and collecting back a prediction. While such black-box interaction patterns might suffice for many purposes, we propose an alternative interaction paradigm. ## 2 Demonstration Our tool is an IUI consisting of a deep neural network linked to a VR interface (see section 3 for a technical description). An input dataset is represented as a cloud of points in virtual space; for demonstration purposes, we use the MNIST dataset [2], a standard set of images of handwritten numerals. When entering the virtual space, the user stands outside the point cloud; this perspective offers a broad overview of the entire data set (figure 0(a)). 
At this stage, the user can already notice that the points are distributed in space so that the cloud largely occupies the entire virtual space--even if not uniformly. To get different perspectives on the data cloud the user can move in virtual space, either by physical movement or by using a teleport mechanism (figure 0(c)). Once inside the cloud of points, the user will see that each point is a cube, and the images (handwritten digits, in this case) are rendered as a texture on the surface of the cubes (figure 0(b)). Furthermore, the user might notice some degree of topological organisation in the cloud: curvy digits like 0s and 6s will be in one region of virtual space, rectilinear digits like 1s and 7s are in another region, and neighboring data points within each region tend to represent instances of the same digit. Besides moving in virtual space, the user can also use hand gestures to assign portions of the cloud to different groups reflecting class labels (i.e. digit identity)--in other words, to annotate the data. In the current implementation, that is done by creating and placing spheres in virtual space (figure 0(d)) and assigning them a label (figure 0(e)). While non-annotated data is displayed on gray cubes, user annotated data is displayed in colored cubes indicating the assigned class (figure 0(f)). Annotating data will cause the underlying network to be updated, a process perceived by the user as motion, or gradual reshaping, of the cloud. Motion will be perceived as if driven by three different forces: points of same category are attracted to each other, different groups are pushed apart, and the global distribution is such that the entire space is filled. Furthermore, the user will notice that the more data points get annotated, the more pronounced is the clustering of groups. Importantly, data annotation reshapes the entire virtual space, and the position of each data point in virtual space is independent of whether it has been manually labelled: data points that are yet unlabelled will Figure 1: Steps of the annotation process from the user’s point of view in the virtual space and model architecture. be spatially grouped together with labeled ones, as long as they represent similar handwritten digits. The topological organization and motion patterns observed in virtual space are direct, tangible consequences of the way the underlying deep neural network functions. ## 3 Tool Description The interactions experienced in VR, described in section 2, arise from the workflow presented in figure 2. Effectively, the user is annotating a dataset: the more data points are labelled, the more precisely separated the category clusters will become. As a result, annotation efficiency is expected to gradually increase. The deep neural network powering the IUI is composed of three modules: an encoder, a decoder, and a classifier (figure 3). The encoder maps input images onto a hidden layer made up of three units--in other words, it embeds images into a 3-dimensional latent representation. The choice of the number of hidden units is not arbitrary: each unit is displayed as a dimension of virtual space. While the encoder alone is responsible for _computing_ the representation of input images in virtual space, the other two components are essential to guide representation _learning_[1]. The decoder maps back from 3D latent space to a reconstruction of the input image, and together with the encoder it constitutes a variational autoencoder (VAE) [7]. 
The classifier, a shallow perceptron, maps from latent space onto user-provided category labels and was added to encourage cluster separation. Following standard procedures for training neural networks, each of these tasks is expressed formally through a function measuring the mismatch between generated and desired outputs (objective function). Learning takes place by iterating a two-step procedure known as gradient descent: fist computing the direction in which network parameters should change to minimize the mismatch (i.e. the gradient), then taking a small step in that direction. The motions gradually reshaping the point cloud in virtual space directly reflect the iterative update of network parameters by gradient descent. Table 1 establishes a direct parallel between the perspectives of the user and of the IUI system on the steps of the workflow shown in figure 2. ## 4 Discussion and Future Work We demonstrate an IUI tool for automating image classification in VR. An image dataset is represented as an actionable cloud of points that can be grouped into category classes with hand gestures. The architecture of the underlying neural network model consists of the combination of a VAE and a shallow classifier network, and the dynamics of the network learning process are experienced as structured motion patterns in virtual space. We chose a VR environment as IUI framework. In addition to cognitive and immersive aspects, the advantages of VR over two-dimensional screens for visualization and interaction with complex data have been demonstrated in recent publications [3, 9, 10, 11]. Although it has been shown that annotation of data The positions of the data samples are adjusted by an invisible force similar to magnetism: Samples of the same class are attracted to each other, while classes repel each other. This effect is also shown in figure 2 and makes annotation increasingly easier. The weights of the system for calculating the embeddings of the data samples are adjusted by a mathematical method called gradient descent: Samples of the same class produce similar embeddings, while classes are separated by a linear classifier using the annotations. \begin{table} \begin{tabular}{p{56.9pt} p{142.3pt} p{142.3pt}} \hline \hline State & User Perspective & Deep Learning Perspective \\ \hline Representation & Images from the dataset are represented as points in 3-dimensional virtual space. The positions of the points are not arbitrary, but show a topological organisation (e.g. curvy handwritten digits are distant from rectilinear ones, neighboring points tend to represent same digits, and the entire space is occupied). & Images from the dataset are embedded in 3-dimensional latent space of a neural network. The embeddings of the samples are not arbitrary, but shows a topological organisation (similar images produce similar embeddings, and global arrangement conforms to a prior distribution in latent space). & Images from the dataset are embedded in 3-dimensional latent space of a neural network. The embeddings of the samples are not arbitrary, but shows a topological organisation (similar images produce similar embeddings, and global arrangement conforms to a prior distribution in latent space). & User position in virtual space is used to compute 2-dimensional projections of 3-D embeddings without altering coordinate system. Manually annotated samples are associated to class labels. 
& User position in virtual space is used to compute 2-dimensional projections of 3-D embeddings without altering coordinate system. Manually annotated samples are associated to class labels. \\ \hline Interaction & Using hand gestures in VR (e.g. positioning a sphere around a group of data points), new data samples get annotated. & A larger fraction of data samples has associated class labels and can thus be used for supervised learning. \\ \hline Updating & After the labeling is done, the data points change their position in discrete time steps. Each discrete time step leads to a more accurate sorting of data points in virtual space. & After new annotations are available, the model is fine tuned on the partially annotated dataset. Each iteration of the learning procedure leads to a more structured representation of the data in latent space. \\ \hline \hline \end{tabular} \end{table} Table 1: Description of the four states representation, visualization, interaction and updating performed by the tool, presented from the user perspective and from the deep learning perspective. in VR has great potential in terms of time spent and cost, most projects prefer 2-dimensional interfaces [6, 14]. An example of an annotation tool in VR for labelling 3D point clouds is described in [14], and another for annotating industrial datasets using deep clustering is described in [5]. In order to make the workflow of the underlying model intuitive, and in line with the principles of direct manipulation [12], we use the metaphors of space, time, and force in VR to mediate interaction with representation and updating of the underlying neural network model. While the use of the metaphor of interface space for representing embeddings is present in previous works [5, 11], the metaphors of time and force for the gradient-descent-based learning of network parameters are novel to the best of our knowledge. As a consequence, our tool complements existing elaborations with topological organisation and dynamics that enable the annotation of multiple data samples simultaneously, thus potentially improving the efficiency of the annotation process. We are currently teaming up with domain experts in the fields of ecology and conservation sciences interested in automating sound event detection to continue co-development of the tool presented here. Our demo offers the opportunity for Figure 3: Basic scheme of the deep neural network architecture Figure 2: Illustration of the data labelling process. The schematic figures after 1 and 50 iterations are shown two-dimensionally and colour-coded for a better illustration of the clustering process. In our tool, the clusters are three-dimensional and only samples that are already annotated are coloured. exploring variants regarding user actions and sensory representations of relevant aspects of neural network design and updating. As next steps, we will run qualitative user studies to evaluate design alternatives such as integrating a dialog system, as well as different model architectures. We expect that IML tools such as the IUI illustrated here will pave the way for empowering end users in establishing a different, more transparent relation with DL, and accelerate the autonomous development of applications in novel areas.
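As a concrete reference for the three-module network of Section 3, the following is a minimal PyTorch sketch; the MNIST input shape and layer sizes are illustrative choices of ours, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class LatentSpaceModel(nn.Module):
    """Encoder/decoder (VAE) with a 3-D latent space plus a shallow classifier head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.mu = nn.Linear(256, 3)       # 3 latent units = 3 dimensions of virtual space
        self.logvar = nn.Linear(256, 3)
        self.decoder = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                                     nn.Linear(256, 28 * 28), nn.Sigmoid())
        self.classifier = nn.Linear(3, n_classes)  # shallow perceptron on the latent space

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), self.classifier(z), mu, logvar

# The 3-D coordinates shown in VR correspond to the latent code (z or mu) of each image.
```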
2307.03348
Chip-firing on graphs of groups
We define the Laplacian matrix and the Jacobian group of a finite graph of groups. We prove analogues of the matrix tree theorem and the class number formula for the order of the Jacobian of a graph of groups. Given a group $G$ acting on a graph $X$, we define natural pushforward and pullback maps between the Jacobian groups of $X$ and the quotient graph of groups $X/\!/G$. For the case $G=\mathbb{Z}/2\mathbb{Z}$, we also prove a combinatorial formula for the order of the kernel of the pushforward map.
Margaret Meyer, Dmitry Zakharov
2023-07-07T01:49:22Z
http://arxiv.org/abs/2307.03348v1
# Chip-firing on graphs of groups ###### Abstract. We define the Laplacian matrix and the Jacobian group of a finite graph of groups. We prove analogues of the matrix tree theorem and the class number formula for the order of the Jacobian of a graph of groups. Given a group \(G\) acting on a graph \(X\), we define natural pushforward and pullback maps between the Jacobian groups of \(X\) and the quotient graph of groups \(X/\!/G\). For the case \(G=\mathbb{Z}/2\mathbb{Z}\), we also prove a combinatorial formula for the order of the kernel of the pushforward map. ## 1. Introduction The theory of chip-firing on graphs is a purely combinatorial theory, having a remarkable similarity to divisor theory on algebraic curves. A divisor on a graph is an integer linear combination of its vertices, and two divisors are linearly equivalent if one is obtained from another by a sequence of chip-firing moves. The set of equivalence classes of degree zero divisors on a graph \(X\) is a finite abelian group, called the _Jacobian_\(\operatorname{Jac}(X)\) or the _critical group_ of \(X\). The similarity with algebraic geometry is not accidental: graphs record degeneration data of one-dimensional families of algebraic curves, and divisors on graphs represent discrete invariants of algebraic divisor classes under degeneration. Chip-firing on graphs is functorial with respect to a class of graph maps known as _harmonic morphisms_, which may be viewed as discrete analogues of finite maps of algebraic curves. Specifically, a harmonic morphism of graphs \(f:X\to Y\) defines natural pushforward and pullback maps \(f_{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(Y)\) and \(f^{*}:\operatorname{Jac}(Y)\to\operatorname{Jac}(X)\). Harmonic morphisms are characterized by a local degree assignment at the vertices of the source graph, and are a generalization of topological coverings, which have local degree one everywhere. A natural example of a topological covering, and hence of a harmonic morphism, is the quotient \(p:X\to X/G\) of a graph \(X\) by a free action of a group \(G\). The paper [14] thoroughly investigated the corresponding pushforward map \(p_{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(X/G)\), and found a combinatorial formula for the degree of the kernel in the case when \(G=\mathbb{Z}/2\mathbb{Z}\). If the action of \(G\) on \(X\) has nontrivial stabilizers, however, then \(p\) is not in general harmonic, and there is no relationship between \(\operatorname{Jac}(X)\) and \(\operatorname{Jac}(X/G)\). This raises the natural problem of redefining chip-firing on the quotient graph in a way that preserves functoriality. In this paper, we solve this problem using the theory of _graphs of groups_, also known as Bass-Serre theory (see [1] and [15]). Given a \(G\)-action on a graph \(X\), the _quotient graph of groups_\(X/\!/G\) consists of the quotient graph \(X/\!/G\) together with the data of the local stabilizers, and may be thought of as the stacky quotient of \(X\) by \(G\). We define the Laplacian matrix and the Jacobian group of a graph of groups by weighting the chip-firing map using the orders of the local stabilizers. We define natural pushforward and pullback maps \(p_{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(X/\!/G)\) and \(p^{*}:\operatorname{Jac}(X/\!/G)\to\operatorname{Jac}(X)\), and we investigate their properties. The paper is organized as follows. In Section 2, we recall the definitions of chip-firing for a graph, as well as harmonic morphisms of graphs and Bass-Serre theory. 
We define graphs and chip-firing in terms of _half-edges_ and introduce a detailed factorization of the graph Laplacian. This approach is notationally cumbersome but proves useful in Section 3, where we define chip-firing and the Jacobian group for a graph of groups. We prove two formulas for the order of the Jacobian of a graph of groups: Theorem 3.5, a weighted version of Kirchhoff's matrix tree theorem, and Theorem 3.6, which is a class number formula involving a hypothetical Ihara zeta function of a graph of groups. In Section 4, we consider a group \(G\) acting on a graph \(X\) and study the Jacobian of the quotient graph of groups \(X/\!/G\). We define natural pushforward and pullback maps between the Jacobians \(\operatorname{Jac}(X)\) and \(\operatorname{Jac}(X/\!/G)\). We compute the Jacobians of all group quotients of two graphs with large automorphism groups: the complete graph on four vertices and the Petersen graph. Finally, in Section 5 we specialize to the case \(G=\mathbb{Z}/2\mathbb{Z}\) and find a combinatorial formula for the order of the kernel of the pushforward map \(p_{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(X/\!/G)\), generalizing a result of Reiner and Tseng [14]. A natural question is to relate chip-firing on graphs of groups to algebraic geometry. A version of the chip-firing maps with edge weights (but trivial vertex weights) appears in [10] and [15], in the study of moduli spaces of curves with level structure. Curves with a \(G\)-cover with arbitrary group \(G\) are considered in [11]. It is natural to assume that chip-firing on graphs of groups should be related to the theory of line bundles on stacky curves. Investigating this connection, however, is beyond the scope of this paper. ## 2. Graphs with legs and graphs of groups We begin by recalling a number of standard definitions concerning graphs, group actions, divisor theory on graphs, harmonic morphisms, and graphs of groups. ### Graphs, morphisms, and group actions In Serre's definition (see [11]), the edges of a graph are the orbits of a fixed-point-free involution acting on a set of _half-edges_. When considering group actions on graphs, it is then necessary to require that the action not flip any edges of the graph. We can relax this constraint by allowing the involution on the set of half-edges to have fixed points. The resulting object is a _graph with legs_, where a leg is the result of folding an edge in half via an involution. Such objects have appeared before in the combinatorics literature (for example, see p. 60 in the paper [17], where they are called _half-arcs_). **Definition 2.1**.: A _graph with legs_ \(X\), or simply a _graph_, consists of the following data: 1. A set of _vertices_ \(V(X)\). 2. A set of _half-edges_ \(H(X)\). 3. A _root map_ \(r_{X}:H(X)\to V(X)\). 4. An involution \(\iota_{X}:H(X)\to H(X)\). The involution \(\iota_{X}\) partitions \(H(X)\) into orbits of size one and two. An orbit \(e=\{h,h^{\prime}\}\) of size two (so that \(\iota_{X}(h)=h^{\prime}\)) is an _edge_ with _root vertices_ \(r_{X}(h),r_{X}(h^{\prime})\in V(X)\), and the set of edges of \(X\) is denoted \(E(X)\). An edge whose root vertices coincide is called a _loop_. A fixed point of \(\iota_{X}\) is called a _leg_ and has a single root vertex \(r_{X}(h)\in V(X)\), and we denote the set of legs of \(X\) by \(L(X)\).
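The data in Definition 2.1 is easy to prototype. The following Python sketch is our own illustration (the example graph and all identifiers are ours, not from the paper): it stores a graph with legs as a root map and an involution on the half-edges, and classifies the orbits of the involution into edges, loops, and legs.

```python
# A minimal sketch of Definition 2.1: a graph with legs is given by vertices,
# half-edges, a root map r, and an involution iota on the half-edges.
# Example (hypothetical): two vertices u, v joined by an edge, plus a leg at u.
V = {"u", "v"}
H = {"a", "b", "c"}                    # half-edges
root = {"a": "u", "b": "v", "c": "u"}  # r_X : H -> V
iota = {"a": "b", "b": "a", "c": "c"}  # involution; "c" is a fixed point (a leg)

def classify(H, root, iota):
    """Split the half-edges into edges (orbits of size two) and legs (fixed points)."""
    edges, legs, seen = [], [], set()
    for h in H:
        if h in seen:
            continue
        h2 = iota[h]
        if h2 == h:
            legs.append(h)                    # a leg, rooted at root[h]
        else:
            edges.append(frozenset({h, h2}))  # an edge {h, iota(h)}
            seen.add(h2)
        seen.add(h)
    loops = [e for e in edges if len({root[h] for h in e}) == 1]
    return edges, legs, loops

edges, legs, loops = classify(H, root, iota)
print(edges)  # one edge {a, b}
print(legs)   # ['c']
print(loops)  # []
```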
The _tangent space_\(T_{v}X=r_{X}^{-1}(v)\) of a vertex \(v\in V(X)\) is the set of half-edges rooted at \(v\), and its _valency_ is \(\operatorname{val}(v)=|T_{v}X|\) (so a leg is counted once, while a loop is counted twice). An _orientation_ of an edge \(e=\{h,h^{\prime}\}\) is a choice of order \((h,h^{\prime})\) on the half-edges, and we call \(s(e)=r_{X}(h)\) and \(t(e)=r_{X}(h^{\prime})\) respectively the _initial_ and _terminal_ vertices of an oriented edge \(e\). An _orientation_\(\mathcal{O}\) on \(X\) is a choice of orientation for each edge (each leg has a unique orientation). We consider only finite connected graphs. **Definition 2.2**.: A _morphism of graphs_\(f:\widetilde{X}\to X\) is a pair of maps \(f:V(\widetilde{X})\to V(X)\) and \(f:H(\widetilde{X})\to H(X)\) (both denoted \(f\) by abuse of notation) that commute with the root and involution maps on \(\widetilde{X}\) and \(X\). Let \(f:\widetilde{X}\to X\) be a morphism of graphs. If \(l\in L(\widetilde{X})\) is a leg then \(\iota_{X}(f(l))=f(l_{\widetilde{X}}(l))=f(l)\), so \(f(l)\in L(X)\) is also a leg. On the other hand, if \(e=\{h,h^{\prime}\}\in E(\widetilde{X})\) is an edge, then either \(f(h)\neq f(h^{\prime})\), in which case \(f\) maps \(e\) to an edge \(f(e)=\{f(h),f(h^{\prime})\}\in E(X)\), or \(f(h)=f(h^{\prime})\in L(X)\) is a leg. In other words, edges can map to edges or fold to legs. However, we do not allow morphisms to contract edges or half-legs, in other words we consider only _finite_ morphisms. **Definition 2.3**.: Let \(X\) be a graph and let \(G\) be a group acting on the right on \(X\). In other words, each \(g\in G\) defines an automorphism of \(X\), which we denote \(x\mapsto xg\) for \(x\in V(X)\cup H(X)\), such that \(x(g_{1}g_{2})=(xg_{1})g_{2}\) for all \(x\in V(X)\cup H(X)\) and all \(g_{1},g_{2}\in G\). We define the vertices and half-edges of the _quotient graph_\(X/G\) as the \(G\)-orbits of \(V(X)\) and \(H(X)\): \[V(X/G)=V(X)/G=\{vG:v\in V(X)\},\quad H(X/G)=H(X)/G=\{hG:h\in H(X)\},\] and descending the root and involution maps: \[r_{X/G}(hG)=r_{X}(h)G,\quad\iota_{X/G}(hG)=\iota_{X}(h)G.\] The quotient projection \(p:X\to X/G\) sends each element of \(X\) to its orbit. Let \(h\in H(X)\) be a half-edge with orbit \(p(h)=hG\in H(X/G)\). If \(h\) is a leg, then \(\iota_{X/G}(hG)=\iota_{X}(h)G=hG\) so \(p(h)=hG\in L(X/G)\) is also a leg. If \(h\) belongs to an edge \(e=\{h,h^{\prime}\}\in E(X)\), then there are two possibilities. If \(h^{\prime}\neq hg\) for all \(g\in G\), then the orbits \(hG\) and \(h^{\prime}G\) are distinct half-edges of \(X/G\) forming an edge \(p(e)=\{hG,h^{\prime}G\}\in E(X/G)\). However, if \(h^{\prime}=hg\) for some \(g\in G\) (in other words, if the \(G\)-action _flips the edge_\(e\)), then \(p(e)=hG=h^{\prime}G\in L(X/G)\) is a leg. In Serre's original definition, the involution \(\iota_{X}\) on a graph \(X\) is required to be fixed-point-free, and hence the set \(H(X)\) of half-edges is partitioned into edges only. Relaxing this condition enables us to consider quotients by group actions that flip edges. We give a simple example below and two extended examples in Sections 4.2 and 4.3. **Example 2.4**.: Let \(X\) be the graph with two vertices joined by an edge. There is a unique nontrivial morphism \(f:X\to X\) exchanging the two vertices, so \(\operatorname{Aut}(X)\) is the cyclic group of order two. 
The quotient \(X/\operatorname{Aut}(X)\) is the graph having one leg at one vertex, and is in fact the terminal object in the category of graphs with legs, while no such object exists in the category of graphs. ### The graph Laplacian and chip-firing We now recall divisor theory on a graph \(X\). We follow the framework of the paper [14], which we reformulate in terms of half-edges. Specifically, we use a detailed factorization of the Laplacian which can be conveniently generalized to graphs of groups. A minor additional advantage is that we are never required to pick an orientation for the graph. For a set \(S\), we denote by \(\mathbb{Z}^{S}\) and \(\mathbb{Z}^{S}_{0}\) respectively the free abelian group on \(S\) and the subgroup consisting of elements whose coefficients sum to zero. The free abelian group \(\mathbb{Z}^{V(X)}\) is called the _divisor group_ of \(X\), and a _divisor_ \(D=\sum_{v\in V(X)}a_{v}v\) is interpreted as a distribution of \(a_{v}\) chips on each vertex \(v\). The root and involution maps \(r_{X}:H(X)\to V(X)\) and \(t_{X}:H(X)\to H(X)\) induce homomorphisms \[r_{X}:\mathbb{Z}^{H(X)}\to\mathbb{Z}^{V(X)},\quad t_{X}:\mathbb{Z}^{H(X)}\to \mathbb{Z}^{H(X)}\] on the corresponding free abelian groups (denoted by the same letters by abuse of notation). Let \(\tau_{X}\) denote the transpose of \(r_{X}\): \[\tau_{X}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{H(X)},\quad\tau_{X}(v)=\sum_{h\in T_{v}X}h. \tag{1}\] **Definition 2.5**.: The _Laplacian_ of a graph \(X\) is the homomorphism \(L_{X}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{V(X)}\) given by \[L_{X}=r_{X}\circ(Id-t_{X})\circ\tau_{X},\quad L_{X}(v)=\sum_{h\in T_{v}X}(v-r_{X}(t_{X}(h))). \tag{2}\] Figure 1 displays all the maps involved in defining the graph Laplacian. It is elementary to verify that \(\operatorname{Im}L_{X}\subset\mathbb{Z}_{0}^{V(X)}\), where \(\operatorname{Im}L_{X}\) is the subgroup of _principal divisors_ on \(X\), and in fact \(\mathbb{Z}_{0}^{V(X)}=\operatorname{Im}(r_{X}\circ(Id-t_{X}))\) if the graph \(X\) is connected. **Definition 2.6**.: The _Jacobian_ of a graph \(X\) is the quotient group \[\operatorname{Jac}(X)=\mathbb{Z}_{0}^{V(X)}/\operatorname{Im}L_{X}=\operatorname{Im}(r_{X}\circ(Id-t_{X}))/\operatorname{Im}L_{X}.\] The Jacobian \(\operatorname{Jac}(X)\) is also known as the _critical group_ of \(X\). Kirchhoff's matrix-tree theorem states that \(\operatorname{Jac}(X)\) is a finite group whose order is equal to the number of spanning trees of \(X\). Given a vertex \(v\in V(X)\), the divisor \(-L_{X}(v)\) is obtained by _firing the vertex_ \(v\), in other words by moving a chip from \(v\) along each half-edge \(h\in T_{v}X\) to the root vertex of \(t_{X}(h)\). Chips moved along legs and loops return to \(v\), hence legs and loops of \(X\) do not contribute to the Laplacian or the Jacobian group, and \(\operatorname{Jac}(X)\) is canonically isomorphic to the Jacobian of the graph obtained by removing all legs and loops. However, legs and loops naturally occur when taking quotients by group actions, so we nevertheless consider them. We give an explicit presentation for the matrix \(L\) of the graph Laplacian \(L_{X}\). Let \(n=|V(X)|\) and \(m=|E(X)|\) denote the number of vertices and edges, respectively. Then \(L=Q-A\), where \(Q\) and \(A\) are the \(n\times n\) _valency_ and _adjacency matrices_ of \(X\): \[L_{uv}=Q_{uv}-A_{uv},\quad Q_{uv}=\delta_{uv}\operatorname{val}(v),\quad A_{uv}=|\{h\in T_{v}X:r_{X}(t_{X}(h))=u\}|.\] These matrices have the following convenient factorizations.
Pick an orientation on \(X\) and define the \(n\times m\)_root matrices_ \[S_{ve}=\left\{\begin{array}{ll}1,&s(e)=v,\\ 0,&s(e)\neq v,\end{array}\right.,\quad T_{ve}=\left\{\begin{array}{ll}1,&t(e )=v,\\ 0,&t(e)\neq v.\end{array}\right. \tag{3}\] It is then easy to verify that \[Q=SS^{t}+TT^{t},\quad A=ST^{t}+TS^{t},\quad L=Q-A=(S-T)(S-T)^{t}.\] Figure 1. Factorization of the graph Laplacian. ### Harmonic morphisms of graphs Given a morphism of graphs \(f:\widetilde{X}\to X\), there is generally no relationship between \(Jac(\widetilde{X})\) and \(Jac(X)\). However, we can define functoriality with respect to a class of graph morphisms that admit a local degree function on the vertices of the source graph (see [10] and [1]). **Definition 2.7**.: A graph morphism \(f:\widetilde{X}\to X\) is called _harmonic_ if there exists a function \(d_{f}:V(\widetilde{X})\to\mathbb{Z}\), called the _local degree_, such that for any \(\widetilde{v}\in V(\widetilde{X})\) and any \(h\in T_{f(\widetilde{v})}X\) we have \[d_{f}(\widetilde{v})=\left|\left\{\widetilde{h}\in T_{\widetilde{v}}\widetilde {X}:f(\widetilde{h})=h\right\}\right|.\] For example, a covering space \(f:\widetilde{X}\to X\) (in the topological sense) is the same thing as a harmonic morphism with \(d_{f}(\widetilde{v})=1\) for all \(\widetilde{v}\in V(\widetilde{X})\). If \(X\) is connected, then any harmonic morphism \(f:\widetilde{X}\to X\) has a _global degree_ equal to \[\deg(f)=\sum_{\widetilde{v}\in f^{-1}(v)}d_{f}(\widetilde{v})=|f^{-1}(h)|\] for any \(v\in V(X)\) or any \(h\in H(X)\). In particular, any harmonic morphism to a connected graph is surjective (on the edges and the vertices). Let \(f:\widetilde{X}\to X\) be a harmonic morphism of graphs, and denote \[f_{*}:\mathbb{Z}^{V(\widetilde{X})}\to\mathbb{Z}^{V(X)},\quad f_{*}( \widetilde{v})=f(\widetilde{v}),\quad f_{*}:\mathbb{Z}^{H(\widetilde{X})}\to \mathbb{Z}^{H(X)},\quad f_{*}(\widetilde{h})=f(\widetilde{h})\] the induced homomorphisms on the free abelian groups. For any graph morphism (not necessarily harmonic) we have \[f_{*}\circ r_{\widetilde{X}}=r_{X}\circ f_{*},\quad f_{*}\circ t_{\widetilde{X }}=v_{X}\circ f_{*}.\] For any \(\widetilde{v}\in V(\widetilde{X})\) we have \[(f_{*}\circ\tau_{\widetilde{X}})(\widetilde{v})=d_{f}(\widetilde{v})(\tau_{X }\circ f_{*})(\widetilde{v}) \tag{4}\] by the harmonicity of \(f\), therefore \[(f_{*}\circ L_{\widetilde{X}})(\widetilde{v})=d_{f}(\widetilde{v})(L_{X} \circ f_{*})(\widetilde{v}).\] It follows that \(f_{*}(\operatorname{Im}L_{\widetilde{X}})\subset\operatorname{Im}L_{X}\) and the map \(f_{*}\) descends to a surjective _pushforward map_ \[f_{*}:Jac(\widetilde{X})\to Jac(X).\] Similarly, if \(X\) is connected, we define the maps \[f^{*}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{V(\widetilde{X})},\quad f^{*}(v)=\sum_{ \widetilde{v}\in f^{-1}(v)}d_{f}(\widetilde{v})\cdot\widetilde{v} \tag{5}\] Figure 2. Pushforward and pullback maps associated to a harmonic morphism. 
and \[f^{*}:\mathbb{Z}^{H(X)}\to\mathbb{Z}^{H(\widetilde{X})},\quad f^{*}(h)=\sum_{ \widetilde{h}\in f^{-1}(h)}\widetilde{h}.\] It is easy to verify that \[f^{*}(L_{X}(v))=\sum_{\widetilde{v}\in f^{-1}(v)}L_{\widetilde{X}}(\widetilde{v})\] for any \(v\in V(X)\), hence \(f^{*}(\operatorname{Prin}(X))\subset\operatorname{Prin}(\widetilde{X})\) and there is an induced _pullback map_ \[f^{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(\widehat{X}).\] The map \(f^{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(\widetilde{X})\) is injective (Theorem 4.7 in [1]), and the composition \(f_{*}\circ f^{*}\) acts by multiplication by \(\deg(f)\) on \(\operatorname{Jac}(X)\). Figure 2 displays all the maps associated to a harmonic morphism of graphs. ### Graphs of groups We now recall graphs of groups, which are the natural category for taking quotients of graphs by non-free group actions. We modify the definitions in [1] to allow graphs with legs (and thus quotients by group actions that flip edges). **Definition 2.8**.: A _graph of groups_\(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) consists of the following data: * A graph \(X\) (possibly with legs). * A group \(\mathcal{X}_{v}\) for each vertex \(v\in V(X)\). * A subgroup \(\mathcal{X}_{h}\subset\mathcal{X}_{r_{X}(h)}\) for each half-edge \(h\in H(X)\). * An isomorphism \(i_{h}:\mathcal{X}_{h}\to\mathcal{X}_{t_{X}(h)}\) for each edge \(\{h,t_{X}(h)\}\in E(X)\), where we assume that \(i_{t_{X}(h)}=i_{h}^{-1}\). Our definition differs slightly from the standard one [1], where one assumes that the two groups \(\mathcal{X}_{h}\) and \(\mathcal{X}_{t_{X}(h)}\) corresponding to an edge are the same, and instead records monomorphisms \(\mathcal{X}_{h}\to\mathcal{X}_{r(h)}\). The two approaches are equivalent in the case when there are no legs. We consider only finite graphs of groups, so that the underlying graph and all vertex groups are finite. We now define the quotient graph of groups by a right group action on a graph. The standard definition in [1] uses a trivialization with respect to a choice of spanning tree in the quotient graph and a lift of the tree to the source graph, and records the gluing data on the complementary edges (with respect to a choice of orientation). We find it more natural to instead trivialize the neighborhood of every vertex. **Definition 2.9**.: Let \(G\) be a group acting on the right on a graph \(\widetilde{X}\), let \(X=\widetilde{X}/G\) be the quotient graph, and let \(p:\widetilde{X}\to X\) be the quotient map. We define the _quotient graph of groups_\(\widetilde{X}/\!/G=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) on \(X\) as follows: 1. Choose a section \(\widetilde{(\cdot)}:V(X)\to V(\widetilde{X})\) of the map \(p:V(\widetilde{X})\to V(X)\). For each vertex \(v\in V(X)\), \(\mathcal{X}_{v}=G_{\widetilde{v}}=\{g\in G:\widetilde{v}g=\widetilde{v}\}\) is the stabilizer of the chosen preimage \(\widetilde{v}\in p^{-1}(v)\). 2. Choose a section \(\widetilde{(\cdot)}:H(X)\to H(\widetilde{X})\) of the map \(p:H(\widetilde{X})\to H(X)\) with the property that \(r_{\widetilde{X}}(\widetilde{h})=\widetilde{r_{X}(h)}\) for all \(h\in H(X)\). For each half-edge \(h\in H(X)\), \(\mathcal{X}_{h}=G_{\widetilde{h}}=\{g\in G:\widetilde{h}g=\widetilde{h}\}\) is the stabilizer the chosen preimage \(\widetilde{h}\in p^{-1}(h)\). It is clear that \(\mathcal{X}_{h}=G_{\widetilde{h}}\subset G_{r_{\widetilde{X}}(\widetilde{h})} =\mathcal{X}_{r_{X}(h)}\). 
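For explicit computations, all that is needed from Definition 2.9 is a set of orbit representatives together with the orders \(|\mathcal{X}_{v}|\) and \(|\mathcal{X}_{h}|\) of their stabilizers. The Python sketch below is our own illustration (not code from the paper): it models half-edges of a simple graph as ordered pairs, which is an assumption that only works in the absence of loops and multi-edges, and computes the stabilizer orders for the complete graph \(K_{4}\) acted on by the transposition exchanging two vertices.

```python
# Sketch of the data behind Definition 2.9 for K4 with G = {id, (ab)}.
# Half-edges of a simple graph are modelled as ordered pairs (root, other end).
vertices = ["a", "b", "c", "d"]
half_edges = [(u, w) for u in vertices for w in vertices if u != w]
identity = {v: v for v in vertices}
swap_ab = {"a": "b", "b": "a", "c": "c", "d": "d"}
G = [identity, swap_ab]

act_v = lambda v, g: g[v]                 # action on vertices
act_h = lambda h, g: (g[h[0]], g[h[1]])   # induced action on half-edges

def orbit(x, act):
    return frozenset(act(x, g) for g in G)

# Orbit-stabilizer: the stabilizer order is |G| divided by the orbit size.
c_vertex = {orbit(v, act_v): len(G) // len(orbit(v, act_v)) for v in vertices}
c_half_edge = {orbit(h, act_h): len(G) // len(orbit(h, act_h)) for h in half_edges}

for orb, c in c_vertex.items():
    print(sorted(orb), "stabilizer order", c)
# {'a','b'} has stabilizer order 1, while {'c'} and {'d'} have stabilizer order 2.
print(c_half_edge[orbit(("c", "d"), act_h)])  # 2: the edge cd is fixed by the swap
```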
For \(v\in V(X)\) and \(g\in G\) we denote \(\widehat{v}_{g}=\widehat{v}g\) (so that \(\widehat{v}_{1}=\widehat{v}\)); this identifies the fiber \(p^{-1}(v)=\{\widehat{v}_{g}:g\in G\}\) with the set \(\mathcal{X}_{v}\backslash G\) of right cosets of \(\mathcal{X}_{v}\) in \(G\). Similarly, given \(h\in H(X)\) and \(g\in G\) we denote \(\widehat{h}_{g}=\widehat{h}_{g}\) (so that \(\widetilde{h}_{1}=\widehat{h}\)), so that \(p^{-1}(h)=\{\widehat{h}_{g}:g\in G\}\) is identified with \(\mathcal{X}_{h}\backslash G\). Hence \[V(\widehat{X})=\coprod_{v\in V(X)}\mathcal{X}_{v}\backslash G,\quad H( \widehat{X})=\coprod_{h\in H(X)}\mathcal{X}_{h}\backslash G \tag{6}\] as sets, and under this identification the root and projection maps and the \(G\)-action are given by \[p(\widehat{v}_{g})=v,\quad p(\widehat{h}_{g})=h,\quad r_{\widehat{X}}( \widehat{h}_{g})=\widehat{r_{X}(h)}_{g},\quad\widehat{v}_{g}g^{\prime}= \widehat{v}_{gg^{\prime}},\quad\widehat{h}_{g}g^{\prime}=\widehat{h}_{gg}, \tag{7}\] for \(v\in V(X)\), \(h\in H(X)\), and \(g,g^{\prime}\in G\). Finally, let \(h\in H(X)\) be a half-edge. Applying the involution on \(\widehat{X}\) to \(\widetilde{h}\) gives a half-edge lying over \(h^{\prime}=v_{X}(h)\) (it may be that \(h^{\prime}=h\)). Therefore there exists an element \(\beta(h)\in G\), unique up to left multiplication by \(\mathcal{X}_{h^{\prime}}\), such that \(\iota_{\widehat{X}}(\widetilde{h})=\widetilde{h^{\prime}}_{\beta(h)}\). It follows that \[\iota_{\widehat{X}}(\widetilde{h}_{g})=\widetilde{\iota_{X}(h)}_{\beta(h)g} \tag{8}\] for all \(h\in H(X)\) and \(g\in G\). We observe that \(\mathcal{X}_{h^{\prime}}=\beta(h)\mathcal{X}_{h}\beta(h)^{-1}\). We can choose the \(\beta(h)\) so that \(\beta(h^{\prime})=\beta(h)^{-1}\) for all \(h\) (in general, they only satisfy \(\beta(h^{\prime})\beta(h)\in\mathcal{X}_{h}\)). The required isomorphism \(i_{h}:\mathcal{X}_{h}\to\mathcal{X}_{v_{X}(h)}\) is then given by conjugation by \(\beta(h)\). We can run the construction in reverse and recover the morphism \(p:\widehat{X}\to X\) together with the \(G\)-action on \(\widehat{X}\) from the quotient graph of groups \(X/\!/G\) together with the chosen elements \(\beta(h)\in G\) (in keeping with graph-theoretic terminology, we may call the \(\beta(h)\) a _generalized \(G\)-voltage assignment_ on \(X/\!/G\)). First of all, we assume that the vertex and half-edge groups are given not simply as abstract groups, but as subgroups of \(G\). Hence we can define \(\widehat{X}\) as a set by Equation (6). The root and projection maps are given by Equation (7), so that \(\widehat{X}\) is trivialized in the neighborhood of each vertex. Finally, the involution map is given by Equation (8) and defines how the tangent spaces of the vertices are glued to each other. We note that for an edge \(\{h,h^{\prime}\}\in E(X)\) we may choose \(\beta(h)\in G\) arbitrarily and then set \(\beta(h^{\prime})=\beta(h)^{-1}\), but for a leg \(h\in L(X)\) the element \(\beta(h)\in G\) must have order two (or be the identity), and furthermore must lie in the normalizer of \(\mathcal{X}_{h}\). The fiber \(p^{-1}(h)\) over the leg \(h\) consists of legs if \(\beta(h)\in\mathcal{X}_{h}\) (in which case we may as well have chosen \(\beta(h)=1\)) and edges if \(\beta(h)\notin\mathcal{X}_{h}\). Two generalized \(G\)-voltage assignments on \(X/\!/G\) are equivalent if they define isomorphic \(G\)-covers \(\widehat{X}\to X\). 
The set of equivalence classes of voltage assignments may be constructed as the first Čech cohomology set of an appropriate constructible sheaf of non-abelian groups on \(X\). This set was explicitly described in [10] for an abelian group \(G\) (in which case the set is also an abelian group), and the construction immediately generalizes to the non-abelian case. We leave the details to the interested reader. ## 3. The Laplacian and the Jacobian group of a graph of groups Let \(G\) be a finite group acting on a finite graph \(\widehat{X}\), and let \(p:\widehat{X}\to X=\widehat{X}/G\) be the quotient map. If the action of \(G\) is free, then \(p\) is a covering space and hence a harmonic morphism, and induces pushforward and pullback homomorphisms \(p_{*}:\operatorname{Jac}(\widehat{X})\to\operatorname{Jac}(X)\) and \(p^{*}:\operatorname{Jac}(X)\to\operatorname{Jac}(\widehat{X})\). However, for an arbitrary \(G\)-action there is no natural relationship between \(\operatorname{Jac}(\widehat{X})\) and \(\operatorname{Jac}(X)\). The solution is to replace \(X\) with the quotient graph of groups \(\mathbb{X}=\widehat{X}/\!/G\), and to define the chip-firing operation on \(\widehat{X}/\!/G\) in a way that takes into account the orders of the local stabilizers. We now describe this construction. ### Chip-firing on a graph of groups Let \(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) be a graph of groups, and let \(\mathbb{Z}^{V(X)}\) and \(\mathbb{Z}^{H(X)}\) be the free abelian groups on the vertices and half-edges of the underlying graph, respectively. As for graphs, we call \(\mathbb{Z}^{V(X)}\) the _divisor group_ of \(\mathbb{X}\), and interpret divisors as distributions of chips on the vertices of the underlying graph \(X\) (the chips are not weighted in any way). As before, the root and involution maps induce homomorphisms \[r_{X}:\mathbb{Z}^{H(X)}\to\mathbb{Z}^{V(X)},\quad t_{X}:\mathbb{Z}^{H(X)}\to\mathbb{Z}^{H(X)}.\] For \(v\in V(X)\) denote \(c(v)=|\mathcal{X}_{v}|\) the order of the local group at \(v\), and similarly for \(h\in H(X)\) denote \(c(h)=|\mathcal{X}_{h}|\). Given an edge \(e=\{h,h^{\prime}\}\in E(X)\), we denote \(c(e)=c(h)=c(h^{\prime})\). For each half-edge \(h\in H(X)\) rooted at \(v=r_{X}(h)\), there is an inclusion \(\mathcal{X}_{h}\subset\mathcal{X}_{v}\) of the local groups, hence \(c(h)\) divides \(c(v)\). We now define the weighted transpose of \(r_{X}\) by the formula \[\tau_{X}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{H(X)},\quad\tau_{X}(v)=\sum_{h\in T_{v}X}\frac{c(v)}{c(h)}h. \tag{9}\] **Definition 3.1**.: The _Laplacian_ of the graph of groups \(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) is the homomorphism \(L_{\mathbb{X}}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{V(X)}\) given by \[L_{\mathbb{X}}=r_{X}\circ(Id-t_{X})\circ\tau_{X},\quad L_{\mathbb{X}}(v)=\sum_{h\in T_{v}X}\frac{c(v)}{c(h)}(v-r_{X}(t_{X}(h))). \tag{10}\] Given a vertex \(v\in V(X)\), the divisor \(-L_{\mathbb{X}}(v)\) is the result of _firing_ the vertex \(v\). It is obtained by moving, along each edge \(e=\{h,h^{\prime}\}\) rooted at \(v\), a stack of \(c(v)/c(e)\) chips from \(v\) to the other root vertex of \(e\). As in the case of graphs, if \(h\) is a leg or belongs to a loop then \(r_{X}(h)=r_{X}(t_{X}(h))\), so loops and legs do not contribute to the Laplacian. However, the chip-firing operation is not symmetric: firing two adjacent vertices will in general cause them to exchange chips.
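To make the weighted firing rule of Definition 3.1 concrete, here is a small Python sketch (our own illustration, not code from the paper). The weighted triangle below is a hypothetical example, chosen so that each edge weight \(c(e)\) divides the weights of its root vertices, and the output illustrates the asymmetry noted above.

```python
from collections import Counter

# Sketch of the firing rule in Definition 3.1 / Equation (10).
# Hypothetical graph of groups: a triangle on u, v, w with vertex weights
# c(u) = 2, c(v) = c(w) = 1 and all edge weights equal to 1.
c_vertex = {"u": 2, "v": 1, "w": 1}
edges = [("u", "v", 1), ("u", "w", 1), ("v", "w", 1)]  # (endpoint, endpoint, c(e))

def fire(vertex):
    """Divisor of the firing move: the vertex sends c(v)/c(e) chips along each edge."""
    divisor = Counter()
    for a, b, ce in edges:
        if vertex in (a, b):
            other = b if vertex == a else a
            chips = c_vertex[vertex] // ce
            divisor[vertex] -= chips
            divisor[other] += chips
    return dict(divisor)

# Firing u sends 2 chips along each of its edges, while firing v sends only 1 back:
print(fire("u"))  # {'u': -4, 'v': 2, 'w': 2}
print(fire("v"))  # {'v': -2, 'u': 1, 'w': 1}
```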
As before, if \(X\) is connected then \(\mathbb{Z}_{0}^{V(X)}=\operatorname{Im}(r_{X}\circ(Id-t_{X}))\), so the group of _principal divisors_\(\operatorname{Im}L_{\mathbb{X}}\) lies in \(\mathbb{Z}_{0}^{V(X)}\). Hence we can define the Jacobian group of \(\mathbb{X}\) in the same way as for graphs: **Definition 3.2**.: The _Jacobian_ group of a graph of groups \(\mathbb{X}\) is the quotient group \[\operatorname{Jac}(\mathbb{X})=\mathbb{Z}_{0}^{V(X)}/\operatorname{Im}L_{ \mathbb{X}}.\] We give an explicit formula for the matrix \(L\) of the Laplacian \(L_{\mathbb{X}}\) of a graph of groups \(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\). Assume that \(X\) has no legs (this does not affect the Laplacian), and let \(n=|V(X)|\) and \(m=|E(X)|\) be the number of vertices and edges, respectively. Then \(L=Q-A\), where \(Q\) is the diagonal _valency matrix_ and \(A\) is the _adjacency matrix_ of the graph of groups \(\mathbb{X}\): \[L_{uv}=Q_{uv}-A_{uv},\quad Q_{uv}=\delta_{uv}\sum_{h\in T_{v}X}\frac{c(v)}{c(h) },\quad A_{uv}=\sum_{h\in T_{v}X:\,r_{X}(t_{X}(h))=u}\frac{c(v)}{c(h)}. \tag{11}\] We note that \(L\) and \(A\) are not symmetric in general. The Laplacian \(L\) is degenerate, specifically its rows sum to zero (but generally not the columns). Figure 3. Factorization of the Laplacian of a graph of groups. We introduce the following matrix factorizations. Let \(C_{V}\) and \(C_{E}\) be the respectively \(n\times n\) and \(m\times m\) diagonal matrices \[(C_{V})_{uv}=c(u)\delta_{uv},\quad(C_{E})_{ef}=c(e)\delta_{ef}\] recording the orders of the local groups. Let \(S\) and \(T\) be the root matrices (3) of \(X\), with respect to a choice of orientation. It is then elementary to verify that \[Q=SC_{E}^{-1}S^{t}C_{V}+TC_{E}^{-1}T^{t}C_{V},\quad A=SC_{E}^{-1}T^{t}C_{V}+TC_{ E}^{-1}S^{t}C_{V},\quad L=(S-T)C_{E}^{-1}(S-T)^{t}C_{V}. \tag{12}\] For future use, we also require the adjugate of the Laplacian. **Lemma 3.3**.: _The adjugate of the Laplacian matrix \(L\) of a graph of groups \(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) is equal to_ \[\operatorname{adj}(L)=C_{V}^{-1}J\xi.\] _Here \(J\) is the matrix whose entries are all equal to \(1\), and the constant \(\xi\) is equal to_ \[\xi=\prod_{v\in V(X)}c(v)\sum_{T\subset X}\prod_{e\in E(T)}c(e)^{-1},\] _where the sum is taken over all spanning trees \(T\) of \(X\)._ Proof.: The adjugate of the Laplacian \(L\) of an ordinary graph \(X\) is equal to \(J\cdot\kappa(X)\), where \(\kappa(X)=|\operatorname{Jac}(X)|\) is the number of spanning trees, and is computed by applying the Cauchy-Binet formula to the factorization \(L=(S-T)(S-T)^{t}\) (see, for example, Theorem 6.3 in [1]). Applying the same proof to the Laplacian of a graph of groups and using the factorization in Equation (12) gives the desired result. **Remark 3.4**.: We note that defining chip-firing on a graph of groups \(\mathbb{X}=(X,X_{v},X_{h})\) uses only the underlying graph and the orders \(c(v)=|\mathcal{X}_{v}|\) and \(c(h)=|\mathcal{X}_{h}|\) of the local groups. The structure of the groups is irrelevant, which is not surprising given that chip-firing is an abelian theory. In particular, given a group action of \(G\) on \(X\), the choices of the local stabilizers that are made when defining the quotient graph of groups \(X/G\) do not affect chip-firing. 
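Since \(\operatorname{Im}L_{\mathbb{X}}\subset\mathbb{Z}_{0}^{V(X)}\), the torsion of \(\mathbb{Z}^{V(X)}/\operatorname{Im}L_{\mathbb{X}}\) is isomorphic to the torsion of \(\operatorname{Jac}(\mathbb{X})\), so when the Jacobian is finite it can be read off from the Smith normal form of the matrix (11). The following sympy sketch is our own illustration (not code from the paper), reusing the hypothetical weighted triangle from the sketch above.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Laplacian matrix of the hypothetical weighted triangle (c(u) = 2, c(v) = c(w) = 1,
# all c(e) = 1), assembled entrywise from Equation (11) in the order u, v, w.
L = Matrix([
    [ 4, -1, -1],
    [-2,  2, -1],
    [-2, -1,  2],
])

snf = smith_normal_form(L, domain=ZZ)
print(snf)  # expected diagonal entries 1, 3, 0 (up to ordering)

# The nontrivial invariant factor 3 is the torsion of Z^V / Im(L), so for this
# example Jac of the graph of groups is cyclic of order 3.
```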
Furthermore, this definition of chip-firing makes sense for any graph whose vertices and edges are equipped with weights \(c(v)\) and \(c(e)\), with the condition that the weight of any edge divides the weights of its root vertices. The weights themselves need not be integers, so for example rescaling all weights by an arbitrary factor does not change the chip-firing map. This framework allows one to modify the edges and edge weights of a graph without changing the chip-firing map. For example, one may eliminate edge weights entirely by dividing all weights by a sufficiently large number such that each edge \(e\) has weight \(1/n(e)\) for some integer \(n(e)\), and then replacing each edge \(e\) with \(n(e)\) unweighted edges. Conversely, a set \(\{e_{1},\ldots,e_{n}\}\) of edges joining two vertices can be replaced by a single edge \(e\) with weight \(c(e)=(c(e_{1})^{-1}+\cdots+c(e_{n})^{-1})^{-1}\), so chip-firing on any weighted graph is equivalent to chip-firing on a simple graph (without multi-edges). Vertex weights, however, cannot be modified away. ### The order of the Jacobian via spanning trees We now compute the order of the Jacobian \(\operatorname{Jac}(\mathbb{X})=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) of a graph of groups \(\mathbb{X}\) in two different ways. The first formula generalizes Kirchhoff's theorem and computes \(\operatorname{Jac}(\mathbb{X})\) as a weighted sum over the spanning trees of \(X\). A similar formula for a graph with trivial vertex weights appears in Theorem 4.1 in [11]. **Theorem 3.5**.: _Let \(\mathbb{X}=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) be a graph of groups. For each vertex \(v\in V(X)\) and edge \(e=\{h,h^{\prime}\}\in E(X)\), let \(c(v)=|\mathcal{X}_{v}|\) and \(c(e)=|\mathcal{X}_{h}|=|\mathcal{X}_{h^{\prime}}|\) be the orders of the local groups. The order of the Jacobian of \(\mathbb{X}\) is equal to_ \[\big{|}\operatorname{Jac}(\mathbb{X})\big{|}=c_{v}^{-1}\prod_{v\in V(X)}c(v) \sum_{T\subset X}\prod_{e\in E(T)}c(e)^{-1},\] _where \(c_{v}\) is the least common multiple of the vertex weights \(c(v)\), and the sum is taken over all spanning trees \(T\) of \(X\)._ Proof.: Denote \(n=|V(X)|\) and \(m=|V(E)|\) and label the vertices of \(X\) as \(V(X)=\{v_{1},\ldots,v_{n}\}\). Fix an orientation on \(X\), then the matrix \(L\) of the Laplacian of \(\mathbb{X}\) admits the factorization (12) \[L=BC_{E}^{-1}B^{T}C_{V},\quad B=S-T,\] where \(C_{E}\) and \(C_{V}\) are diagonal matrices recording the \(c(e)\) and the \(c(v)\). Let \(L=[u_{1}\cdots u_{n}]\) denote the columns of \(L\), these vectors satisfy the relation \[\frac{u_{1}}{c(v_{1})}+\cdots+\frac{u_{n}}{c(v_{n})}=0. \tag{13}\] The matrix \(L\) defines the chip-firing map \(L:\mathbb{Z}^{n}\to\mathbb{Z}^{n}\), whose image lies in the kernel of the degree map \(\deg:\mathbb{Z}^{n}\to\mathbb{Z}\) (which sums the components). Fix the vertex \(v_{n}\) and let \(\mathbb{Z}^{n}\to\mathbb{Z}^{n-1}\) be the homomorphism that forgets the last coordinate; it is clear that it maps \(\operatorname{Ker}\deg\) isomorphically onto \(\mathbb{Z}^{n-1}\). The matrix of the composed map \(\mathbb{Z}^{n}\to\mathbb{Z}^{n}\to\mathbb{Z}^{n-1}\) is \(L^{\prime}=[u_{1}^{\prime}\cdots u_{n}^{\prime}]\), which is \(L\) with the last row removed. 
Then the Jacobian is \[\operatorname{Jac}(\mathbb{X})=\operatorname{Ker}\deg/\operatorname{Im}L= \mathbb{Z}^{n-1}/\operatorname{Im}L^{\prime}.\] Let \(\widetilde{L}=[u_{1}^{\prime}\cdots u_{n-1}^{\prime}]\) be the matrix obtained by removing the last column from \(L^{\prime}\), then \[\big{|}\operatorname{Jac}(\mathbb{X})\big{|}=\frac{\big{|}\mathbb{Z}^{n-1}/ \operatorname{Im}\widetilde{L}\big{|}}{\big{|}\operatorname{Im}L^{\prime}/ \operatorname{Im}\widetilde{L}\big{|}}. \tag{14}\] The group \(\operatorname{Im}L^{\prime}/\operatorname{Im}\widetilde{L}\) is the finite cyclic group generated by the vector \(u_{n}^{\prime}\) over the lattice \(\langle u_{1}^{\prime}\cdots u_{n-1}^{\prime}\rangle\). Clearing denominators in (13), we obtain the minimal relation between the \(u_{i}^{\prime}\): \[\frac{c_{v}}{c(v_{1})}u_{1}^{\prime}+\cdots+\frac{c_{v}}{c(v_{n})}u_{n}^{ \prime}=0.\] Hence the order of \(u_{n}^{\prime}\) and thus the denominator in (14) is equal to \[\big{|}\operatorname{Im}L^{\prime}/\operatorname{Im}\widetilde{L}\big{|}= \left|\frac{\langle u_{1}^{\prime}\cdots u_{n}^{\prime}\rangle}{\langle u_{1}^ {\prime}\cdots u_{n-1}^{\prime}\rangle}\right|=\frac{c_{v}}{c(v_{n})}. \tag{15}\] The numerator in (14) is the determinant of the \((n-1)\times(n-1)\) matrix \(\widetilde{L}\) obtained from \(L\) by deleting the last row and column. By Lemma 3.3, it is equal to \[\big{|}\mathbb{Z}^{n-1}/\operatorname{Im}\widetilde{L}\big{|}=\det\widetilde{ L}=\frac{1}{c(v_{n})}\xi=\prod_{i=1}^{n-1}c(v_{i})\sum_{T\subset X}\prod_{e\in E(T)}c(e)^{ -1}.\] Plugging the above two equations into (14), we obtain the result. ### The order of the Jacobian via the zeta function We give an alternative method for computing the order of the Jacobian group \(\operatorname{Jac}(\mathbb{X})\) of a graph of groups. Recall that the Ihara zeta function \(\zeta(u,X)\) of a graph \(X\) is an analogue of the Dedekind zeta function of a number field. It is defined as an Euler product over the primes of \(X\), which are equivalence classes of certain closed walks on \(X\). Unlike its arithmetic analogue, the Ihara zeta function \(\zeta(u,X)\) is the reciprocal of an explicit polynomial associated to \(X\). Specifically, let \(n=|V(X)|\) and \(m=|E(X)|\), and let \(Q\) and \(A\) be the \(n\times n\) valency and adjacenty matrices of \(X\), then Bass's three-term determinant formula (see [1] and [13]) states that \[\zeta(u,X)^{-1}=(1-u^{2})^{m-n}\det(I_{n}-Au+(Q-I_{n})u^{2}).\] The Ihara zeta function of a graph exhibits a number of remarkable similarities to the Dedekind zeta function. For example, it satisfies a graph-theoretic analogue of the class number formula, with \(\operatorname{Jac}(X)\) playing the role of the ideal class group. Specifically, at \(u=1\) the zeta function has a pole of order \(g=m-n+1\) (if \(g\geq 2\)) and its reciprocal has the following Taylor expansion (see [14]): \[\zeta(u,X)^{-1}=2^{g}(-1)^{g+1}(g-1)\big{|}\operatorname{Jac}(X)\big{|}\cdot(u -1)^{g}+O\left((u-1)^{g+1}\right). \tag{16}\] It is a natural problem to generalize closed walks and the Ihara zeta function to graphs of groups. In [15], the second author defined \(\zeta(u,X)\) for a graph of groups \(X\) having trivial edge groups and proved an analogue of Bass's three-term determinant formula for \(\zeta(u,X)\) (see Theorem 3.8 in [15]), and in upcoming work will extend these results to arbitrary graphs of groups. 
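Formula (16) is straightforward to check numerically. The following sympy sketch is our own illustration (not code from the paper): it verifies the formula for the complete graph \(K_{4}\), where \(g=3\) and \(|\operatorname{Jac}(K_{4})|=16\), so the predicted leading coefficient at \(u=1\) is \(2^{3}\cdot(-1)^{4}\cdot 2\cdot 16=256\).

```python
from sympy import Matrix, Poly, expand, eye, symbols

u, t = symbols("u t")

# Bass's three-term determinant formula for the Ihara zeta function of K4:
# zeta(u, K4)^(-1) = (1 - u^2)^(m - n) * det(I - A*u + (Q - I)*u^2).
n, m = 4, 6
A = Matrix(4, 4, lambda i, j: 0 if i == j else 1)  # adjacency matrix of K4
Q = 3 * eye(4)                                     # valency matrix (K4 is 3-regular)
zeta_inv = (1 - u**2) ** (m - n) * (eye(4) - A * u + (Q - eye(4)) * u**2).det()

# Expand around u = 1: the coefficients of (u - 1)^k for k < g should vanish,
# and the coefficient of (u - 1)^3 should equal 256.
poly = Poly(expand(zeta_inv.subs(u, t + 1)), t)
print([poly.coeff_monomial(t**k) for k in range(4)])  # expected [0, 0, 0, 256]
```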
It is natural to expect that the Ihara zeta function \(\zeta(u,X)\) of a graph of groups \(X\) computes the order of \(\operatorname{Jac}(X)\). We show that this is indeed the case, provided that \(\zeta(u,X)\) satisfies an analogue of Bass's three-term determinant formula (which it does in the edge-trivial case by Theorem 3.8 of [15]). **Theorem 3.6**.: _Let \(X=(X,X_{v},X_{h})\) be a finite graph of groups on a graph with \(n=|V(X)|\) vertices and \(m=|E(X)|\) edges. Define the Ihara zeta function of \(X\) by the formula_ \[\zeta(u,X)^{-1}=(1-u^{2})^{m-n}\det(I_{n}-Au+(Q-I_{n})u^{2}),\] _where \(Q\) and \(A\) are the valency and adjacency matrices (11) of \(X\). Then \(\zeta(u,X)^{-1}\) has a zero of order \(g=m-n+1\) at \(u=1\), and has leading coefficient_ \[\zeta(u,X)^{-1}=2^{g}(-1)^{g+1}c_{v}\left(\sum_{e\in E(X)}c(e)^{-1}-\sum_{v\in V (X)}c(v)^{-1}\right)\big{|}\operatorname{Jac}(X)\big{|}\cdot(u-1)^{g}+O\left( (u-1)^{g+1}\right),\] _where \(c_{v}\) is the least common multiple of the vertex weights \(c(v)\)._ Proof.: Plugging \(u=1\) into the determinant we get \[\det(I_{n}-A+(Q-I_{n}))=\det L=0,\] since the Laplacian is singular. The term \((1-u^{2})^{m-n}\) has a zero of order \(g-1\) at \(u=1\) with leading coefficient \(2^{g-1}(-1)^{g+1}\). Therefore \(\zeta(u,X)\) has a zero of order at least \(g\) at \(u=1\), and it is sufficient to show that \[\frac{d}{du}\det(I_{n}-Au+(Q-I_{n})u^{2})\bigg{|}_{u=1}=2c_{v}\big{|} \operatorname{Jac}(X)\big{|}\left(\sum_{e\in E(X)}c(e)^{-1}-\sum_{v\in V(X)}c (v)^{-1}\right).\] We follow the proof of Theorem 2.11 in [10]. Using Jacobi's formula, we have \[\frac{\mathrm{d}}{\mathrm{d}u}\det(I_{n}-Au+(Q-I_{n})u^{2})\bigg{|}_{u=1}=\tr\left[ \mathrm{adj}(I_{n}-Au+(Q-I_{n})u^{2})\frac{\mathrm{d}}{\mathrm{d}u}(I_{n}-Au+(Q- I_{n})u^{2})\right]\bigg{|}_{u=1}=\] \[=\tr\left[\mathrm{adj}(Q-A)\cdot(2Q-A-2I_{n})\right]=\tr\mathrm{adj}(L)\cdot Q -2\tr\mathrm{adj}(L), \tag{17}\] where we used that \(L=Q-A\) and therefore \[\mathrm{adj}(L)\cdot(Q-A)=\mathrm{adj}(L)\cdot L=\det L\cdot I_{n}=0.\] By Lemma 3.3 and Equation (12) we have \[\tr\mathrm{adj}(L)=\xi\tr(C_{V}^{-1}J)=\xi\tr(C_{V}^{-1})=\xi\sum_{v\in V(X)}c (v)^{-1},\] \[\tr\mathrm{adj}(L)\cdot Q=\xi\tr[C_{V}^{-1}J(SC_{E}^{-1}S^{t}C_{V}+TC_{E}^{-1} T^{t}C_{V})]=\xi\tr[J(SC_{E}^{-1}S^{t}+TC_{E}^{-1}T^{t})]=2\xi\sum_{e\in E(X)}c(e)^{-1},\] where \[\xi=\prod_{v\in V(X)}c(v)\sum_{T\subset X}\prod_{e\in E(T)}c(e)^{-1}=c_{v} \big{|}\Jac(\mathbb{X})\big{|}\] by Theorem 3.5. Plugging these into Equation (17), we obtain the desired result. ## 4. The Jacobian of a quotient graph of groups We now determine the relationship between the Jacobians \(\Jac(\widetilde{X})\) and \(\Jac(\mathbb{X})\), where \(\widetilde{X}\) is a graph with a right \(G\)-action and \(\mathbb{X}=X/\!/G=(X,\mathcal{X}_{v},\mathcal{X}_{h})\) is the quotient graph of groups. ### Pushforward and pullback to the quotient Let \(X=\widetilde{X}/G\) be the quotient graph, let \(p:\widetilde{X}\to X\) be the quotient map, and let \(c(v)=|\mathcal{X}_{v}|\) and \(c(h)=|\mathcal{X}_{h}|\) be the vertex and edge weights. We recall the description of \(\widetilde{X}\) in terms of \(\mathbb{X}\) and a voltage assignment \(\beta:H(X)\to G\) given in Section 2.4. 
Following Equation (6), we make the identifications \[\mathbb{Z}^{V(\widetilde{X})}=\bigoplus_{v\in V(X)}\mathbb{Z}^{\mathcal{X}_{ v}\setminus G},\quad\mathbb{Z}^{H(\widetilde{X})}=\bigoplus_{h\in H(X)} \mathbb{Z}^{\mathcal{X}_{h}\setminus G}, \tag{18}\] where the summands correspond to the fibers of \(p\). The generators of \(\mathbb{Z}^{\mathcal{X}_{v}\setminus G}\) are denoted \(\widetilde{v}_{g}\) for \(v\in V(X)\) and \(g\in G\), where \(\widetilde{v}_{g}=\widetilde{v}_{g^{\prime}}\) if and only if \(\mathcal{X}_{v}g=\mathcal{X}_{v}g^{\prime}\), and similarly for half-edges. It is elementary to verify that, in terms of these identifications, the maps \(\tau_{\widetilde{X}}\), \(t_{\widetilde{X}}\), and \(\tau_{\widetilde{X}}\) are given by the following formulas on the generators: \[r_{\widetilde{X}}(\widetilde{h}_{g})=\widehat{r_{X}(h)}_{g},\quad t_{ \widetilde{X}}(\widetilde{h}_{g})=\widehat{\iota_{X}(h)}_{\beta(h)g},\quad \tau_{\widetilde{X}}(\widetilde{v}_{g})=\sum_{h\in T_{V}X}\sum_{g^{\prime}\in \mathcal{X}_{h}\setminus\mathcal{X}_{v}}\widetilde{h}_{g^{\prime}g}. \tag{19}\] We note that the \(G\)-action on \(\widetilde{X}\) naturally defines right \(\mathbb{Z}G\)-module structures on \(\mathbb{Z}^{V(\widetilde{X})}\) and \(\mathbb{Z}^{H(\widetilde{X})}\), but we do not use this. The various homomorphisms between the free abelian groups associated to the quotient \(p:\widetilde{X}\to X\) are shown on Figure 4 (the objects in the top row are described in Section 4.4). We define the _pushforward_ homomorphisms \[p_{*}:\mathbb{Z}^{V(\widetilde{X})}\to\mathbb{Z}^{V(X)},\quad p_{*}:\mathbb{Z} ^{H(\widetilde{X})}\to\mathbb{Z}^{H(X)}\] on the generators by the formulas \[p_{*}(\widetilde{v}_{g})=v,\quad v\in V(X),\quad p_{*}(\widetilde{h}_{g})=h, \quad h\in H(X).\] We note that the formulas are the same as for a harmonic morphism, in other words, \(p_{*}\) simply adds up the chips in each fiber without any additional weights. **Proposition 4.1**.: _The pushforward homomorphism \(p_{*}:\mathbb{Z}^{V(\widetilde{X})}\to\mathbb{Z}^{V(X)}\) commutes with the Laplacians_ \[p_{*}\circ L_{X}=L_{X}\circ p_{*}\] _and defines a surjective homomorphism \(p_{*}:\operatorname{Jac}(\widetilde{X})\to\operatorname{Jac}(\mathbb{X})\)._ Proof.: The identities \[p_{*}\circ r_{\widetilde{X}}=r_{X}\circ p_{*},\quad p_{*}\circ t_{\widetilde{ X}}=t_{X}\circ p_{*}\] hold because \(p\) is a morphism of the underlying graphs (though not harmonic in general). It remains to see how \(p_{*}\) interacts with \(\tau_{\widetilde{X}}\) and \(\tau_{\mathbb{X}}\). Let \(\widetilde{v}_{g}\in V(\widetilde{X})\) be a vertex lying over \(p(\widetilde{v}_{g})=v\). By Equation (19), we have \[(p_{*}\circ\tau_{\widetilde{X}})(\widetilde{v}_{g})=p_{*}\left[\sum_{h\in T_{ v}X}\sum_{g^{\prime}\in X_{h}\setminus X_{v}}\widetilde{h}_{g^{\prime}g} \right]=\sum_{h\in T_{v}X}\sum_{g^{\prime}\in X_{h}\setminus X_{v}}h=\sum_{h \in T_{v}X}\frac{|\chi_{v}|}{|\chi_{h}|}h,\] which is exactly \[(\tau_{\mathbb{X}}\circ p_{*})(\widetilde{v}_{g})=\tau_{\mathbb{X}}(v)=\sum_{ h\in T_{v}X}\frac{c(v)}{c(h)}h.\] We therefore see that \[p_{*}\circ\tau_{\widetilde{X}}=\tau_{\mathbb{X}}\circ p_{*},\quad p_{*}\circ L _{X}=L_{X}\circ p_{*}, \tag{20}\] and hence \(p_{*}\) induces a homomorphism \(p_{*}:\operatorname{Jac}(\widetilde{X})\to\operatorname{Jac}(\mathbb{X})\), which is surjective because the original map \(p_{*}:\mathbb{Z}^{V(\widetilde{X})}\to\mathbb{Z}^{V(X)}\) is surjective. Figure 4. 
Pushforward and pullback maps associated to a quotient We also define a _pullback_ homomorphism as follows. Define homomorphisms \[p^{*}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{V(\widetilde{X})},\quad p^{*}:\mathbb{Z}^{H (X)}\to\mathbb{Z}^{H(\widetilde{X})}\] on the generators as follows: \[p^{*}(v)=c(v)\sum_{g\in\mathcal{X}_{v}\setminus G}\widetilde{v}_{g},\quad p^{* }(h)=c(h)\sum_{g\in\mathcal{X}_{h}\setminus G}\widetilde{h}_{g}. \tag{21}\] **Proposition 4.2**.: _The pullback homomorphism \(p^{*}:\mathbb{Z}^{V(X)}\to\mathbb{Z}^{V(\widetilde{X})}\) commutes with the Laplacians_ \[L_{\widetilde{X}}\circ p^{*}=p^{*}\circ L_{\mathbb{X}}\] _and defines a homomorphism \(p^{*}:\operatorname{Jac}(\mathbb{X})\to\operatorname{Jac}(\widetilde{X})\). Furthermore, the homomorphism \(p_{*}\circ p^{*}\) acts by multiplication by \(|G|\) on \(\operatorname{Jac}(\mathbb{X})\)._ Proof.: Let \(h\in H(X)\) be a half-edge rooted at \(v=r_{X}(h)\in V(X)\). Then \[(r_{\widetilde{X}}\circ p^{*})(h)=r_{\widetilde{X}}\left[|\mathcal{X}_{h}| \sum_{g\in\mathcal{X}_{h}\setminus G}\widetilde{h}_{g}\right]=|\mathcal{X}_{h }|\sum_{g\in\mathcal{X}_{h}\setminus G}\widetilde{v}_{g}=|\mathcal{X}_{h}| \sum_{g\in\mathcal{X}_{h}\setminus G}\frac{|\mathcal{X}_{v}|}{|\mathcal{X}_{h }|}\widetilde{v}_{g}=p^{*}(v)=(p^{*}\circ r_{X})(h),\] hence \(r_{\widetilde{X}}\circ p^{*}=p^{*}\circ r_{X}\). Similarly, \(t_{\widetilde{X}}\circ p^{*}=p^{*}\circ t_{X}\) because \(c(t_{X}(h))=c(h)\) for all \(h\in H(X)\). Finally, let \(v\in V(X)\), then by Equation (9) we have \[(p^{*}\circ\tau_{\widetilde{X}})(v)=p^{*}\left[\sum_{h\in T_{v}X}\frac{| \mathcal{X}_{v}|}{|\mathcal{X}_{h}|}h\right]=\sum_{h\in T_{v}X}\frac{| \mathcal{X}_{v}|}{|\mathcal{X}_{h}|}|\mathcal{X}_{h}|\sum_{g\in\mathcal{X}_{h }\setminus G}\widetilde{h}_{g}=|\mathcal{X}_{v}|\sum_{h\in T_{v}X}\sum_{g\in \mathcal{X}_{h}\setminus G}\widetilde{h}_{g},\] while by Equation (19) \[(\tau_{\widetilde{X}}\circ p^{*})(v)=\tau_{\widetilde{X}}\left[|\mathcal{X}_{v }|\sum_{g\in\mathcal{X}_{v}\setminus G}\widetilde{v}_{g},\right]=|\mathcal{X} _{v}|\sum_{g\in\mathcal{X}_{v}\setminus G}\sum_{h\in T_{v}X}\sum_{g^{\prime} \in\mathcal{X}_{h}\setminus\mathcal{X}_{v}}\widetilde{h}_{g^{\prime}g},\] and the two sums agree since each right \(\mathcal{X}_{v}\)-coset is naturally partitioned into \(\mathcal{X}_{h}\)-cosets. Therefore \(\tau_{\widetilde{X}}\circ p^{*}=p^{*}\circ\tau_{\mathbb{X}}\), and putting everything together we get \(L_{\widetilde{X}}\circ p^{*}=p^{*}\circ L_{\mathbb{X}}\). Hence the pullback map induces a homomorphism \(p^{*}:\operatorname{Jac}(\mathbb{X})\to\operatorname{Jac}(\widetilde{X})\), and \((p_{*}\circ p^{*})(v)=|G|v\) for any \(v\in V(X)\) by the orbit-stabilizer theorem. We note that, unlike the case of graphs, the pullback homomorphism \(p^{*}\) need not be injective. For example, let \(G\) act trivially on any graph \(X\), then \(\operatorname{Jac}(X/\!/G)=\operatorname{Jac}(X)\) and \(p^{*}:\operatorname{Jac}(X/\!/G)\to\operatorname{Jac}(X)\) acts by multiplication by \(|G|\), which is the trivial map if \(|G|\) is divisible by \(|\operatorname{Jac}(X)|\). **Remark 4.3**.: It is instructive to compare the pushforward \(p_{*}\) and pullback \(p^{*}\) homomorphisms associated to a \(G\)-cover \(p:\widetilde{X}\to X\) to those associated to a harmonic morphism \(f:\widetilde{X}\to X\). Comparing Equation (4) with (20), and similarly (5) with (21), we offer the following stack-theoretic interpretation of the morphisms \(p_{*}\) and \(p^{*}\). 
The map \(p\) views a vertex \(\widetilde{v}\in V(\widetilde{X})\) lying over \(v=p(\widetilde{v})\) as a set of \(c(v)\) indistinguishable vertices that have been identified by the \(G\)-action. The morphism \(p\) may then be viewed as a _harmonic morphism_ having local degree one at each of these identified vertices. This explains why no degree coefficient appears in Equation (20), in contrast to Equation (4). Similarly, the coefficient \(c(v)\) in Equation (21) should be viewed as a count of these identified vertices, and not as a local degree coefficient as in Equation (5). With this interpretation, \(p\) is a covering space map (in the stacky sense) of global degree \(|G|\). **Remark 4.4**.: More generally, one can define the notion of a _harmonic morphism of graphs of groups_\(f:\mathbb{X}\to\mathbb{Y}\) inducing pushforward and pullback homomorphisms \(f_{*}:\operatorname{Jac}(\mathbb{X})\to\operatorname{Jac}(\mathbb{Y})\) and \(f^{*}:\operatorname{Jac}(\mathbb{Y})\to\operatorname{Jac}(\mathbb{X})\). Such a map \(f\) is required to satisfy a balancing condition at vertices that takes the local weighs on both \(\mathbb{X}\) and \(\mathbb{Y}\) into account. A natural example is the subquotient map \(X/\!/H\to X/\!/G\) corresponding a subgroup \(H\subset G\) of a group \(G\) acting on a graph \(X\). We leave the details to the interested reader. ### Quotients of the tetrahedron As a simple example, we consider all interesting quotients of \(K_{4}\), the complete graph on \(4\) vertices. Denote \(V(K_{4})=\{a,b,c,d\}\). It is well-known that \[\operatorname{Jac}(K_{4})\simeq\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/4 \mathbb{Z}.\] Specifically, \(\operatorname{Jac}(K_{4})\) is generated by the classes of the divisors \[D_{a}=a-d,\quad D_{b}=b-d,\quad D_{c}=c-d\] subject to the relations \[4D_{a}=4D_{b}=4D_{c}=D_{a}+D_{b}+D_{c}=0.\] The automorphism group of \(K_{4}\) is \(S_{4}\), and we consider the quotients \(K_{4}/\!/G\) for all subgroups \(G\subset S_{4}\) that act non-transitively on the vertices (otherwise the quotient graph has a single vertex and its divisor theory is trivial). There are, up to conjugation, four such subgroups, which we enumerate below. The corresponding quotient graphs of groups are shown in Figure 5. Vertices are marked by bold dots, so a line segment with one end vertex represents a leg. Nontrivial stabilizers are labeled by their degree. 1. \(C_{2}\), the order \(2\) subgroup generated by \((ab)\). The valency, adjacency, and Laplacian matrices of \(K_{4}/\!/C_{2}\) are \[Q=\left(\begin{array}{ccc}3&0&0\\ 0&3&0\\ 0&0&3\end{array}\right),\quad A=\left(\begin{array}{ccc}1&2&2\\ 1&0&1\\ 1&1&0\end{array}\right),\quad L=\left(\begin{array}{ccc}2&-2&-2\\ -1&3&-1\\ -1&-1&3\end{array}\right)\,.\] Finding the Smith normal form of \(L\), we see that \(\operatorname{Jac}(K_{4}/\!/C_{2})\simeq\mathbb{Z}/4\mathbb{Z}\). In fact, the Jacobian is generated by the class of \(D=p_{*}(D_{a})=p_{*}(D_{b})\), and the pullback map is given by \(p^{*}(D)=D_{a}+D_{b}\). Figure 5. Quotients of \(K_{4}\) by non-vertex-transitive group actions. 2. \(C_{2,2}\), the order \(2\) subgroup generated by \((\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{ \text{ \text where we use the identification (18), and let \(i_{*}:V_{0}\to\mathbb{Z}^{V(\widetilde{X})}\) and \(i_{*}:H_{0}\to\mathbb{Z}^{H(\widetilde{X})}\) denote the canonical injections. 
It is elementary to verify that the maps \(r_{\widetilde{X}}\), \(t_{\widetilde{X}}\), and \(\tau_{\widetilde{X}}\) descend to maps (see Figure 4) \[\tau_{0}:H_{0}\to V_{0},\quad\iota_{0}:H_{0}\to H_{0},\quad\tau_{0}:V_{0}\to H_{ 0}.\] Following the terminology of [14], we introduce the following definitions: Figure 7. Quotients of the Petersen graph having non-trivial Jacobian. **Definition 4.5**.: The _voltage Laplacian_ of the cover \(p:\widetilde{X}\to X\) is the map \[L_{0}:V_{0}\to V_{0},\quad L_{0}=r_{0}\circ(\operatorname{Id}-\iota_{0})\circ \tau_{0}.\] The _voltage Jacobian_ of the cover \(p:\widetilde{X}\to X\) is the quotient \[\operatorname{Jac}_{0}=\operatorname{Im}(r_{0}\circ(\operatorname{Id}-\iota_{0 }))/\operatorname{Im}L_{0}.\] An elementary rank count shows that the lattices \(\operatorname{Im}(r_{0}\circ(\operatorname{Id}-\iota_{0}))\) and \(\operatorname{Im}L_{0}\) have full rank in \(V_{0}\). Therefore the voltage Laplacian is non-degenerate, unlike the case of a graph \(X\), where \(\operatorname{Im}L_{X}\) has full rank in \(\operatorname{Im}(r_{X}\circ(\operatorname{Id}-\iota_{X}))=\mathbb{Z}_{0}^{V (X)}\). However, \(r_{0}\circ(\operatorname{Id}-\iota_{0})\) is not generally surjective, and the quotients \(V_{0}/\operatorname{Im}L_{0}\) and \(\operatorname{Jac}_{0}\) need to be carefully distinguished. It is clear that \(\operatorname{Jac}_{0}\) embeds into the kernel of \(p_{*}:\operatorname{Jac}(\widetilde{X})\to\operatorname{Jac}(\mathbb{X})\). In fact, the two are isomorphic. **Proposition 4.6**.: _The natural inclusion map \(\operatorname{Jac}_{0}\to\operatorname{Ker}\left(p_{*}:\operatorname{Jac}( \widetilde{X})\to\operatorname{Jac}(\mathbb{X})\right)\) is an isomorphism, hence the voltage Jacobian fits into an exact sequence_ (23) _In particular, \(|\operatorname{Jac}_{0}|=|\operatorname{Jac}(\widetilde{X})|/|\operatorname{ Jac}(\mathbb{X})|\)._ Proof.: This result generalizes Theorem 1.1 in [14] to the case of non-free \(G\)-actions, and our proof is essentially a copy of their proof. First, we recall Proposition 2.2 from [14], which states that, given a diagram \(A\xrightleftharpoons[g]{f}B\) of abelian groups, the map \(f\) induces an isomorphism \[A/(\operatorname{Im}g+\operatorname{Ker}f)\simeq\operatorname{Im}f/ \operatorname{Im}(f\circ g).\] Hence, denoting \[\partial_{0}=r_{0}\circ(\operatorname{Id}-\iota_{0}),\quad\partial_{\widetilde {X}}=r_{\widehat{X}}\circ(\operatorname{Id}-\iota_{\widetilde{X}}),\quad \partial_{X}=r_{X}\circ(\operatorname{Id}-\iota_{X}),\] we instead work with the groups \[\operatorname{Jac}_{0}\simeq H_{0}/(\operatorname{Im}\tau_{0}+\operatorname{ Ker}\partial_{0}),\quad\operatorname{Jac}(\widetilde{X})\simeq\mathbb{Z}^{H( \widetilde{X})}/(\operatorname{Im}\tau_{\widetilde{X}}+\operatorname{Ker} \partial_{\widetilde{X}}),\quad\operatorname{Jac}(\mathbb{X})\simeq\mathbb{Z} ^{H(X)}/(\operatorname{Im}\tau_{\mathbb{X}}+\operatorname{Ker}\partial_{ \widetilde{X}}).\] Second, we replace each of the three finite abelian groups \(A=\operatorname{Jac}_{0},\operatorname{Jac}(\widetilde{X}),\operatorname{Jac} (\mathbb{X})\) with its Pontryagin dual \(A^{\vee}=\operatorname{Hom}(A,\mathbb{Q}/\mathbb{Z})\). The dual groups are isomorphic, but the arrows now point in the opposite direction: To show that \(\operatorname{Ker}p_{*}\simeq\operatorname{Jac}_{0}\), we instead show that \(\operatorname{Coker}p_{*}^{\vee}\simeq\operatorname{Jac}_{0}\). 
For each \(h\in H(X)\), the map \(p_{*}:\mathbb{Z}^{H(\widetilde{X})}\to\mathbb{Z}^{H(X)}\) sends the generator corresponding to each half-edge \(\widetilde{h}\in p^{-1}(h)=\mathcal{X}_{h}\backslash G\) to \(h\). Hence the Pontryagin dual \(p_{*}^{\vee}:\mathbb{Z}^{H(X)}\to\mathbb{Z}^{H(\widetilde{X})}\) sends \(h\in H(X)\) to the sum of the \(\widetilde{h}\) over all \(\widetilde{h}\in\mathcal{X}_{h}\backslash G\). It is therefore clear that \(\mathbb{Z}^{H(\widetilde{X})}/p_{*}^{\vee}(\mathbb{Z}^{H(X)})\simeq H_{0}\), and hence \[\operatorname{Coker}p_{*}^{\vee}=\mathbb{Z}^{H(\widetilde{X})}/(\operatorname{Im}\tau_{\widetilde{X}}+\operatorname{Ker}\partial_{\widetilde{X}}+p_{*}^{\vee}(\mathbb{Z}^{H(X)}))\simeq H_{0}/(\operatorname{Im}\tau_{0}+\operatorname{Ker}\partial_{0})=\operatorname{Jac}_{0}.\] **Remark 4.7**.: Let \(p:\widetilde{X}\to X\) be a free \(G\)-cover, in other words assume that the \(G\)-action on \(\widetilde{X}\) is free. By Equation (16), the orders of \(\operatorname{Jac}(\widetilde{X})\) and \(\operatorname{Jac}(X)\) can be computed from the Taylor expansions at \(u=1\) of the Ihara zeta functions \(\zeta(u,\widetilde{X})\) and \(\zeta(u,X)\). In fact, \(\zeta(u,X)\) divides \(\zeta(u,\widetilde{X})\), and the ratio is a product of the Artin-Ihara L-functions \(L(u,X,\rho)\) associated to the cover \(p:\widetilde{X}\to X\) corresponding to the nontrivial irreducible representations \(\rho\) of \(G\) (the \(L\)-function of the trivial representation is equal to \(\zeta(u,X)\), see [10] or [11]). Hence the order of \(\operatorname{Jac}_{0}\) can likewise be computed by looking at the \(u=1\) Taylor expansion of this product. Assuming that the Ihara zeta function of a graph of groups is defined and satisfies Bass's three-term determinant formula, Theorem 3.6 shows that the order of \(\operatorname{Jac}(\mathbb{X})\) can be computed from the Taylor expansion of \(\zeta(u,\mathbb{X})\) at \(u=1\). It is therefore natural to expect that \(\zeta(u,\widetilde{X})\) is equal to the product of the Artin-Ihara L-functions \(L(u,\mathbb{X},\rho)\) of the graph of groups \(\mathbb{X}\), suitably defined, where the product runs over the irreducible representations of \(G\) and where \(L(u,\mathbb{X},1)=\zeta(u,\mathbb{X})\). If this is the case, then \(|\operatorname{Jac}_{0}|=|\operatorname{Jac}(\widetilde{X})|/|\operatorname{Jac}(\mathbb{X})|\) can be found from the Taylor expansion of the product of the L-functions of the cover \(\widetilde{X}\to X\) associated to the nontrivial irreducible representations of \(G\). The project of defining the Ihara zeta function and the Artin-Ihara L-function of a graph of groups was carried out by the second author in [15] in the case when \(G\) acts with trivial stabilizers on the edges of \(\widetilde{X}\). In future work, the second author intends to complete this project and define these functions for arbitrary graphs of groups. ## 5. Double covers We now consider the group \(G=\mathbb{Z}/2\mathbb{Z}\) acting on a graph \(\widetilde{X}\). We call the quotient map \(p:\widetilde{X}\to X\) a _double cover_, and introduce some terminology borrowed from tropical geometry. Let \(v\in V(X)\) be a vertex. We say that \(v\) is _undilated_ if it has two preimages in \(\widetilde{X}\) exchanged by the involution, which we arbitrarily label \(p^{-1}(v)=\{\widetilde{v}^{\pm}\}\), and _dilated_ if it has a unique preimage, which we label \(p^{-1}(v)=\{\widetilde{v}\}\).
We similarly say that a half-edge \(h\in H(X)\) is _undilated_ if \(p^{-1}(h)=\{\widetilde{h}^{\pm}\}\) and _dilated_ if \(p^{-1}(h)=\{\widetilde{h}\}\). A dilated half-edge is rooted at a dilated vertex, so the set of dilated half-edges and vertices forms a subgraph \(X_{\operatorname{dil}}\subset X\), called the _dilation subgraph_. The root vertex \(v=r_{X}(h)\) of an undilated half-edge \(h\in H(X)\) may be dilated or undilated. In the latter case, we label the preimages in such a way that \(r_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{v}^{\pm}\), in other words a half-edge with a sign is rooted at either a vertex with the same sign or a vertex with no signs. Finally, we say that the double cover \(p:\widetilde{X}\to X\) is _free_ if \(X_{\operatorname{dil}}=\emptyset\) (in other words, if the \(\mathbb{Z}/2\mathbb{Z}\)-action is free) and _dilated_ otherwise.

We now construct the _free graph_ \(X_{\operatorname{fr}}\) corresponding to the double cover \(p:\widetilde{X}\to X\) as follows. The vertices of \(X_{\operatorname{fr}}\) are the undilated vertices of \(X\), so \(V(X_{\operatorname{fr}})=V(X)\backslash V(X_{\operatorname{dil}})\). The edges of \(X_{\operatorname{fr}}\) are the undilated edges of \(X\) both of whose root vertices are undilated. The legs of \(X_{\operatorname{fr}}\) come in two types. First, each undilated leg of \(X\) that is rooted at an undilated vertex is a leg of \(X_{\operatorname{fr}}\). Second, consider an edge \(e=\{h,h^{\prime}\}\in E(X)\) having an undilated root vertex \(r(h)=u\) and a dilated root vertex \(r(h^{\prime})=v\). For each such edge, we attach \(h\) to \(X_{\operatorname{fr}}\) as a _leg_ rooted at \(u\) (so that \(r_{X_{\operatorname{fr}}}(h)=r_{X}(h)=u\) as before but \(\iota_{X_{\operatorname{fr}}}(h)=h\) instead of \(\iota_{X}(h)=h^{\prime}\)). We call these _null legs_, in order to distinguish them from the legs coming from \(X\). In other words, \(X_{\operatorname{fr}}\) is obtained from \(X\) by removing \(X_{\operatorname{dil}}\), and turning each loose edge (having one root vertex on \(X_{\operatorname{fr}}\) and one missing root vertex) into a leg.

We now define a parity assignment \(\epsilon\) on the half-edges of \(X_{\operatorname{fr}}\) as follows:

1. Let \(e=\{h_{1},h_{2}\}\in E(X_{\mathrm{fr}})\) be an edge (having undilated root vertices, which may be the same). Our choice of labels for the preimages of the root vertices determines a labeling \(\widetilde{h}_{1}^{\pm}\), \(\widetilde{h}_{2}^{\pm}\) for the preimages of the half-edges. With respect to this choice, we define \[\epsilon(e)=\epsilon(h_{1})=\epsilon(h_{2})=\begin{cases}+1,&\iota_{\widetilde{X}}(\widetilde{h}_{1}^{\pm})=\widetilde{h}_{2}^{\pm},\\ -1,&\iota_{\widetilde{X}}(\widetilde{h}_{1}^{\pm})=\widetilde{h}_{2}^{\mp}.\end{cases}\] We say that \(e\) is _even_ if \(\epsilon(e)=1\) and _odd_ if \(\epsilon(e)=-1\).
2. Let \(l\in L(X_{\mathrm{fr}})\) be a leg. If \(l\) is a leg of \(X\) (in other words, if it is not a null leg), then \(p^{-1}(l)=\{\widetilde{l}^{\pm}\}\), and there are two possibilities: either \(\iota_{\widetilde{X}}(\widetilde{l}^{\pm})=\widetilde{l}^{\pm}\), so \(p^{-1}(l)\) is a pair of legs exchanged by the involution, or \(\iota_{\widetilde{X}}(\widetilde{l}^{\pm})=\widetilde{l}^{\mp}\), so \(e=\{\widetilde{l}^{+},\widetilde{l}^{-}\}\) is an edge folded by the involution.
We therefore set \[\epsilon(l)=\begin{cases}+1,&\iota_{\widetilde{X}}(\widetilde{l}^{\pm})=\widetilde{l}^{\pm},\\ -1,&\iota_{\widetilde{X}}(\widetilde{l}^{\pm})=\widetilde{l}^{\mp},\\ 0,&l\text{ is a null leg.}\end{cases}\] We say that a non-null leg \(l\) is _even_ if \(\epsilon(l)=1\) and _odd_ if \(\epsilon(l)=-1\).

The parity assignment \(\epsilon\) gives \(X_{\mathrm{fr}}\) the structure of a _signed graph_, and this construction already occurs in [22] for the case of free double covers (so null legs do not appear). The values of \(\epsilon\) on the edges depend on the labeling \(\widetilde{v}^{\pm}\) of the preimages of the undilated vertices. The cocycle \([\epsilon]\in H^{1}(X_{\mathrm{fr}},\mathbb{Z}/2\mathbb{Z})\) in the simplicial cohomology group, however, is well-defined. The leg parity assignment does not depend on any choices, and the cover \(p:\widetilde{X}\to X\) can be uniquely reconstructed from the choice of a dilation subgraph \(X_{\mathrm{dil}}\subset X\), an element \([\epsilon]\in H^{1}(X_{\mathrm{fr}},\mathbb{Z}/2\mathbb{Z})\) defining the edge parity, and a choice of leg parity.

### The voltage Laplacian of a double cover

We now compute the voltage Laplacian \(L_{0}\) and the voltage Jacobian \(\mathrm{Jac}_{0}\) of the double cover \(p:\widetilde{X}\to X\) in terms of the free graph \(X_{\mathrm{fr}}\). We introduce the following diagram: (24) Here \(r_{\mathrm{fr}}=r_{X_{\mathrm{fr}}}\) is the ordinary root map of \(X_{\mathrm{fr}}\) and \(\tau_{\mathrm{fr}}=\tau_{X_{\mathrm{fr}}}\) is its transpose (see Equation (1)). The involution, however, is twisted by the parity assignment:
\[\iota_{\mathrm{fr}}(h)=\epsilon(h)\iota_{X_{\mathrm{fr}}}(h). \tag{25}\]
In terms of the identification given by Equation (22), we have \(\mathbb{Z}_{0}^{\mathcal{X}_{v}\backslash G}=\mathbb{Z}(\widetilde{v}^{+}-\widetilde{v}^{-})\) for an undilated vertex \(v\in V(X_{\mathrm{fr}})\), while if \(v\) is dilated then \(\mathbb{Z}_{0}^{\mathcal{X}_{v}\backslash G}\) is trivial. Hence we can identify \(V_{0}\) with \(\mathbb{Z}^{V(X_{\mathrm{fr}})}\). Similarly, \(\mathbb{Z}_{0}^{\mathcal{X}_{h}\backslash G}=\mathbb{Z}(\widetilde{h}^{+}-\widetilde{h}^{-})\) if \(h\in H(X)\) is an undilated half-edge and is trivial otherwise. However, \(H_{0}\) is larger than \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\), since it has generators corresponding to undilated half-edges rooted at dilated vertices. These generators, however, do not appear in the image of \(r_{0}\), and hence we can compute the Laplacian \(L_{0}\) by restricting to \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\).

**Proposition 5.1**.: _Let \(\widetilde{X}\) be a graph with a \(\mathbb{Z}/2\mathbb{Z}\)-action, let \(p:\widetilde{X}\to X\) be the quotient map, let \(X_{\mathrm{fr}}\) be the free graph, and let \(\epsilon\) be the parity assignment on \(H(X_{\mathrm{fr}})\) defined above.
Under the identification of \(V_{0}\) with \(\mathbb{Z}^{V(X_{\mathrm{fr}})}\), the voltage Laplacian \(L_{0}:V_{0}\to V_{0}\) and the voltage Jacobian are equal to_
\[L_{0}=r_{\mathrm{fr}}\circ(\mathrm{Id}-\iota_{\mathrm{fr}})\circ\tau_{\mathrm{fr}},\quad\mathrm{Jac}_{0}=(\mathrm{Im}\,r_{\mathrm{fr}}\circ(\mathrm{Id}-\iota_{\mathrm{fr}}))/\mathrm{Im}\,L_{0}.\]
_The matrix of the voltage Laplacian \(L_{0}:V_{0}\to V_{0}\) is explicitly given by_
\[L_{0,uv}=\begin{cases}|\{\text{non-loop edges at }u\}|+4|\{\text{odd loops at }u\}|+2|\{\text{odd legs at }u\}|+|\{\text{null legs at }u\}|,&u=v,\\ |\{\text{odd edges between }u\text{ and }v\}|-|\{\text{even edges between }u\text{ and }v\}|,&u\neq v.\end{cases}\]

Proof.: By abuse of notation, for an undilated vertex \(v\in V(X_{\text{fr}})\) we denote \(v=\widetilde{v}^{+}-\widetilde{v}^{-}\) the corresponding generator of \(V_{0}\); this identifies the generators of \(\mathbb{Z}^{V(X_{\text{fr}})}\) and \(V_{0}\). Similarly, if \(h\in H(X)\backslash H(X_{\text{dil}})\) is an undilated half-edge we denote \(h=\widetilde{h}^{+}-\widetilde{h}^{-}\) the corresponding generator of \(H_{0}\). If \(r_{X}(h)\) is an undilated vertex then \(h\) is also a generator of \(\mathbb{Z}^{H(X_{\text{fr}})}\), so we view the latter as a subgroup of \(H_{0}\). It is clear that the maps \(\tau_{0}:V_{0}\to H_{0}\) and \(\tau_{\text{fr}}:\mathbb{Z}^{V(X_{\text{fr}})}\to\mathbb{Z}^{H(X_{\text{fr}})}\) agree under these identifications. Given an undilated half-edge \(h\in H(X)\backslash H(X_{\text{dil}})\) rooted at \(v=r_{X}(h)\), we have
\[r_{0}(\widetilde{h}^{+}-\widetilde{h}^{-})=\begin{cases}\widetilde{v}^{+}-\widetilde{v}^{-},&v\text{ is undilated},\\ 0,&v\text{ is dilated}.\end{cases}\]
Hence the restriction of \(r_{0}:H_{0}\to V_{0}\) to \(\mathbb{Z}^{H(X_{\text{fr}})}\) agrees with \(r_{\text{fr}}:\mathbb{Z}^{H(X_{\text{fr}})}\to\mathbb{Z}^{V(X_{\text{fr}})}\). Now let \(h\in H(X_{\text{fr}})\) be a half-edge rooted at an undilated vertex \(v=r_{\text{fr}}(h)\). We need to check that \(r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)\) agrees with \(r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-})\). There are several cases to consider.

1. \(h\) is part of an even edge \(e=\{h,h^{\prime}\}\in E(X_{\text{fr}})\), where the vertex \(v^{\prime}=r_{\text{fr}}(h^{\prime})\) is also undilated. Then \(\iota_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{h}^{\prime\pm}\), so \[r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)=r_{\text{fr}}(h-h^{\prime})=v-v^{\prime}=\widetilde{v}^{+}-\widetilde{v}^{-}-\widetilde{v}^{\prime+}+\widetilde{v}^{\prime-}=r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-}).\] The half-edge \(h\) contributes \(+1\) to \(L_{0,vv}\) and \(-1\) to \(L_{0,vv^{\prime}}\), and these contributions cancel if \(e\) is a loop.
2. \(h\) is part of an odd edge \(e=\{h,h^{\prime}\}\in E(X_{\text{fr}})\), where the vertex \(v^{\prime}=r_{\text{fr}}(h^{\prime})\) is also undilated. Then \(\iota_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{h}^{\prime\mp}\), so \[r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)=r_{\text{fr}}(h+h^{\prime})=v+v^{\prime}=\widetilde{v}^{+}-\widetilde{v}^{-}+\widetilde{v}^{\prime+}-\widetilde{v}^{\prime-}=r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-}).\] The half-edge \(h\) contributes \(+1\) to \(L_{0,vv}\) and \(+1\) to \(L_{0,vv^{\prime}}\).
If \(v=v^{\prime}\) (\(e\) is an odd loop), the total contribution from \(h\) and \(h^{\prime}\) to \(L_{0,vv}\) is equal to \(4\).
3. \(h\) is an even leg, then \(\iota_{\text{fr}}(h)=h\) and \(\iota_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{h}^{\pm}\) since \(\widetilde{h}^{\pm}\) are also legs. Thus \[r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)=0=r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-})\] and \(h\) does not contribute to the voltage Laplacian.
4. \(h\) is an odd leg and \(\widetilde{h}^{\pm}\) form an edge of \(\widetilde{X}\). Then \(\iota_{\text{fr}}(h)=-h\) and \(\iota_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{h}^{\mp}\), hence \[r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)=2r_{\text{fr}}(h)=2v=2\widetilde{v}^{+}-2\widetilde{v}^{-}=r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-})\] and \(h\) contributes \(+2\) to \(L_{0,vv}\).
5. \(h\) is a null leg corresponding to an edge \(e=\{h,h^{\prime}\}\in E(X)\) with dilated root vertex \(v^{\prime}=r_{X}(h^{\prime})\). Then \(\iota_{\text{fr}}(h)=0\) and we can assume that \(\iota_{\widetilde{X}}(\widetilde{h}^{\pm})=\widetilde{h}^{\prime\pm}\), so \[r_{\text{fr}}\circ(\operatorname{Id}-\iota_{\text{fr}})(h)=r_{\text{fr}}(h)=v=\widetilde{v}^{+}-\widetilde{v}^{-}=r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{+}-\widetilde{h}^{-})\] because \(r_{0}(\widetilde{h}^{\prime+}-\widetilde{h}^{\prime-})=0\). Hence \(h\) contributes \(+1\) to \(L_{0,vv}\).

It follows that \(L_{0}=r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})\circ\tau_{\mathrm{fr}}\), and to complete the proof it is sufficient to show that the image of \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\subset H_{0}\) under the map \(r_{0}\circ(\operatorname{Id}-\iota_{0})\) is equal to the image of all of \(H_{0}\). Let \(e=\{h,h^{\prime}\}\in E(X)\) be an undilated edge with undilated root vertex \(v=r_{X}(h)\) and dilated root vertex \(v^{\prime}=r_{X}(h^{\prime})\), then \(\widetilde{h}^{\prime+}-\widetilde{h}^{\prime-}\) is a generator of \(H_{0}\) but not of \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\). We verify that
\[r_{0}\circ(\operatorname{Id}-\iota_{0})(\widetilde{h}^{\prime+}-\widetilde{h}^{\prime-})=r_{0}(\widetilde{h}^{\prime+}-\widetilde{h}^{\prime-}-\widetilde{h}^{+}+\widetilde{h}^{-})=-\widetilde{v}^{+}+\widetilde{v}^{-}=-v=-r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})(h),\]
where \(h=\widetilde{h}^{+}-\widetilde{h}^{-}\) is a generator of \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\). Hence adding \(\widetilde{h}^{\prime+}-\widetilde{h}^{\prime-}\) as a generator to \(\mathbb{Z}^{H(X_{\mathrm{fr}})}\) does not increase the image.

We observe that the matrix of the voltage Laplacian \(L_{0}\) of the double cover \(p:\widetilde{X}\to X\) is obtained from the signed graph Laplacian of the free subgraph \(X_{\mathrm{fr}}\) (see Definition 9.4 in [14]) by adding the contributions from the null legs.

### Ogods and the order of the voltage Jacobian of a double cover

We now derive a combinatorial formula for the order of the voltage Jacobian of a double cover \(p:\widetilde{X}\to X\). To make our formula self-contained, we express it in terms of \(\widetilde{X}\) and \(X\), and not in terms of the auxiliary graph \(X_{\mathrm{fr}}\). The only terminology that we retain is that we distinguish _odd_ and _even_ undilated legs of \(X\): the preimage of the former is a single edge folded by the involution, while the preimage of the latter is a pair of legs.
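Before turning to the combinatorial formula, it may help to see the entrywise description from Proposition 5.1 in executable form. The following Python sketch assembles the matrix \(L_{0}\) from an encoding of the free graph that we choose purely for illustration (a vertex list, edges with parities \(\pm 1\), and per-vertex counts of odd and null legs); the encoding and the function name are ours, not notation from the paper.

```python
import numpy as np

def voltage_laplacian(vertices, edges, odd_legs, null_legs):
    """Assemble L_0 for the free graph X_fr (sketch following Proposition 5.1).

    vertices  -- list of undilated vertex labels
    edges     -- list of (u, v, parity) with parity +1 (even) or -1 (odd);
                 u == v encodes a loop
    odd_legs  -- dict vertex -> number of odd legs rooted there
    null_legs -- dict vertex -> number of null legs rooted there
    """
    idx = {v: i for i, v in enumerate(vertices)}
    L = np.zeros((len(vertices), len(vertices)), dtype=int)
    for u, v, parity in edges:
        i, j = idx[u], idx[v]
        if i == j:                 # loop: even loops contribute nothing
            if parity == -1:
                L[i, i] += 4       # an odd loop adds 4 to the diagonal
        else:
            L[i, i] += 1           # each non-loop edge adds 1 at both endpoints
            L[j, j] += 1
            L[i, j] += -parity     # off-diagonal entry is |odd| - |even|
            L[j, i] += -parity
    for v, k in odd_legs.items():
        L[idx[v], idx[v]] += 2 * k
    for v, k in null_legs.items():
        L[idx[v], idx[v]] += k
    return L
```

For a free double cover without dilated vertices there are no null legs, and the sketch reduces to the signed graph Laplacian mentioned above.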
The following paragraphs are expository, and the interested reader may skip directly to Definition 5.2 and Theorem 5.3. Kirchhoff's matrix tree theorem states that the order of the Jacobian of a connected graph \(X\) is equal to the number of spanning trees of \(X\), and a spanning tree of \(X\) may be characterized as a minimal connected subgraph containing all vertices of \(X\). Our goal is to define an analogous property for subgraphs of the target graph of a double cover. Let \(\widetilde{X}\) be a graph with a \(\mathbb{Z}/2\mathbb{Z}\)-action and let \(p:\widetilde{X}\to X\) be the corresponding double cover. We say that a (possibly disconnected) subgraph \(Y\subset X\) is _relatively connected_ if each connected component of \(Y\) has connected preimage in \(\widetilde{X}\). We now characterize connected subgraphs \(Y\subset X\) that are minimal with respect to this property, in other words we require that \(p^{-1}(Y)\) be connected but that the graph obtained from \(Y\) by removing any edge or leg (and retaining the root vertices) have a connected component with disconnected preimage in \(\widetilde{X}\). We make the following simple observations. 1. A connected subgraph \(Y\subset X\) having at least one dilated vertex is relatively connected. In particular, \(Y\) is not minimally relatively connected if it has at least one dilated edge or leg, since this edge or leg may be removed, or if it has at least two dilated vertices. Similarly, if \(Y\) has exactly one dilated vertex but is not a tree, then \(Y\) is not minimally relatively connected. 2. A relatively connected subgraph \(Y\subset X\) having at least one even leg is not minimally relatively connected, since the leg may be removed. 3. A connected subgraph \(Y\subset X\) having at least one odd leg \(l\in L(Y)\) is relatively connected, since the preimage edge \(e=p^{-1}(l)\) connects the (possibly disjoint) preimages of \(Y\backslash\{l\}\). The subgraph \(Y\) is not minimally relatively connected unless it is a tree. 4. Let \(Y\subset X\) be a subgraph containing no dilated vertices and no legs. By covering space theory, the restricted double cover \(p|_{p^{-1}(Y)}:p^{-1}(Y)\to Y\) corresponds to an element of \(\operatorname{Hom}(\pi_{1}(Y),\mathbb{Z}/2\mathbb{Z})=H^{1}(Y,\mathbb{Z}/2 \mathbb{Z})\). If \(Y\) is a tree then the cover is trivial and hence disconnected, so \(Y\) is not relatively connected. If \(Y\) has genus one (in other words, if it has a unique cycle), then \(H^{1}(Y,\mathbb{Z}/2\mathbb{Z})=\mathbb{Z}/2\mathbb{Z}\) and \(Y\) has two covers: the trivial disconnected one and the nontrivial connected one. In the latter case, it is clear that \(Y\) is minimally relatively connected, since removing any edge produces a tree. Finally, suppose that \(Y\) has genus at least two (in other words, it has at least two independent cycles) and \(p|_{p^{-1}(Y)}:p^{-1}(Y)\to Y\) is a nontrivial double cover. It is an easy exercise to show that \(Y\) is not minimally relatively connected, in other words there is an edge \(e\in E(Y)\) such that each connected component of \(Y\backslash\{e\}\) (there may be one or two) has connected preimage in \(\widetilde{X}\). We can therefore characterize minimal relatively connected subgraphs of \(X\) that contain all vertices of \(X\), which are the double cover analogues of spanning trees. One important difference is that these subsets now come with a weight assignment. 
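The relative-connectedness condition discussed in these observations can also be checked mechanically. The following Python sketch decides whether a connected subgraph \(Y\subset X\) has connected preimage in \(\widetilde{X}\), given the set of dilated vertices and the parities of the undilated edges and legs; the encoding and the function name are our own choices for illustration, not notation from the paper. A subgraph is relatively connected exactly when the test succeeds on each of its connected components.

```python
def preimage_is_connected(vertices, edges, legs, dilated, parity):
    """Check whether a connected subgraph Y of X has connected preimage (sketch).

    vertices -- the vertices of Y
    edges    -- list of (u, v, name) for the edges of Y
    legs     -- list of (v, name) for the legs of Y
    dilated  -- set of dilated vertices of X
    parity   -- dict name -> +1 (even) or -1 (odd) for undilated edges and legs
    """
    # One preimage node for a dilated vertex, two signed copies for an undilated one.
    nodes = {}
    for v in vertices:
        nodes[(v, +1)] = len(nodes)
        if v not in dilated:
            nodes[(v, -1)] = len(nodes)

    parent = list(range(len(nodes)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    def lift(v, s):  # signs collapse on dilated vertices, which have a single preimage
        return nodes[(v, +1)] if v in dilated else nodes[(v, s)]

    for u, v, name in edges:
        s = parity.get(name, +1)
        union(lift(u, +1), lift(v, s))      # even edges preserve the sign,
        union(lift(u, -1), lift(v, -s))     # odd edges swap it
    for v, name in legs:
        if v not in dilated and parity.get(name, +1) == -1:
            union(lift(v, +1), lift(v, -1)) # an odd leg lifts to an edge joining the two copies
        # even and dilated legs connect nothing in the preimage

    return len({find(i) for i in range(len(parent))}) == 1
```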
**Definition 5.2**.: Let \(\widetilde{X}\) be a graph with a \(\mathbb{Z}/2\mathbb{Z}\)-action and let \(p:\widetilde{X}\to X\) be the quotient map. An _ogod component_ \(Y\) of _weight_ \(w(Y)\) is a connected subgraph \(Y\subset X\) having no dilated edges, dilated legs, or even legs, and that is of one of the following three types:

1. \(Y\) is a tree having a unique dilated vertex, and no legs. We say that \(w(Y)=1\).
2. \(Y\) is a tree having no dilated vertices and a unique odd leg. We say that \(w(Y)=2\).
3. \(Y\) has no legs and a unique cycle, and \(p^{-1}(Y)\subset\widetilde{X}\) is connected. We say that \(w(Y)=4\).

Now let \(B\) be a set of \(n\) undilated edges and odd legs of \(X\), where \(n\) is the number of undilated vertices of \(X\). Let \(X|_{B}\) be the graph obtained from \(X\) by deleting all edges and legs not in \(B\), including all dilated edges and legs, and retaining all vertices, and let \(X_{1},\ldots,X_{k}\) be the connected components of \(X|_{B}\). We say that \(B\) is an _ogod_ if each of the \(X_{i}\) is an ogod component, and the _weight_ \(w(B)\) of the ogod is the product of the weights of the \(X_{i}\).

The term _ogod_ is an acronym for _odd genus one decomposition_: for a free double cover \(p:\widetilde{X}\to X\) without legs, the connected components \(X_{i}\) of an ogod are graphs of genus one such that the restricted covers \(p|_{p^{-1}(X_{i})}:p^{-1}(X_{i})\to X_{i}\) are given by the odd (nontrivial) elements of \(H^{1}(X_{i},\mathbb{Z}/2\mathbb{Z})\). This terminology was introduced by the second author in [11], who was unaware of the history of this definition going back to the seminal paper [10]. However, to the best of the authors' knowledge, there does not appear to be an established term describing such subsets in the combinatorics literature. We are now ready to state the analogue of Kirchhoff's matrix tree theorem for a dilated double cover \(p:\widetilde{X}\to X\), with ogods playing the role of spanning trees.

**Theorem 5.3**.: _Let \(\widetilde{X}\) be a graph with a non-free \(\mathbb{Z}/2\mathbb{Z}\)-action and let \(p:\widetilde{X}\to X\) be the quotient map. The order of the voltage Jacobian is equal to_
\[|\operatorname{Jac}_{0}|=\sum_{B}w(B), \tag{26}\]
_where the sum is taken over all ogods \(B\) of \(X\)._

For free double covers, this result already occurs in [10], and was explicitly interpreted as a formula for the order of the voltage Jacobian in [12]. It was subsequently independently derived by the second author in [11]. We note that for a free double cover there is an additional \(1/2\) coefficient in the right hand side of Equation (26).

Proof.: Let \(X_{\text{fr}}\) be the free graph, and let \(\epsilon\) be the parity assignment on \(H(X_{\text{fr}})\) defined above. By Proposition 5.1, we may compute the voltage Laplacian \(L_{0}=r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})\circ\tau_{\mathrm{fr}}\) and voltage Jacobian \(\operatorname{Jac}_{0}=(\operatorname{Im}r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}))/\operatorname{Im}L_{0}\) using the diagram (24) of \(X_{\mathrm{fr}}\). Let \(n=|V(X_{\mathrm{fr}})|\) and \(m=|H(X_{\mathrm{fr}})|\).
The \(n\times n\) matrix of the voltage Laplacian factors as \(L_{0}=DT\), where \(D\) is the \(n\times m\) matrix of \(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})\) and \(T\) is the \(m\times n\) matrix of \(\tau_{\mathrm{fr}}\):
\[D_{vh}=\begin{cases}+1,&r(h)=v\text{ and }h\text{ lies on a non-loop edge or is a null leg},\\ +1,&r(\iota(h))=v\text{ and }h\text{ lies on an odd non-loop},\\ -1,&r(\iota(h))=v\text{ and }h\text{ lies on an even non-loop},\\ +2,&r(h)=v\text{ and }h\text{ lies on an odd loop or is an odd leg},\\ 0,&\text{otherwise},\end{cases}\qquad T_{hv}=\begin{cases}+1,&v=r_{\mathrm{fr}}(h),\\ 0,&\text{otherwise}.\end{cases}\]
By the Cauchy-Binet formula,
\[\det L_{0}=\sum_{B\subset H(X_{\mathrm{fr}}),\,|B|=n}\det D|_{B}\det T|_{B}, \tag{27}\]
where we sum over all \(n\)-element subsets \(B\subset H(X_{\mathrm{fr}})\) of half-edges of \(X_{\mathrm{fr}}\) and where \(D|_{B}\) and \(T|_{B}\) are the matrices obtained from \(D\) and \(T\) by deleting respectively all columns and all rows except those indexed by \(B\). We make a number of simple observations:

1. \(\det D|_{B}=0\) if \(B\) contains a half-edge that lies on an even loop or is an even leg. Indeed, the corresponding column of \(D\) is zero.
2. \(\det D|_{B}=0\) if \(B\) contains both half-edges of a single edge \(e=\{h,h^{\prime}\}\). Indeed, the \(h\)- and \(h^{\prime}\)-columns of \(D\) are equal if \(e\) is odd and sum to zero if \(e\) is even. Hence we consider only those \(n\)-element subsets \(B\subset H(X_{\mathrm{fr}})\) that have at most one half-edge from each edge. We represent each such \(B\) as a choice of a total of \(n\) edges and legs, as well as an _orientation_ for each edge, in other words an arrow pointing in the direction of the chosen half-edge.
3. \(\det T|_{B}=0\) unless each half-edge in \(B\) is rooted at a distinct vertex of \(X_{\mathrm{fr}}\). Viewing \(B\) as a choice of oriented edges and legs, we require that each arrow point to a different vertex.

We now show that the nonzero contributions in Equation (27) come from ogods, and that the contribution from each ogod \(B\) is exactly \(w(B)\). Fix \(B\), and let \(X_{\mathrm{fr}}|_{B}\) be the subgraph of \(X_{\mathrm{fr}}\) obtained by deleting all edges and legs not in \(B\). Let \(X_{\mathrm{fr}}|_{B}=X_{1}\cup\cdots\cup X_{k}\) be the decomposition into connected components, and let \(B_{i}=H(X_{i})\cap B\) for \(i=1,\ldots,k\). The matrices \(D|_{B}\) and \(T|_{B}\) are block-diagonal with blocks corresponding to the \(X_{i}\), and a block-diagonal matrix has nonzero determinant only if each block is square, in other words if \(|B_{i}|=|V(X_{i})|\) for each \(i\). In other words, the product \(\det D|_{B}\det T|_{B}\) is nonzero only if each \(X_{i}\) is a connected oriented graph having as many edges and legs as vertices, with each leg and edge pointing to a distinct vertex. A moment's thought shows that there are only two possibilities for each \(X_{i}\):

1. \(X_{i}\) has a unique leg (odd or null but not even) and is a tree, and all edges are oriented away from the root vertex of the leg. Hence \(X_{i}\) is an ogod component of weight \(w(X_{i})=1\) if the leg is null and \(w(X_{i})=2\) if the leg is odd.
2. \(X_{i}\) has no legs and a unique cycle. The edges on the cycle are oriented cyclically, while the remaining edges (lying on trees attached to the cycle) are oriented away from the cycle.
Hence \(X_{i}\) is an ogod component of weight \(w(X_{i})=4\) if the preimage of the cycle is connected, which happens if an odd number of edges on the cycle are odd. If there is an even number of odd edges, then the preimage of the cycle is disconnected and \(X_{i}\) is not an ogod. It is now an elementary linear algebra exercise to show that the product \(\det D|_{B_{i}}\det T|_{B_{i}}\) equals 1 or 2 in the first case, depending on whether the unique leg is null or odd. Similarly, in the second case the product is equal to 2 if there is an odd number of odd edges along the cycle and zero if there is an even number. In this case, there are two contributions corresponding to the two possible choices of orientation along the cycle. Hence we see that the total contribution of \(\det D|_{B_{i}}\det T|_{B_{i}}\) from an ogod component \(X_{i}\) is equal to \(w(X_{i})\). Since weights and determinants are multiplicative in connected components, it follows that the contribution of each ogod \(B\) to the sum of the \(\det D|_{B}\det T|_{B}\) (taken over the possible choices of orientations) is equal to \(w(B)\). We have shown that \(\det L_{0}\) is equal to the right hand side of Equation (26).

To complete the proof, we show that the map \(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}):\mathbb{Z}^{H(X_{\mathrm{fr}})}\to\mathbb{Z}^{V(X_{\mathrm{fr}})}\) is surjective (this is in contrast to free double covers, where the image has index two). Again, we may pass to connected components and assume that \(X_{\mathrm{fr}}\) is connected. Since the double cover \(p:\widetilde{X}\to X\) is dilated, there is at least one dilated vertex \(v\in V(X)\backslash V(X_{\mathrm{fr}})\) connected by an undilated edge to an undilated vertex \(u\in V(X_{\mathrm{fr}})\). Let \(l\in L(X_{\mathrm{fr}})\) be the corresponding null leg rooted at \(u\). By the proof of Proposition 5.1 we have \(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})(l)=u\), so \(u\in\operatorname{Im}(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}))\). Now let \(e=\{h,h^{\prime}\}\in E(X_{\mathrm{fr}})\) be an edge rooted at \(r(h)=u\) and another vertex \(r(h^{\prime})=u^{\prime}\). Again by the proof of Proposition 5.1 we have \(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}})(h)=u\pm u^{\prime}\), and since \(u\in\operatorname{Im}(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}))\) we have \(u^{\prime}\in\operatorname{Im}(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}))\). Since \(X_{\mathrm{fr}}\) is connected, we may proceed in this way and show that \(w\in\operatorname{Im}(r_{\mathrm{fr}}\circ(\operatorname{Id}-\iota_{\mathrm{fr}}))\) for every generator \(w\) of \(\mathbb{Z}^{V(X_{\mathrm{fr}})}\). This completes the proof.

**Example 5.4**.: We consider the two \(\mathbb{Z}/2\mathbb{Z}\)-quotients of the Petersen graph \(P\) shown in Figure 7. We recall that \(\operatorname{Jac}(P)=\mathbb{Z}/2\mathbb{Z}\oplus(\mathbb{Z}/10\mathbb{Z})^{3}\) and thus \(|\operatorname{Jac}(P)|=2000\).
Taking the quotient by the order two subgroup \(G\subset\operatorname{Aut}(P)\) generated by \((ab)\), we obtain the top center graph \(P/G\). There are three undilated vertices \(ac\), \(ad\), and \(ae\) and six undilated edges that we denote \(E_{u}=\{e_{ac,de},e_{ad,cd},e_{ae,cd},e_{ac,ad},e_{ad,ae},e_{ac,ae}\}\). We consider the 20 three-element subsets of \(E_{u}\). If we remove the three edges of \(P/G\) incident to \(ac\), then the lone vertex \(ac\in V(P/G)\) has disconnected preimage \(p^{-1}(ac)=\{ac,bc\}\). Hence \(B=E_{u}\backslash\{e_{ac,de},e_{ac,ad},e_{ac,ae}\}\) is not an ogod, and the same is true for the analogous subsets corresponding to \(ad\) and \(ae\). The outside cycle \(B=\{e_{ac,ad},e_{ad,ae},e_{ae,ac}\}\) lifts to a closed loop in \(P\) and hence is an ogod of weight 4. For each of the 16 remaining 3-element subsets \(B\subset E_{u}\), every connected component of the graph \((P/G)|_{B}\) is a tree having a unique dilated vertex, hence \(B\) is an ogod of weight 1. Proposition 4.6 and Theorem 5.3 imply that
\[\frac{|\operatorname{Jac}(P)|}{|\operatorname{Jac}(P/\!/G)|}=|\operatorname{Jac}_{0}|=16\cdot 1+1\cdot 4=20.\]
This agrees with Figure 6, since \(\operatorname{Jac}(P/\!/G)=(\mathbb{Z}/10\mathbb{Z})^{2}\) and hence \(|\operatorname{Jac}(P/\!/G)|=100\).

We also consider the order two subgroup \(H\subset\operatorname{Aut}(P)\) generated by \((ab)(cd)\), the quotient graph for which is the top left graph in Figure 7. The graph \(P/\!/H\) has six undilated edges \(E_{u}=\{e_{ab,ce},e_{ac,ce},e_{ac,ae},e_{ad,ae},e_{ad,ce},e_{ae,cd}\}\) and two odd legs \(L=\{l_{ac},l_{ad}\}\). Out of the 70 4-element subsets of \(E_{u}\cup L\), there are 46 ogods in 15 symmetry classes. Figure 8 lists all ogods up to symmetry together with their weights. The total weight of all ogods is 100, so by Proposition 4.6 and Theorem 5.3 we have
\[\frac{|\operatorname{Jac}(P)|}{|\operatorname{Jac}(P/\!/H)|}=|\operatorname{Jac}_{0}|=100.\]
This agrees with Figure 6, since \(\operatorname{Jac}(P/\!/H)=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/10\mathbb{Z}\) and hence \(|\operatorname{Jac}(P/\!/H)|=20\).
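As a quick numerical cross-check of the first computation, the voltage Laplacian of \(P/G\) can be written down directly from Proposition 5.1: each undilated vertex meets two undilated edges between undilated vertices and one null leg, so every diagonal entry is \(3\), while the off-diagonal entries are \(\pm 1\) depending on the edge parities. Only the parity of the outer cycle is determined by the cover (it must contain an odd number of odd edges, since the cycle lifts to a closed loop); the representative labeling below, with all three cycle edges odd, is our own choice for illustration.

```python
import numpy as np

# Voltage Laplacian of P/G on the undilated vertices (ac, ad, ae), assuming a
# representative parity labeling in which all three outer-cycle edges are odd,
# so each off-diagonal entry is |odd| - |even| = +1.
L0 = np.array([
    [3, 1, 1],   # diagonal: 2 undilated edges + 1 null leg at each vertex
    [1, 3, 1],
    [1, 1, 3],
])

print(round(np.linalg.det(L0)))  # 20 = |Jac_0|, matching 16 * 1 + 1 * 4
```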
2306.13272
Spotting Hallucinations in Inverse Problems with Data-Driven Priors
Hallucinations are an inescapable consequence of solving inverse problems with deep neural networks. The expressiveness of recent generative models is the reason why they can yield results far superior to conventional regularizers; it can also lead to realistic-looking but incorrect features, potentially undermining the trust in important aspects of the reconstruction. We present a practical and computationally efficient method to determine, which regions in the solutions of inverse problems with data-driven priors are prone to hallucinations. By computing the diagonal elements of the Fisher information matrix of the likelihood and the data-driven prior separately, we can flag regions where the information is prior-dominated. Our diagnostic can directly be compared to the reconstructed solutions and enables users to decide if measurements in such regions are robust for their application. Our method scales linearly with the number of parameters and is thus applicable in high-dimensional settings, allowing it to be rolled out broadly for the large-volume data products of future wide-field surveys.
Matt L. Sampson, Peter Melchior
2023-06-23T02:55:24Z
http://arxiv.org/abs/2306.13272v1
# Spotting Hallucinations in Inverse Problems with Data-Driven Priors

###### Abstract

Hallucinations are an inescapable consequence of solving inverse problems with deep neural networks. The expressiveness of recent generative models is the reason why they can yield results far superior to conventional regularizers; it can also lead to realistic-looking but incorrect features, potentially undermining the trust in important aspects of the reconstruction. We present a practical and computationally efficient method to determine, which regions in the solutions of inverse problems with data-driven priors are prone to hallucinations. By computing the diagonal elements of the Fisher information matrix of the likelihood and the data-driven prior separately, we can flag regions where the information is prior-dominated. Our diagnostic can directly be compared to the reconstructed solutions and enables users to decide if measurements in such regions are robust for their application. Our method scales linearly with the number of parameters and is thus applicable in high-dimensional settings, allowing it to be rolled out broadly for the large-volume data products of future wide-field surveys.

Machine Learning, Inverse Problems, Bayesian Inference
hallucinations will stay with us. Here we ask the question: Can we find out where they happen?

## 2 Methods

We assume that we have access to differentiable models of the forward process \(f\), the likelihood \(\mathcal{L}\), and the prior \(\mathcal{P}\). First-order minimizers then perform gradient steps in the opposite direction of \(\nabla_{\mathbf{x}}L=\nabla_{\mathbf{x}}\log\mathcal{L}+\nabla_{\mathbf{x}}\log\mathcal{P}\). The curvature of \(L\), i.e. the Hessian matrix \(\mathbf{H}_{ij}=\frac{\partial^{2}L}{\partial x_{i}\partial x_{j}}\) is used in second-order methods to determine the step sizes. In statistics, the negative Hessian is called the Fisher information matrix \(\mathbf{F}=-\mathbf{H}\), so dubbed because it describes the statistical information about \(\mathbf{x}\) conveyed by \(L\).
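To make the setting concrete, here is a minimal JAX sketch of such an optimization for a toy inpainting and denoising problem. The masking operator, the noise level, and the Gaussian stand-in for a learned log-prior are our assumptions for illustration, not the models used in this work; the final lines compute the full Hessians of the two terms, which is exactly the quadratic-cost computation that the estimator described next avoids.

```python
import jax
import jax.numpy as jnp

# Toy inpainting/denoising setup (illustrative stand-ins, not the paper's models)
mask = jnp.ones((16, 16)).at[:, 8:].set(0.0)     # right half of the data is missing
sigma = 0.1                                      # assumed pixel noise level

def log_likelihood(x, y):
    return -0.5 * jnp.sum((mask * (y - x) / sigma) ** 2)

def log_prior(x):
    # Placeholder for a learned prior; a score model would supply grad(log_prior).
    return -0.5 * jnp.sum(x ** 2)

def log_post(x, y):
    return log_likelihood(x, y) + log_prior(x)

x_true = jax.random.normal(jax.random.PRNGKey(0), (16, 16))
y = x_true + sigma * jax.random.normal(jax.random.PRNGKey(1), x_true.shape)

# First-order optimization: ascend log L + log P (equivalently, step opposite to
# the gradient of the negative log posterior).
x = jnp.zeros_like(y)
for _ in range(200):
    x = x + 1e-2 * jax.grad(log_post)(x, y)

# Naive Fisher split via full Hessians of each term (F = -H); this scales
# quadratically with the number of pixels and is only feasible for tiny images.
flat = x.ravel()
H_like = jax.hessian(lambda z: log_likelihood(z.reshape(16, 16), y))(flat)
H_prior = jax.hessian(lambda z: log_prior(z.reshape(16, 16)))(flat)
F_like_diag = -jnp.diag(H_like)     # likelihood Fisher information (diagonal)
F_prior_diag = -jnp.diag(H_prior)   # prior Fisher information (diagonal)
```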
Equipped with \(\mathbf{F}\), we can recast our question: Where is \(\mathbf{F}\) dominated by the prior, as opposed to the likelihood? As long as the features of the solution are largely determined by \(\mathcal{L}\), there are by definition no hallucinations. We now seek to compute the Hessians of \(\log\mathcal{L}\) and \(\log\mathcal{P}\) at the minimizer of Equation 1 separately, or, if more convenient, the Hessians of \(L\) and one of the others, as the Fisher matrix is linear:
\[\mathbf{F}=\mathbf{F}_{\log\mathcal{L}}+\mathbf{F}_{\log\mathcal{P}}. \tag{2}\]
Unfortunately, calculating the full Hessian matrix is inefficient, with a scaling of \(\mathcal{O}(n^{2})\), where \(n\) is the dimension of \(\mathcal{X}\). This makes computing \(\mathbf{H}\) intractable for most practical applications. A full marginalization to get pixel-level uncertainties would additionally require a matrix inversion. But in imaging applications, the off-diagonal terms in the Hessian are typically small, and often confined to a few off-diagonal bands. We therefore make the simplification of computing the Hessian diagonal \(\mathbf{H}_{D}=\mathrm{Diag}(\mathbf{H})\). Doing so has two advantages: There is an efficient way to compute only the diagonals of the Hessian, which we describe below; and the result has the same shape as \(\mathbf{x}\), which means that it has the form of an image that can be compared to the solution to indicate regions dominated by the prior.

We follow the approach of Hutchinson (1989); Yao et al. (2020) to calculate the approximated diagonal Hessian. Instead of computing the full Hessian, we make use of the Hessian-vector product (HVP), which can be computed efficiently with automatic differentiation. From the product rule, we have the following equation for the HVP,
\[\frac{\partial\mathbf{g}^{T}\mathbf{z}}{\partial\mathbf{x}}=\frac{\partial\mathbf{g}^{T}}{\partial\mathbf{x}}\mathbf{z}+\mathbf{g}\frac{\partial\mathbf{z}}{\partial\mathbf{x}}=\frac{\partial\mathbf{g}^{T}}{\partial\mathbf{x}}\mathbf{z}=\mathbf{H}\cdot\mathbf{z} \tag{3}\]
where \(\mathbf{g}=\nabla_{x}L\), and \(\mathbf{z}\) is an arbitrary vector independent of \(\mathbf{x}\), hence \(\partial\mathbf{z}/\partial\mathbf{x}=0\). Equation 3 evidently requires only \(\mathcal{O}(n)\) operations, which is critical for higher-dimensional problems. To compute the diagonal approximation to the Hessian, we employ the method of Hutchinson (1989):
\[\mathbf{H}_{D}=\mathbb{E}\left(\mathbf{z}\odot(\mathbf{H}\cdot\mathbf{z})\right), \tag{4}\]
with \(\mathbf{z}\) being sampled from a Rademacher distribution. We show an implementation of the Hessian diagonal approximation in Algorithm 1, making use of gradient and Jacobian-vector product routines in JAX.

```
function HessianDiag(\(f,\mathbf{x},\epsilon\))
    g = jax.grad(\(f\))
    hvp(\(\mathbf{z}\)) = jax.jvp(g, \(\mathbf{x}\), \(\mathbf{z}\))
    H = jnp.zeros(\(\mathbf{x}\).shape)
    H' = jnp.zeros(\(\mathbf{x}\).shape)
    for \(i=0,1,2,\dots\) do
        \(\mathbf{z}\sim\mathrm{Rademacher}(\mathbf{x}.\mathrm{shape})\)
        H = H + (\(\mathbf{z}\odot\mathrm{hvp}(\mathbf{z})\))
        if \(i>0\) and \(\|\mathbf{H}/(i+1)-\mathbf{H}^{\prime}/i\|<\epsilon\|\mathbf{H}/(i+1)\|\) then
            return \(\mathbf{H}/(i+1)\)
        H' = H
```
**Algorithm 1** Hessian diagonal approximation

Finally, we produce our hallucination score
\[\Delta\mathbf{F}=\mathrm{Diag}(\mathbf{F}_{\log\mathcal{P}})-\mathrm{Diag}(\mathbf{F}_{\log\mathcal{L}}). \tag{5}\]
Equivalent forms, such as \(\Delta\mathbf{F}=2\,\mathrm{Diag}(\mathbf{F}_{\log\mathcal{P}})-\mathrm{Diag}(\mathbf{F})\) can be chosen as well, e.g.
when the Hessian of the log posterior has already been estimated during the optimization. The positive regions of \(\Delta\mathbf{F}\) are prior-dominated and are therefore prone to hallucinations. The user can then decide how much trust they should place in the reconstruction of features in these regions. An example from an inpainting problem in astronomy is shown as the image on the right-hand side of Figure 1.

## 3 Experiments

We demonstrate the capability of this method with a toy model of a galaxy reconstruction using the source deblending method scarlet (Melchior et al., 2018), which computes a differentiable likelihood and performs proximal gradient descent to enforce regularization. We replaced these regularizers by a score-based diffusion model, which directly learns \(\nabla_{\mathbf{x}}\log\mathcal{P}\) from training data, and acts as an informative prior for the galaxy morphology distribution. We implement all models in JAX (Bradbury et al., 2018) and equinox (Kidger and Garcia, 2021), with the diffusion model based on the implementation from Song et al. (2020).

### Data

The diffusion model was trained on data from the Subaru Hyper Suprime-Cam catalogue (Bosch et al., 2018). We extracted the existing scarlet models (each representing a single, isolated galaxy source) for three tracts, yielding about 600,000 examples. Doing so exploits the fact that these models have already been deblended and deconvolved and can therefore act as examples of the true distribution of galaxy shapes. We trained the diffusion model on a single NVIDIA A100 for \(1,000,000\) steps with a batch size of 256.

### Quality of the score model

We show a test of the score model and its utility for suppressing image features that are inconsistent with galaxy shapes. Figure 2 shows an input image with and without a ring-shaped artifact. The bottom row shows the corresponding prior score. While the original galaxy image has an overall low amplitude prior score without clear spatial structure, the artifact is strongly suppressed by the prior gradients. While we train a time-dependent score model as in Song et al. (2020), we evaluate the score at temperature \(T=0\) during the optimization and for computing the hallucination score. Longer runtimes of diffusion models would not allow us to scale the prior evaluations to the data volumes expected for the Vera C. Rubin Observatory (Ivezic et al., 2019). By comparing scores from the \(T=0\) limit with those from full diffusion, we have confirmed that our data distribution is simple enough that the former performs sufficiently well for our purposes. When targeting larger or better resolved galaxies, we will need to reinvestigate this approximation.

### Hallucination score

We assume we have a usable generative model of galaxy morphologies, which we now apply to solve an inverse problem to see where prior and likelihood dominate the reconstruction, respectively. We take a random sample of a galaxy observation from the HSC data and model it with scarlet2. For simplicity, we remove the right half of the data (array values and weights set to 0), as this will force the prior to generate the entirety of the features on this half of the reconstructed image. In the second trial, we perform the same reconstruction on the unaltered image. The **top row** of Figure 3 shows the results of trial 1, and the bottom row shows trial 2.
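For readers who want to experiment with this diagnostic, here is a minimal, self-contained JAX sketch of Algorithm 1 and Equation 5. The function names, the convergence tolerance, and the way Rademacher vectors are drawn are our choices, and the log-likelihood and log-prior passed in stand for the problem-specific models; it is a sketch, not the exact implementation used for the experiments.

```python
import jax
import jax.numpy as jnp

def hessian_diag(grad_f, x, key, eps=1e-3, max_iter=1000):
    """Hutchinson-style estimate of Diag(H) for the function whose gradient is grad_f
    (a sketch of Algorithm 1; names, tolerance, and sampling details are our choices).

    grad_f can be jax.grad of a scalar log-density, or a score network that
    returns the gradient directly.
    """
    hvp = lambda z: jax.jvp(grad_f, (x,), (z,))[1]   # Hessian-vector product, Eq. (3)
    acc = jnp.zeros_like(x)
    prev = jnp.zeros_like(x)
    est = acc
    for i in range(max_iter):
        key, sub = jax.random.split(key)
        # Rademacher draw: entries are +1 or -1 with equal probability
        z = jnp.where(jax.random.bernoulli(sub, 0.5, x.shape), 1.0, -1.0).astype(x.dtype)
        acc = acc + z * hvp(z)                       # z * (H z), Eq. (4)
        est = acc / (i + 1)
        if i > 0 and jnp.linalg.norm(est - prev) < eps * jnp.linalg.norm(est):
            break
        prev = est
    return est

def hallucination_score(log_like, log_prior, x, key):
    """Hallucination score Delta F of Eq. (5): positive entries are prior-dominated."""
    k1, k2 = jax.random.split(key)
    F_like = -hessian_diag(jax.grad(log_like), x, k1)    # Diag(F) = -Diag(H)
    F_prior = -hessian_diag(jax.grad(log_prior), x, k2)
    return F_prior - F_like
```

Applied at the converged solution with the problem's log-likelihood and learned log-prior, the two estimates correspond to the \(\mathbf{H}_{D}\) panels discussed below, and their difference is the score \(\Delta\mathbf{F}\).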
We can see from the reconstruction that the noise in the image is removed, and that by virtue of the prior we get a reasonable estimate of the right half of the galaxy shape as well. The next panels show the Hessian diagonals \(\mathbf{H}_{D}\) for \(\log\mathcal{L}\) and \(\log\mathcal{P}\), both calculated with Algorithm 1, allowing us to then compute the hallucination score from Equation 5. While the left side of \(\Delta\mathbf{F}\) shows that the information comes from the likelihood, the majority of the right side is dominated by the prior, as expected due to the absence of valid data. It is noteworthy that the hallucination score is much weaker in the central region of the right side than in the outskirts. This is likely attributable to the training data consisting primarily of relatively small galaxies, leading to a high confidence in the pixel values for the outskirts of the galaxy source: they are likely very close to 0. In the inner region, the score model is much less confident, so the hallucination score is closer to 0. We also note that for \(\mathbf{H}_{D}(\log\mathcal{P})\) we evaluate the Jacobian of the score model, which is in itself only an approximation of the true prior gradients; any inaccuracies of the score model will be amplified when computing another derivative. The bottom row of Figure 3 shows the same results run on the unaltered galaxy, where we now see that the likelihood dominates the central region with the highest signal-to-noise ratio, while the prior starts to dominate in the outskirts. The calculations for the Hessian diagonals in the example of Figure 3 took \(\approx\)1 ms for the likelihood, which converged after a single iteration, and \(\approx\)260 ms for the prior, which took on average 80 iterations to converge, resulting in roughly 3 ms per single HVP evaluation. With just-in-time compilation, the JVP of the score network is only a factor of 3 slower than the HVP of the simple Gaussian likelihood of this example. All timing tests were run on an M1 Macbook Pro, utilizing 4 CPU cores.

Figure 1: Overview of our methods to produce a reconstructed galaxy image and the corresponding hallucination score. The input image, of which half was masked to create an inpainting and denoising problem, is first modeled with scarlet by gradient descent of the likelihood \(\nabla_{\mathbf{x}}\log\mathcal{L}\) and a data-driven prior in the form of a score model \(\nabla_{\mathbf{x}}\log\mathcal{P}\). Next, we calculate the diagonalized Hessian matrices for both \(\log\mathcal{L}\) and \(\log\mathcal{P}\) via Algorithm 1. We then compute the hallucination score according to Equation 5.

Figure 2: Example galaxy sample from the HSC catalog (_top left_), with a ring artifact (_top right_), and the calculated prior score for both cases (_bottom row_). The artifact is strongly suppressed in the prior gradients.

## 4 Conclusion We presented a practical and computationally efficient method to determine which regions in the solutions of inverse problems with data-driven priors are prone to hallucinations. By computing the diagonal elements of the Fisher information matrix of the likelihood and the prior separately, we can flag regions where the information is prior-dominated. Our diagnostic can directly be compared to the reconstructed solutions and enables users to make informed decisions about the trustworthiness of relevant features in the reconstruction.
Our method scales linearly with the number of parameters and is thus scalable to high-dimensional settings, allowing it to be rolled out broadly for the large-volume data products of future wide-field surveys. The choice of Equation 5 as a hallucination metric has advantages over simpler diagnostics such as directly calculating the standard deviation of the posterior. Doing so cannot differentiate the source of the information that determines the optimized model. Alternatively, merely checking the gradients of the likelihood and prior will become meaningless once the model is converged because they need to be either very small or cancel each other. While our method does not make assumptions about the prior model, computing gradients of data-driven priors requires a high level of fidelity of that model. Caution should be taken to ensure that the prior model is of sufficient accuracy for this purpose.

Figure 3: In the first two panels we show the initial input image, for which the right half of the data has been set to 0, and the reconstructed image. Panels 3 and 4 show the Hessian diagonal for \(\log\mathcal{L}\) and \(\log\mathcal{P}\), respectively. We note that the features of \(\mathbf{H}_{D}(\log\mathcal{L})\) come directly from the variance weighting in the initial HSC data for the galaxy. In the rightmost panel, we show the hallucination score \(\Delta\mathbf{F}\) from Equation 5. The red shading indicates regions dominated by the prior, whereas the blue shading shows regions dominated by the likelihood.

## Software and Data We have used python (Van Rossum & Drake Jr, 1995) with the packages JAX (Bradbury et al., 2018), equinox (Kidger & Garcia, 2021), numpy (Harris et al., 2020), and matplotlib (Hunter, 2007). Data for this project is taken from the Subaru Hyper-Suprime Cam Survey (Bosch et al., 2018). ## Acknowledgements We thank the reviewers for their helpful comments in improving this manuscript. MLS acknowledges financial support from the Princeton University First-Year Fellowship in the Natural Sciences and Engineering.
2303.08932
Enhancing Data Space Semantic Interoperability through Machine Learning: a Visionary Perspective
Our vision paper outlines a plan to improve the future of semantic interoperability in data spaces through the application of machine learning. The use of data spaces, where data is exchanged among members in a self-regulated environment, is becoming increasingly popular. However, the current manual practices of managing metadata and vocabularies in these spaces are time-consuming, prone to errors, and may not meet the needs of all stakeholders. By leveraging the power of machine learning, we believe that semantic interoperability in data spaces can be significantly improved. This involves automatically generating and updating metadata, which results in a more flexible vocabulary that can accommodate the diverse terminologies used by different sub-communities. Our vision for the future of data spaces addresses the limitations of conventional data exchange and makes data more accessible and valuable for all members of the community.
Zeyd Boukhers, Christoph Lange, Oya Beyan
2023-03-15T20:57:31Z
http://arxiv.org/abs/2303.08932v1
# Enhancing Data Space Semantic Interoperability through Machine Learning: a Visionary Perspective ###### Abstract. Our vision paper outlines a plan to improve the future of semantic interoperability in data spaces through the application of machine learning. The use of data spaces, where data is exchanged among members in a self-regulated environment, is becoming increasingly popular. However, the current manual practices of managing metadata and vocabularies in these spaces are time-consuming, prone to errors, and may not meet the needs of all stakeholders. By leveraging the power of machine learning, we believe that semantic interoperability in data spaces can be significantly improved. This involves automatically generating and updating metadata, which results in a more flexible vocabulary that can accommodate the diverse terminologies used by different sub-communities. Our vision for the future of data spaces addresses the limitations of conventional data exchange and makes data more accessible and valuable for all members of the community. data spaces, semantic interoperability, machine learning + Footnote †: 2023 to improve semantic interoperability in Data Spaces, making it the go-to solution for data exchange. In this paper, we present our perspective on how machine learning solutions can be utilized to improve the semantic interoperability of a data space, using IDS as a concrete example. Although there are numerous aspects of semantic interoperability, we concentrate on six key challenges that are prevalent in data exchange. It is crucial to note that this paper does not make any definitive claims, but instead offers a comprehensive overview and framework for integrating machine learning solutions directly into data spaces. ## 2. Related Work The concept of Data Spaces has been in existence for several decades, but in recent times, it has gained significant attention, and considerable effort is being devoted to facilitating data exchange in today's data-driven ecosystem, such as International Data Spaces (IDS)1 and the Common European Data Spaces (Hernet and Web, 2017). So far, the primary focus in practical data spaces has been on legal, technical, and metadata interoperability, with little attention given to the semantic aspect of data, as only a few studies have been conducted in this area (Beng et al., 2017). This means that in terms of semantic interoperability, the current focus is on metadata only with the assumption that it exists. However, data semantic interoperability has been studied for decades. For example, Ouksel et al. (Ouksel et al., 2017) discussed the issue of finding accurate information in a complex, heterogeneous information system like the Internet and Web. They proposed a framework for interoperability that involves relating information to real-world entities and acknowledges the changing nature of semantics. More than one decade later, Kiljander et al. (Kiljander et al., 2017) discussed the need for common approaches to enable high-level interoperability between heterogeneous IoT devices to realize pervasive computing and IoT visions. It divides the interoperability challenge into two levels: connectivity and semantics. The connectivity level covers traditional Open System Interconnection (OSI) model layers from the physical to the transport layer. The semantic level covers technologies needed for enabling meaning-sharing between communicating parties. 
The authors stated that semantic level interoperability has been identified as a main goal in the Semantic Web and that semantic web technology can be used to represent knowledge about the physical world in IoT-related projects. Footnote 1: [https://www.fraunhofer.de/en/research/lighthouse-projects-fraunhofer-initiatives/international-data-spaces.html](https://www.fraunhofer.de/en/research/lighthouse-projects-fraunhofer-initiatives/international-data-spaces.html) Semantic interoperability in data exchange has been also addressed in specific domains. Lin et al. (Lin et al., 2014) evaluated the usage of Logical Observation Identifiers Names and Codes (LOINC) and its impact on the interoperability of laboratory data from different institutions that use LOINC codes. Heterogeneous data formats have been discovered among different institutions for the same laboratory tests using LOINC codes. After investigating the common problems that arise when aggregating such data, they suggest that more guidance on best practices in coding laboratory results is needed to achieve greater interoperability. ## 3. ML-Enhanced Data Spaces Semantic interoperability in data spaces is a complex issue that involves multiple aspects, as illustrated in Figure 1. While machine learning has the potential to improve each of these aspects, traditional approaches have primarily utilized machine learning techniques in isolation, rather than within the broader context of data spaces. It is vital to consider the full spectrum of semantic interoperability aspects and integrate machine learning in a comprehensive and holistic manner within the data space environment. Figure 2 presents an overview of the ML-enhanced data space in the International Data Spaces environment, showcasing six key aspects of data management among three stakeholders, including data providers and consumers and service providers. These aspects are: * **Automatic Metadata Extraction (1)**: A machine learning model can automatically extract essential attribute values from the data if metadata is not already available, helping data providers to prepare their data for exchange and consumption without the need for manual metadata preparation. * **Ontology and Vocabulary Alignment (2)**: The vocabulary of the data space is aligned with the vocabulary of the data provider, enabling data consumers to understand the data being exchanged. This eliminates the need for members in the data space to adopt the same internal vocabulary, which can often be a challenging task. * **FAIRness Evaluation (3)**: The FAIRness level of the data is assessed based on provided or extracted metadata, allowing the data provider to improve the FAIRness of their data and allowing the data consumer to understand the ease of use of the data. Figure 1. Semantic interoperability aspects in data spaces that machine learning can enhance * **Data Quality Assessment & Enhancement (4)**: The quality of the data is evaluated and improved if possible, based on the format of the data. Machine learning can be used to evaluate and enhance structured and tabular data, however, it's important to recognize that the quality metrics may vary depending on the format of the data. For example, it might be challenging to assess the quality of unstructured data (e.g., a corpus of documents). * **Privacy Preserving (5)**: ML-based anonymization and masking techniques can be applied to data that contains private, sensitive, or personal information to make it shareable. 
Sensitive data can be automatically detected or provided by the data provider, allowing data providers to share their data without any privacy concerns. * **Compatibility Improvement (6)**: The data is transformed into a readable format for the data consumer. In cases where data is being merged with the consumer's data, the consumer will communicate the structure and format, enabling the data to be transformed accordingly. This allows the consumer to make use of the received data without having to put in additional effort to read and understand it. In the following, we discuss each of these aspects: ### Automatic Metadata Extraction Metadata plays a vital role in data exchange as it enables data consumers to understand the data and determine if it meets their needs. However, many data providers may be hesitant to provide the necessary metadata due to a lack of capacity or knowledge to prepare it for their resources. This can be a significant obstacle in data exchange, as it limits the ability of consumers to access and utilize the data they need. To overcome this challenge, machine learning can be leveraged to (semi) automatically extract metadata from resources. Machine learning algorithms can be trained on a dataset of resources and their corresponding metadata, allowing them to learn the patterns and relationships between the data and the metadata. These algorithms can then be applied to new resources to extract the relevant metadata. This approach has the advantage of being able to handle complex and nuanced relationships between the data and the metadata. It can also be easily updated and adapted as the data and its needs evolve. However, it is important to note that a typical challenge in data spaces is that the resources have different, heterogeneous formats. Different resources being exchanged in data spaces can have varying metadata properties, and it may be necessary to utilize different machine learning (ML) models for different resources and metadata attributes. For instance, in the case of document corpora, Natural Language Processing (NLP) techniques can be employed to extract titles and descriptions. Specifically, automatic metadata extraction techniques such as those in (Bartos et al., 2015; Krizhevsky et al., 2015) can be utilized to extract metadata from each document, such as _Publication Date, Author, Language_, etc. This metadata can then be used to derive the metadata for the entire collection, such as _Publication Range, Authors, Languages_, etc. ### Ontology and Vocabulary Alignment The International Data Spaces Reference Architecture4 highlights the importance of common vocabularies for effective data exchange within a data space. However, in practice, data providers may have their own unique vocabularies, making it difficult to align them with the vocabulary used in the data space. This can be due to the cost and effort involved in mapping their existing vocabularies to the data space vocabulary, or due to the fact that a data provider may participate in multiple data spaces with different vocabularies. Footnote 4: [https://internationaldatabases.org/use/reference-architecture/](https://internationaldatabases.org/use/reference-architecture/) To tackle these challenges, machine learning algorithms can be utilized to support automatic mapping between the local vocabulary of a data provider and the vocabulary used in the data space. This allows for seamless and interoperable data exchange, without requiring data providers to adopt a new vocabulary. 
Machine learning-based methods for ontology alignment (Krizhevsky et al., 2015) and ontology matching (Bartos et al., 2015) can be applied to automatically map concepts and terms from one ontology or vocabulary to another. These algorithms use techniques such as semantic similarity measures (Krizhevsky et al., 2015), graph-based methods (Krizhevsky et al., 2015), and deep learning models (Bartos et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015) to identify correspondences between concepts in different ontologies or vocabularies. The goal is to produce a mapping that enables data exchange between systems using different ontologies or vocabularies while preserving the meaning of the data. ### FAIRness Evaluation The FAIR (i.e., Findable, Accessible, Interoperable and Reusable) principles are becoming increasingly important in data exchange and sharing. These principles aim to ensure that data resources are easily discoverable, accessible, can be easily integrated with other data sources, and can be reused for multiple purposes. Compliance with these principles makes it more likely that data will be used and reused, as it increases the overall quality and usability of the resource. Evaluating the FAIRness of a resource is a crucial step in determining its fitness for use, as it helps to identify any potential barriers to reuse. This can include issues such as licensing restrictions, data access conditions, and data interoperability issues. Conducting this evaluation in advance can save valuable time and resources, as it helps to avoid the need for costly negotiations or lengthy wait times for access to data that may not be suitable for the intended use. As discussed in Section 3.2, the use of shared vocabularies, such as ontologies, is important for increasing the findability and interoperability of resources. However, only using mapping techniques (see Section 3.2) may not be enough, as internal ontologies that describe the metadata may not be represented using common classes. To address this issue, machine learning techniques, such as BERTmap (Bartos et al., 2015), can be used to assess the level of compatibility between the provider's ontology and the data space's ontology. Additionally, rule-based and semantic web technologies can be used to evaluate the structure of the metadata, further increasing the overall FAIRness of the resource. ### Data Quality Assessment & Enhancement Data quality is a crucial concern for data consumers, as it impacts the trustworthiness and usefulness of the data. Unfortunately, metadata alone cannot provide any indication of the quality of the data. To ensure the quality of data, various dimensions must be considered, including accuracy, completeness, correctness, validity, integrity, and uniqueness. The importance of each dimension may vary depending on the intended use of the data and the needs of the data consumer. Accuracy refers to how closely the data reflects the real-world phenomenon it represents. Completeness refers to the extent to which all necessary data is present. Correctness pertains to the degree to which the data adheres to established rules, such as those related to syntax, semantics, or data constraints. Validity refers to the degree to which the data follows the predefined format, structure, and domain. Integrity is the degree to which the data is protected against unauthorized changes. Lastly, uniqueness refers to the degree to which each data item is distinct and identifiable. 
To ensure data quality, data providers must take steps to assess and improve the quality of their data. This can include implementing data validation and quality checks, using techniques like data profiling and data cleaning, and implementing data governance policies and procedures. Data consumers should also take steps to assess the quality of the data they receive, such as evaluating the data's source and provenance, performing data quality checks, and monitoring the data for anomalies. Machine learning algorithms can play a crucial role in ensuring the quality and accuracy of data. One way they achieve this is by comparing the data to other sources to validate its accuracy. Additionally, machine learning algorithms can be trained to identify patterns and anomalies in the data (Bahdan et al., 2016; Krizhevsky et al., 2014), helping to flag any potential inaccuracies or errors. Another benefit of using machine learning algorithms is the ability to complete missing data. By analyzing patterns and relationships in the data, machine learning models can make predictions about missing values and fill them in (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). This is especially useful in cases where it would be time-consuming or challenging to manually fill in missing data. Furthermore, machine learning techniques can also be applied to identify and remove duplicates in data, improving the overall uniqueness and consistency of the data (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). ### Privacy Preserving Private and sensitive data, such as personal information, medical records, and financial data, is often subject to strict regulations and guidelines for protection and access. In order for different systems to exchange and use private data, they must be able to accurately interpret and understand the meaning and context of the data, and ensure that it is being used in compliance with applicable laws and regulations. Ensuring semantic interoperability for private data requires a combination of technical solutions, such as secure data exchange protocols and data anonymization techniques, and strict governance and compliance mechanisms. To achieve this, data providers can use machine learning techniques to automatically detect private and sensitive data in their systems (Bahdan et al., 2016; Krizhevsky et al., 2014) and take appropriate actions to mask (Krizhevsky et al., 2014) or anonymize (Krizhevsky et al., 2014) the data. This can help protect individuals' privacy while enabling data sharing and interoperability. For example, techniques such as data de-identification, data masking, and differential privacy can be used to remove identifying information from data while preserving its usefulness for analysis.

Figure 2. An overview of an ML-enhanced Data Space with three members. (1) Automatic Metadata Extraction, (2) Ontology and Vocabulary Alignment, (3) FAIRness Evaluation, (4) Data Quality Assessment & Enhancement, (5) Privacy Preserving, (6) Compatibility Improvement.

### Compatibility Improvement Even when the same vocabulary and ontology are used by the data provider and consumer, resources are not semantically interoperable if they are not compatible with the consumer system into which they are to be integrated. To overcome the incompatibility of resources in data exchange, solutions include data mapping and data transformation. Machine learning techniques have shown great performance in these tasks.
Resources are not semantically interoperable when they cannot be understood or used by the systems that need to access them. This can occur when the resources have different data formats or structures, making it difficult for systems to integrate and make use of the information. To overcome the incompatibility of resources in data exchange, solutions include data mapping (Kumar et al., 2018) and data transformation (Kumar et al., 2018). Data mapping is the process of aligning the data elements from one resource to the corresponding elements in another resource. Data transformation is the process of converting data from one format or structure to another. Both of these solutions can help to make resources compatible and enable data exchange. Machine learning can also be used to convert data from one format to another, such as natural language text to structured data (Kumar et al., 2018). ## Discussion The enhancement of semantic interoperability of data spaces is a complex task that involves different facets and approaches. In this paper, we have focused on specific aspects that can be improved through the use of machine learning in the context of International Data Spaces. To achieve this, we propose the development of machine learning-powered software that can be easily integrated into the Data Spaces connectors as smart data apps. This will make the software more user-friendly and accessible, allowing for seamless integration into the existing system. In addition, with the growing popularity of Gaia-X in Europe and beyond, this software can also be provided as a service within the Gaia-X framework, offering members a valuable resource for improving semantic interoperability. By integrating machine learning into the data spaces, organizations can ensure that their data is properly structured, and their systems can effectively communicate and exchange information with other systems, resulting in more efficient and effective data management and exchange. ## 4. Conclusion In this paper, we presented our innovative perspective on enhancing semantic interoperability in data spaces through the use of machine learning. Our focus was on six crucial aspects of interoperability within the International Data Spaces architecture, and we highlighted the significance of each of these aspects and how machine learning can improve their impact on successful data exchange. As a follow-up to this work, we plan to test some of the concepts and solutions presented in this paper by integrating them into real-world data exchange scenarios in both the International Data Spaces and Gaia-X architectures. This will provide valuable insights into the practical implementation and effectiveness of our proposed approach, and help to further advance the state of the art in data interoperability and exchange.
2310.10102
KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training
This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the converge properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used directly in image classification and segmentation show that while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% impacting accuracy only by 0.4% compared to the baseline. Code available at https://github.com/TruongThaoNguyen/kakurenbo
Truong Thao Nguyen, Balazs Gerofi, Edgar Josafat Martinez-Noriega, François Trahay, Mohamed Wahib
2023-10-16T06:19:29Z
http://arxiv.org/abs/2310.10102v1
# KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training ###### Abstract This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the converge properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used directly in image classification and segmentation show that while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% impacting accuracy only by 0.4% compared to the baseline. Code available at [https://github.com/TruongThaoNguyen/kakurenbo](https://github.com/TruongThaoNguyen/kakurenbo) ## 1 Introduction Empirical evidence shows the performance benefits of using larger datasets when training deep neural networks (DNN) for computer vision, as well as in other domains such as language models or graphs [1]. More so, attention-based models are increasingly employed as pre-trained models using unprecedented dataset sizes, e.g. the JFT-3B dataset consists of nearly three billion images, annotated with a class-hierarchy of around 30K labels [2], LIAON-5B provides 5,85 billion CLIP-filtered image-text pairs that constitute over 240TB [3]. A similar trend is also observed in scientific computing, e.g., DeepCAM, a climate simulation dataset, is over 8.8TB in size [4]. Furthermore, the trend of larger datasets prompted efforts that create synthetic datasets using GANS [5] or fractals [6]. The downside of using large datasets is, however, the ballooning cost of training. For example, it has been reported that training models such as T5 and AlphaGo cost $1.3M [7] and $35M [8], respectively. Additionally, large datasets can also stress non-compute parts of supercomputers and clusters used for DNN training (e.g., stressing the storage system due to excessive I/O requirements [9; 10]). In this paper, we are focusing on accelerating DNN training over large datasets and models. We build our hypothesis on the following observations on the effect of sample quality on training: a) _biased with-replacement sampling_ postulates that not all samples are of the same importance and a biased, with-replacement sampling method can lead to faster convergence [11; 12], b) _data pruning_ methods show that when select samples are pruned away from a dataset, the predication accuracy that can be achieved by training from scratch using the pruned dataset is similar to that of the original dataset [13; 14; 15; 16]. Our hypothesis is that if samples have a varying impact on the learning process and their impact decreases as the training progresses, then we can in real-time, adaptively, exclude samples with the least impact from the dataset during neural network training. In this paper, we dynamically hide samples in a dataset to reduce the total amount of computing and the training time, while maintaining the accuracy level. Our proposal, named KAKURENBO, is built upon two pillars. 
First, using combined information about the loss and online estimation of the historical prediction confidence (see Section 3.1) of input samples, we adaptively exclude samples that contribute the least to the overall learning process on a per-epoch basis. Second, in compensation for the decrease in the number of SGD steps, we derive a method to dynamically adjust the learning rate and the upper limit on the number of samples to hide in order to recover convergence rate and accuracy. We evaluate performance both in terms of reduction in wall-clock time and degradation in accuracy. Our main results are twofold: first, we show that decaying datasets by eliminating the samples with the least contribution to learning has no notable negative impact on the accuracy and convergence and that the overhead of identifying and eliminating the least important samples is negligible. Second, we show that decaying the dataset can significantly reduce the total amount of computation needed for DNN training. We also find that state-of-the-art methods such as importance sampling algorithm [11], pruning [13], or sample hiding techniques [17; 18] performs poorly on large-scale datasets. To the contrary, our method can reduce training time by \(10.4\%\) and \(22.4\%\) on ImageNet-1K [19] and DeepCAM [4], respectively, impacting Top-1 accuracy only by \(0.4\%\). ## 2 Background and Related Work As the size of training datasets and the complexity of deep-learning models increase, the cost of training neural networks becomes prohibitive. Several approaches have been proposed to reduce this training cost without degrading accuracy significantly. Table 1 summarizes related work against this proposal. This section presents the main state-of-the-art techniques. Related works are detailed in the Appendix-E. **Biased with-Replacement Sampling** has been proposed as a method to improve the convergence rate in SGD training [11; 12]. Importance sampling is based on the observation that not all samples are of equal _importance_ for training, and accordingly replaces the regular uniform sampling used to draw samples from datasets with a biased sampling function that assigns a likelihood to a sample being \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Approach** & **Method** & \multicolumn{2}{c|}{**Merits (+)**} & \multicolumn{1}{p{113.8pt}|}{**Oline/ Offline (Bottleneck)**} & \multicolumn{1}{p{113.8pt}|}{**Practical Overhead (Constr)**} \\ \hline **Biased w/** **Replacement Sampling** & Importance Sampling [11] & * Theoretically faster convergence & Offline & Sorting samples & O(\(N_{\textit{log}}(N)\)) \\ \cline{3-6} & & & No demonstrated speedup on large datasets (Section 4) & Online & Sorting samples & O(\(N_{\textit{log}}(N)\)) \\ \cline{3-6} & & & No demonstrated speedup & & & \\ \hline **Data** & Engineers Scores [13] & * Robust & & & & \\ drawn proportional to its importance; the more important the sample is, the higher the likelihood it would be selected. The with-replacement strategy of importance sampling maintains the total number of samples the network trains on. Several improvements over importance sampling have been proposed for distributed training [22], or for estimating the importance of samples [12; 23; 24; 25; 26]. Overall, biased with-replacement sampling aims at increasing the convergence speed of SGD by focusing on samples that induce a measurable change in the model parameters, which would allow a reduction in the number of epochs. 
While these techniques promise to converge in fewer epochs on the whole dataset, each epoch requires computing the importance of samples which is time-consuming. **Data Pruning techniques** are used to reduce the size of the dataset by removing less important samples. Pruning the dataset requires training on the full dataset and adds significant overheads for quantifying individual differences between data points [27]. However, the assumption is that the advantage would be a reduced dataset that replaces the original datasets when used by others to train. Several studies investigate the selection of the samples to discard from a dataset[13; 15; 14; 16][28]. Pruning the dataset does reduce the training time without significantly degrading the accuracy [13; 14]. However, these techniques require fully training the model on the whole dataset to identify the samples to be removed, which is compute intensive. **Selective-Backprop**[17] combines importance sampling and online data pruning. It reduces the number of samples to train on by using the output of each sample's forward pass to estimate the sample's importance and cuts a fixed fraction of the dataset at each epoch. While this method shows notable speedups, it has been evaluated only on tiny datasets without providing any measurements on how accuracy is impacted. In addition, the authors allow up to 10% reduction in test error in their experiments. **Grad-Match**[18] is an online method that selects a subset of the samples that would minimize the gradient matching error. The authors approximate the gradients by only using the gradients of the last layer, use a per-class approximation, and run data selection every \(R\) epochs, in which case, the same subsets and weights will be used between epochs. Due to the infrequent selection of samples, Grad-Match often needs a larger number of epochs to converge to the same validation accuracy that can be achieved by the baseline [29]. Moreover, Grad-Match is impractical in distributed training, which is a de facto requirement in large dataset and models. Distributed Grad-Match would require very costly collective communication to collect the class approximations and to do the matching optimization. This is practically a very high cost for communication per epoch that could even exceed the average time per epoch. Figure 1: **Overview of KAKURENO. At each epoch, samples are filtered into two different subsets, the training list and the hidden list, based on their loss, prediction accuracy (PA), and prediction confidence (PC), with a maximum hidden fraction of \(F\). PA and PC are used to drive sample move back decisions. Samples in the training list are processed using uniform sampling without replacement. The loss and the prediction accuracy, calculated from the training process, are reused to filter samples in the next epoch. For samples on the hidden list, KAKURENO only calculates the loss and PA by performing the forward pass at the end of each epoch.** ## 3 KAKURENBO: Adaptively Hiding Samples In this work, we reduce the amount of work in training by adaptively choosing samples to hide in each epoch. We consider a model with a loss function \(\ell(\mathbf{w},\mathbf{x}_{n},\mathbf{y}_{n})\) where \(\left\{\mathbf{x}_{n},\mathbf{y}_{n}\right\}_{n=1}^{N}\) is a dataset of \(N\) sample-label pairs (\(x_{n}\in X\)), and \(G:X\to X\) is a function that is applied to hide certain samples during training, e.g., by ranking and cut-off some samples. 
Using SGD with a learning-rate \(\eta\) and batch size of \(B\), the update rule for each batch when training with original full dataset is \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}\left(k \left(t\right)\right)}\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_ {n},\mathbf{y}_{n}\right) \tag{1}\] where \(k\left(t\right)\) is sampled from \(\left[N/B\right]\triangleq\left\{1,\ldots,N/B\right\}\), \(\mathcal{B}\left(k\right)\) is the set of samples in batch \(k\) (to simplify, \(B\) is divisible by \(N\)). We propose to hide \(M\) examples by applying the a hiding function \(G\). We modify the learning rule to be \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}\left(k \left(t\right)\right)}\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},G(\mathbf{x }_{n}),\mathbf{y}_{n}\right) \tag{2}\] using \(B\) batch at each step, which is composed of \(N/B\) steps. Since we exclude \(M\) samples, the aggregate number of steps is reduced from \(N/B\) to become \(\left(N-M\right)/B\), i.e., fixing the batch size and reducing the number of samples reduces the number of SGD iterations that are performed for each epoch. Sample hiding happens before presenting the input to each epoch. The training set that excludes the hidden samples (\(N-M\)) is then shuffled for the training to process with the typical w/o replacement uniform sampling method. Based on the above training strategy, we propose KAKURENBO, a mechanism to dynamically reduce the dataset during model training by selecting important samples. The workflow of our scheme is summarized in Figure 1. First, (B.1) we sort the samples of a dataset according to their loss. We then (B.2) select a subset of the dataset by _hiding_ a fixed fraction \(F\) of the data: the samples with the lowest loss are removed from the training set. Next, (B.3) hidden samples that maintain a correct prediction with high confidence (see Section 3.1) are moved back to the epoch training set. The training process (C) uses uniform sampling without replacement to pick samples from the training list. KAKURENBO adapts the learning rate (C.2) to maintain the pace of the SGD. At the end of the epoch, we perform the forward pass on samples to compute their loss and the prediction information on the up-to-date model (D). However, because calculating the loss for all samples in the dataset is prohibitively compute intensive [11], we propose to reuse the loss computed during the training process, which we call _lagging_ loss (D.2). We only recompute the loss of samples from the hidden list (D.1). In the following, we detail the steps of KAKURENBO. ### Hidden Samples Selection We first present our proposed algorithm to select samples to hide in each epoch. We follow the observation in [11] that not all the samples are equal so that not-too-important samples can be hidden during training. An important sample is defined as the one that highly contributes to the model update, e.g., the gradient norm \(\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_{n},\mathbf{y}_{n}\right)\) in Equation 1. Removing the fraction \(F\) of samples with the least impact on the training model from the training list could reduce the training time, i.e., the required computing resource, without affecting the convergence of the training process. Selecting the fraction \(F\) is arbitrary and driven by the dataset/model. If the fraction \(F\) is too high, the accuracy could drop. 
In contrast, the performance gained from hiding samples will be limited if \(F\) is small, or potentially less than the overhead to compute the importance of samples. In this work, we aim to design an adaptive method to select the fraction \(F^{*}\) in each epoch. We start from a tentative maximum fraction \(F\) at the beginning of the training process. We then carefully select the hidden samples from this fraction based on their importance and move the remaining samples back to the training set. That is, at each epoch a dynamic hiding fraction \(F^{*}\) is applied. It is worth noting that the maximum fraction number \(F\) does not need to be strictly accurate in our design; it is a maximum ceiling and not the exact amount of samples that will be hidden. However, if the negative impact of hiding samples becomes too high, it could significantly affect the accuracy. This can happen, for example, when a high maximum fraction \(F\) is set and/or when most of the samples have nearly the same absolute contribution to the update, e.g., in the later epochs of the training process. We investigate how to choose the maximum hiding fraction in each epoch in Section 3.3. **Moving Samples Back:** since the loss is computed in the forward pass, it is frequently used as the metric for the importance of the sample, i.e. samples with high loss contribute more to the update and are thus important [11; 22]. However, the samples with the smallest loss do not necessarily have the least impact (i.e., gradient norm) on the model, which is particularly true at the beginning of the training, and removing such high-impact samples may hurt accuracy. To mitigate the misselection of important samples as unimportant ones, we propose an additional rule to filter the low-loss samples based on the observation of historical prediction confidence [13]. The authors in [13] observed that some samples have a low frequency of toggling back from being classified correctly to incorrectly over the training process. Such samples can be pruned from the training set permanently. Because estimating the per-sample prediction confidence before training (i.e., offline) is compute-intensive, in this work, we perform an online estimation to decide whether an individual sample has a history of correct prediction with high confidence or not in a given epoch. Only samples that have low loss and sustain correct prediction with high confidence in the current epoch are hidden in the following epoch. A sample is correctly predicted with high confidence at an epoch \(e\) if it is predicted correctly (**PA**) and the prediction confidence (**PC**) is no less than a threshold \(\tau\), which we call the _prediction confidence threshold_, at the previous epoch. In addition, the prediction confidence of a given sample (\(x\), \(y\)) is the probability that the model predicts this sample to map to label \(y\): \[\begin{split} out&=model(\mathbf{w}_{e},x,y)\\ PC&=\max_{k}(\sigma(out_{k}))\end{split} \tag{3}\] where \(\sigma\) is a sigmoid (softmax) activation function. In this work, unless otherwise mentioned, we set the prediction confidence threshold to \(\tau=0.7\) as investigated in Section 4.3. ### Reducing the Number of Iterations in Batch Training: Learning Rate Adjustment After hiding samples, KAKURENBO uses uniform without replacement sampling to train on the remaining samples from the training set.
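As a concrete illustration of the move-back test in Equation 3, the following is a minimal sketch with illustrative names and a NumPy stand-in for the model output; it is not the released KAKURENBO code.

```python
# A sample stays hidden only if it has low loss AND was predicted correctly
# with confidence >= tau in the current epoch (Section 3.1, Equation 3).
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def high_confidence_correct(logits, labels, tau=0.7):
    probs = softmax(logits)                      # sigma(out) in Equation 3
    pc = probs.max(axis=-1)                      # prediction confidence (PC)
    pa = probs.argmax(axis=-1) == labels         # prediction accuracy (PA)
    return pa & (pc >= tau)

# Example: 3 samples, 4 classes
logits = np.array([[4.0, 0.1, 0.0, 0.2],        # confident and correct
                   [1.0, 0.9, 0.8, 0.7],        # correct but low confidence
                   [0.1, 3.0, 0.0, 0.0]])       # confident but wrong
labels = np.array([0, 0, 2])
print(high_confidence_correct(logits, labels))   # [ True False False]
```

Low-loss candidates that fail this test would be moved back to the training list for the next epoch.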
In this section, we examine issues related to convergence when reducing the number of samples and we provide insight into the desirable convergence properties of adaptively hiding examples. Implicit bias in the SGD training process may lead to convergence problems [30]: when reducing the total number of iterations at fixed batch sizes, SGD selects minima with worse generalization. We examine the selection mechanism in SGD when reducing the number of iterations at a fixed batch size. For optimizations of the original datasets, i.e., without example hiding, we use loss functions of the form \[f\left(\mathbf{w}\right)=\frac{1}{N}\sum_{n=1}^{N}\ell\left( \mathbf{w},\mathbf{x}_{n},\mathbf{y}_{n}\right)\,, \tag{4}\] where \(\left\{\mathbf{x}_{n},\mathbf{y}_{n}\right\}_{n=1}^{N}\) is a dataset of \(N\) data example-label pairs and \(\ell\) is the loss function. We use SGD with batch of size \(B\) and learning-rate \(\eta\) with the update rule \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}(k(t))} \nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_{n},\mathbf{y}_{n} \right)\,. \tag{5}\] for without replacement sampling, \(B\) divisible by \(N\) (to simplify), and \(k\left(t\right)\) sampled uniformly from \(\left\{1,\dots,N/B\right\}\). When using an over-parameterized model as is the case with deep neural networks, we typically converge to a minimum \(\mathbf{w}^{*}\) that is a global minimum on all data points \(N\) in the training set [31; 14]. Following Hoffer et al. [32], linearizing the dynamics of Eq. 5 near \(\mathbf{w}^{*}\) (\(\forall n:\nabla_{\mathbf{w}}\ell\left(\mathbf{w}^{*},\mathbf{x}_{n},\mathbf{ y}_{n}\right)=0\)) gives \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}(k(t))} \mathbf{H}_{n}\mathbf{w}_{t}\, \tag{6}\] where we assume \(\mathbf{w}^{*}=0\) since the models we target are over-parameterized (i.e., deep networks) leading to converge to a minimum \(\mathbf{w}^{*}\). We also assume \(\mathbf{H}_{n}\triangleq\nabla_{\mathbf{w}}^{2}\ell\left(\mathbf{w},\mathbf{x}_ {n},\mathbf{y}_{n}\right)\) represents the per-example loss Hessian. SGD can select only certain minima from the many potential different global minima for the loss function of a given the full training set \(N\) (and without loss of generality, for the training dataset after hiding samples \(N-M\)). The selection of minima by SGD depends on the batch sizes and learning rate through the averaged Hessian over batch \(k\) \[\left\langle\mathbf{H}\right\rangle_{k}\triangleq\frac{1}{B}\sum_{n\in\mathcal{ B}(k)}\mathbf{H}_{n}\] and the maximum over the maximal eigenvalues of \(\left\{\left\langle\mathbf{H}\right\rangle_{k}\right\}_{k=1}^{N/B}\) \[\lambda_{\max}=\max_{k\in[N/B]}\max_{\forall\mathbf{v}:\|\mathbf{v}\|=1} \mathbf{v}^{\top}\left\langle\mathbf{H}\right\rangle_{k}\mathbf{v}. \tag{7}\] This \(\lambda_{\max}\) affects SGD through the Theorem proved by Hoffer et al. [32]: the iterates of SGD (Eq. 6) will converge if \[\lambda_{\max}<\frac{2}{\eta}\] The theorem implies that a high learning rate leads to convergence to be for global minima with low \(\lambda_{\max}\) and low variability of \(\mathbf{H}_{n}\). Since in this work we are fixing the batch size, we maintain \(\lambda_{\max}\), the variability of \(\left\langle\mathbf{H}\right\rangle_{k}\). Therefore, certain minima with high variability in \(\mathbf{H}_{n}\) will remain accessible to SGD. 
Now SGD may converge to these high variability minima, which were suggested to exhibit worse generalization performance than the original minima [33]. We mitigate this problem by reducing the delta by which the original learning rate decreases the learning rate (after the warm-up phase [34]). That way we make these new minima inaccessible again while keeping the original minima accessible. Specifically, KAKURENBO adjusts the learning rate at each epoch (or each iteration) \(e\) by the following rule: \[\eta_{e}=\eta_{base,e}\times\frac{1}{1-F_{e}} \tag{8}\] where \(\eta_{base,e}\) is the learning rate at epoch \(e\) in the non-hiding scenario and \(F_{e}\) is the hiding fraction at epoch \(e\). By multiplying the base learning rate with a fraction \(\frac{1}{1-F_{e}}\), KAKURENBO is independent of the learning rate scheduler of the baseline scenario and any other techniques related to the learning rate. ### Adjusting the Maximum Hidden Fraction \(F\) Merely changing the learning rate may not be sufficient, when some minima with high variability and low variability will eventually have similar \(\lambda_{\max}\), so SGD will not be able to discriminate between these minima. To account for this, we introduce a schedule to reduce the maximum hidden fraction. For the optimum of the set of hidden samples, \(\mathbf{w}_{\mathbf{M}}=G(\mathbf{x}_{n})\) and an overall loss function \(F(\cdot)\) that acts as a surrogate loss for problems which are sums of non-convex losses \(f_{i}(\mathbf{w})\), where each is individually non-convex in \(\mathbf{w}\). With Lipschitz continuous gradients with constant \(L_{i}\) we can assume \[\|\nabla f_{i}(\mathbf{w}_{\mathbf{1}})-\nabla f_{i}(\mathbf{w}_{\mathbf{2}}) \|\leq L_{i}\|\mathbf{w}_{\mathbf{1}}-\mathbf{w}_{\mathbf{2}}\|\] Since we are hiding samples when computing the overall loss function \(F(\cdot)\), we assume each of the functions \(f_{i}(.)\) shares the same minimum value \(\min_{\mathbf{w}}f_{i}(\mathbf{w})=\min_{\mathbf{w}}f_{j}(\mathbf{w})\ \forall\ i,j\). We extend the proof of the theorem on the guarantees for a linear rate of convergence for smooth functions with strong convexity [35] to the non-convex landscape obtained when training with hidden samples (proof in Appendix A) **Lemma 1**.: _Let \(F(\mathbf{w})=\mathbb{E}[f_{i}(\mathbf{w})]\) be non-convex. Set \(\sigma^{2}=\mathbb{E}[\|\nabla f_{i}(\mathbf{w}_{\mathbf{M}})\|^{2}]\) with \(\mathbf{w}^{*}:=argminF(\mathbf{w})\). Suppose \(\eta\leq\frac{1}{\sup_{i}L_{i}}\). Let \(\Delta_{t}=\mathbf{w}_{\mathbf{t}}-\mathbf{w}\). After \(T\) iterations, SGD satisfies:_ \[\mathbb{E}\left[\|\Delta_{T}\|^{2}\right]\leq(1-2\eta\hat{C})^{T}\|\Delta_{0} \|^{2}+\eta R_{\sigma} \tag{9}\] _where \(\hat{C}=\lambda(1-\eta\sup_{i}L_{i})\) and \(R_{\sigma}=\frac{\sigma^{2}}{\hat{C}}\)._ Since the losses \(f_{i}(\mathbf{w})\) are effectively dropping for individual samples, driven by the weight update, we thus drop the maximum fraction that can be hidden to satisfy Eq. 9. Specifically, we suggest selecting a reasonable number that is not too high at the first epoch, e.g, \(F=0.3\). We then adjust the maximum fraction per epoch (denoted as \(F_{e}\)) to achieve \(F_{e}\). We suggest using step scheduling, i.e., to reduce the maximum hiding fraction gradually with a factor of \(\alpha\) by the number of epochs increases. For example, we set \(\alpha\) as [1, 0.8, 0.6, 0.4] at epoch [0, 30, 60, 80] for ImageNet-1K and [0, 60, 120, 180] for CIFAR-100, respectively. 
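A compact sketch of the two per-epoch adjustments above, i.e., the learning-rate scaling of Equation 8 and the step schedule for the maximum hiding fraction, using the ImageNet-1K milestones quoted in the text. The function names are illustrative, the scheduled maximum is used as a stand-in for the realized per-epoch fraction \(F_{e}\), and this is not the released code.

```python
def max_hiding_fraction(epoch, f_max=0.3,
                        milestones=(0, 30, 60, 80),      # ImageNet-1K schedule
                        alphas=(1.0, 0.8, 0.6, 0.4)):
    """Step schedule: scale the maximum hiding fraction F by alpha over epochs."""
    alpha = alphas[0]
    for m, a in zip(milestones, alphas):
        if epoch >= m:
            alpha = a
    return f_max * alpha

def adjusted_lr(base_lr, hidden_fraction):
    """Equation 8: eta_e = eta_base,e * 1 / (1 - F_e)."""
    return base_lr / (1.0 - hidden_fraction)

for epoch in (0, 30, 60, 80):
    f_e = max_hiding_fraction(epoch)
    print(epoch, round(f_e, 3), round(adjusted_lr(0.1, f_e), 4))
```

In an actual training loop, `hidden_fraction` would be the fraction of samples actually hidden after the move-back rule of Section 3.1, so the learning-rate correction tracks the realized reduction in SGD updates rather than the scheduled ceiling.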
### Update Loss and Prediction Our technique is inspired by an observation that the importance of each sample of the local data does not change abruptly across multiple SGD iterations [22]. We propose to reuse the loss and historical prediction confidence, computed during the training process, and only recompute those metrics for samples from the hidden list. Specifically, the loss and historical prediction confidence of samples are computed only one time at each epoch, i.e., when the samples are fed to the forward pass. It is not re-calculated at the end of each epoch based on the latest model. Therefore, only samples of the last training iteration of a given epoch have an up-to-date loss. Furthermore, if we re-calculate the loss of hidden samples, i.e., only skip the backward and weight update pass of these samples, the loss of hidden samples is also up-to-date. For instance, if we cut off 20% of samples, we have nearly 20% up-to-date losses and 80% of not-up-to-date losses at the end of each epoch As the result, in comparison to the baseline scenario, KAKURENBO helps to reduce the total backward and weight update time by a fraction of \(F_{e}\) while it does not require any extra forward time ## 4 Evaluation We evaluate KAKURENBO using several models on various datasets. We measure the effectiveness of our proposed method on two large datasets. We use Resnet50 [36] and EfficientNet [37] on ImageNet-1K [19], and DeepCAM [4], a scientific image segmentation model with its accompanying dataset. To confirm the correctness of the baseline algorithms we also use WideResNet-28-10 on the CIFAR-100 dataset. Details of experiment settings and additional experiments such as ablation studies and robustness evaluation are reported in Appendix-B and Appendix-C. We compare the following training strategies: * **Baseline**: We follow the original training regime and hyper-parameters suggested by their authors using uniform sampling without replacement. * **Importance Sampling With Replacement**[11] **(ISWR)**: In each iteration, each sample is chosen with a probability proportional to its loss. The with-replacement strategy means that a sample may be selected several times during an epoch, and the total number of samples fed to the model is the same as the baseline implementation. * **FORGET** is an online version of a pruning technique [13]: instead of fully training the model using the whole dataset before pruning, we train it for 20 epochs, and a fraction \(F\) of forgettable samples (i.e. samples that are always correctly classified) are pruned from the dataset1. The training then restarts from epoch \(0\). We report the total training time that includes the 20 epochs of training with the whole dataset, and the full training with the pruned dataset. Footnote 1: We choose the samples to remove by increasing number of forgetting events as in [13]. * **Selective Backprop (SB)**[17] prioritizes samples with high loss at each iteration. It performs the forward pass on the whole dataset, but only performs backpropagation on a subset of the dataset. * **Grad-Match**[18] trains using a subset of the dataset. Every \(R\) epoch, a new subset is selected so that it would minimize the gradient matching error. * **KAKURENBO**: our proposed method where samples are hidden dynamically during training. It is worth noting that we follow the hyper-parameters reported in [38] for training ResNet-50, [39] for training WideResNet-28-10, [37] for training EfficientNet-b3, and [4] for DeepCAM. 
We show the detail of our hyper-parameters in Appendix B. We configure ISWR, and FORGET to remove the same fraction \(F\) as KAKURENBO. For SB, we use the \(\beta=1\) parameter that results in removing \(50\%\) of samples. Unless otherwise mentioned, our default setting for the maximum hidden fraction \(F\) for KAKURENBO is \(30\%\), except for the CIFAR-100 small dataset, for which we use \(10\%\) (see below). To maintain fairness in comparisons between KAKURENBO and other state-of-the-art methods, we use the same model and dataset with the same hyper-parameters. This would mean we are not capable of using state-of-the-art hyper-parameters tuning methods to improve the accuracy of ResNet-50/ImageNet (e.g., as in [40]). That is since the state-of-the-art hyper-parameters tuning methods are not applicable to some of the methods we compare with. Particularly, we can not apply GradMatch for training with a large batch size on multiple GPUs. Thus, we compare KAKURENBO with GradMatch using the setting reported in [18], i.e., CIFAR-100 dataset, ResNet-18 model. ### Accuracy The progress in the top-1 test accuracy with a maximum hiding fraction of \(0.3\) is shown in Figure 2. Table 2 summarizes the final accuracy for each experiment. We present data on the small dataset of CIFAR-100 to confirm the correctness of our implementation of ISWR, FORGET, and SB. Table 3 reports the single GPU accuracy obtained with Grad-Match because it cannot work on distributed systems. For CIFAR-100, we report similar behavior as reported in the original work on ISWR [11], SB [17], FORGET [13], and Grad-Match [18]: ISWR, FORGET, and Grad-Match degrade accuracy by approximately 1%, while SB and KAKURENBO roughly perform as the baseline. KAKURENBO on CIFAR-100 only maintains the baseline accuracy for small fractions (e.g. \(F=0.1\)). When hiding a larger part of the dataset, the remaining training set becomes too scarce, and the model does not generalize well. On the contrary, on large datasets such as ImageNet-1K, ISWR and KAKURENBO slightly improve accuracy (by \(0.2\)) in comparison to the baseline, while FORGET and SB degrade accuracy by \(1.2\%\) and \(3.5\%\), respectively. On DeepCAM, KAKURENBO does not affect the accuracy while ISWR degrades it by \(2.4\%\) in comparison to the baseline2. Table 4 reports the accuracy obtained for transfer learning. We do not report Grad-Match results because we could not apply it to this application. Using SB significantly degrades accuracy compared to the baseline, while ISWR, FORGET, and KAKURENBO maintains the same accuracy as the baseline. Especially, as reported in Figure 3, the testing accuracy obtained by KAKURENBO are varied when changing the maximum hiding fraction. 
We observe that for small hiding fractions, KAKURENBO achieves the same accuracy as \begin{table} \begin{tabular}{l|r r|r r|r r} \hline \hline \multirow{2}{*}{**Setting**} & \multicolumn{2}{c|}{**CIFAR-100**} & \multicolumn{3}{c}{**ImageNet-1K**} \\ & \multicolumn{2}{c|}{**WN-28-10**} & \multicolumn{2}{c|}{**ResNet-50**} & \multicolumn{2}{c|}{**EfficientNet-33**} & \multicolumn{2}{c}{**DeepCAM**} \\ \hline & **Acc.** & **Diff.** & **Acc.** & **Diff.** & **Acc.** & **Diff.** & **Acc.** & **Diff.** \\ \hline Baseline & 77.49 & & 74.89 & & 76.63 & & 78.14 \\ \hline ISWR & 76.51 & (-0.98) & 74.91 & (+0.02) & N/A & & 75.75 & (-2.39) \\ \hline FORGET & 76.14 & (-1.35) & 73.70 & (-1.20) & N/A & N/A & \\ \hline SB & 77.03 & (-0.46) & 71.37 & (-3.52) & N/A & N/A & \\ \hline KAKURENBO & 77.21 & (-0.28) & 75.15 & (+0.26) & 76.23 & (-0.5) & 77.42 & (-0.9) \\ \hline \hline \end{tabular} \end{table} Table 2: Max testing accuracy (Top-1) in percentage of KAKURENBO in the comparison with those of the Baseline and other SOTA methods. **Diff.** represent the gap to the Baseline. \begin{table} \begin{tabular}{l|r r} \hline \hline \multirow{2}{*}{**Setting**} & \multicolumn{2}{c}{**CIFAR-100**} \\ & \multicolumn{2}{c}{**ResNet-18**} \\ \hline & \multicolumn{2}{c}{**Acc. Time (sec)**} \\ \hline Baseline & 77.98 & 856 \\ \hline Grad-Match-0.3 & \begin{tabular}{r} 76.87 \\ (-1.11) \\ \end{tabular} & \begin{tabular}{r} 8104 \\ (-5.3\%) \\ \end{tabular} \\ \hline KAKURENBO-0.3 & \begin{tabular}{r} 77.05 \\ (-0.93) \\ \end{tabular} & \begin{tabular}{r} 8784 \\ (+2.7\%) \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with Grad-Match in a single GPU (cutting fraction is set to \(0.3\). Figure 2: Convergence and speedup of KAKURENBO and importance sampling (ISWR). the baseline. When increasing hiding fractions, as expected, the degradation of the testing accuracy becomes more significant. ### Convergence Speedup and Training Time Here we discuss KAKURENBO's impact on training time. Figure 2 reports test accuracy as the function of elapsed time (note the X-axis), and reports the training time to a target accuracy. Table 4 reports the upstream training time of DeiT-Tiny-224. The key observation of these experiments is that KAKURENBO reduces the training time of Wide-ResNet by \(21.7\%\), of ResNet-50 by \(23\%\), of EfficientNet by \(13.7\%\), of DeepCAM by \(22.4\%\), and of DeiT-Tiny by \(15.1\%\) in comparison to the baseline training regime. Surprisingly, Importance Sampling With Replacement (ISWR) [11] introduces an overhead of \(34.8\%\) on WideNet, of \(41\%\) on ImageNet-1K and offers only a slight improvement of \(2.5\%\) on DeepCAM. At each epoch, ISWR processes the same number of samples as the baseline. Yet, it imposes an additional overhead of keeping track of the importance (i.e., the loss) of all input samples. While on DeepCAM it achieves a modest speedup due to its faster convergence, these experiments reveal that ISWR's behavior is widely different on large datasets than on the smaller ones previously reported [11; 17]. FORGET increases the training time of WideResNet by \(46.1\%\) because of the additional 20 epochs training on the whole dataset needed for pruning the samples. When the number of epoch is large, such as for ResNet50 that runs for 600 epochs, FORGET decreases the training time by \(17.9\%\), and for DeiT by \(14.4\%\). However, this reduction of training time comes at the cost of degradation of the test accuracy. 
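As a rough sanity check on these measurements, the expected per-epoch saving can be related to the hidden fraction and the share of epoch time spent in the backward and weight-update passes. The forward/backward time split and the overhead term below are illustrative assumptions, not values from the paper, and the realized hidden fraction per epoch is lower than the maximum fraction \(F\) because of the fraction-reduction schedule.

```python
# Back-of-envelope estimate (ours, with assumed timings) of the relative epoch-time
# reduction when a fraction f of samples is hidden. t_fwd and t_bwd are assumed
# shares of epoch time for the forward and backward+update passes; `overhead`
# is the assumed relative cost of sorting samples and updating the hidden list.
def epoch_time_reduction(f, t_fwd=0.35, t_bwd=0.65, overhead=0.03, skip_forward=True):
    saved = f * (t_fwd + t_bwd) if skip_forward else f * t_bwd
    return saved - overhead

for f in (0.1, 0.2, 0.3):
    print(f, round(epoch_time_reduction(f), 3))   # e.g. f=0.3 -> ~0.27
```

With these assumptions, a hidden fraction around \(0.3\) yields a reduction a few points below the hiding fraction, which is the right order of magnitude for the measured speedups above once the overhead and the reduced per-epoch fraction are taken into account.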
On WideResNet and ResNet, SB performs similarly to KAKURENBO by reducing the training time without altering the accuracy. However, SB significantly degrades accuracy compared to the baseline for ImageNet and DeiT. It is worth noting that KAKURENBO has computation overheads for updating the loss and prediction (Step D in Figure 1), and sorting the samples based on the loss (Step A in Figure 1). For example, Figure 4 reports the measured speedup per epoch as compared to the baseline epoch duration. The speedup follows the same trend as the hiding rate. This is because reducing the number of samples in the training set impacts the speed of the training. The measured speedup does not reach the maximum hiding rate because of the computation overhead. The performance gain from hiding samples will be limited if the maximum hiding fraction \(F\) is small, or potentially less than the overhead to compute the importance score of samples. In experiments using multiple GPUs, those operations are performed in parallel to reduce the running time overhead. When using a single GPU on CIFAR-100 with ResNet-18 (Table 3), the computational overhead is bigger than the speedup gained from hiding \begin{table} \begin{tabular}{l|l|l|r r r r r} \hline \hline & Dataset & Metrics & Baseline & ISWR & FORGET & SB & KAKUR. \\ \hline Up & \multirow{2}{*}{Fractal-3K} & Loss & 3.26 & 3.671 & 3.27 & 4.18 & 3.59 \\ stream & & Time (min) & 623 & 719 & 533 & 414 & 529 \\ & & Impr. & - & (+15.4\%) & (-14.4\%) & (-33.5\%) & (-15.1\%) \\ \hline Down & \multirow{2}{*}{CIFAR-10} & Acc. (\%) & 95.03 & 95.79 & 95.85 & 93.59 & 95.28 \\ stream & & Diff. & - & (+0.76) & (+0.82) & (-1.44) & (+0.25) \\ \cline{2-7} & \multirow{2}{*}{CIFAR-100} & Acc. (\%) & 79.69 & 79.62 & 79.95 & 76.98 & 79.35 \\ & & Diff. & - & (-0.07) & (+0.26) & (-2.71) & (-0.34) \\ \hline \hline \end{tabular} \end{table} Table 4: Impact of KAKURENBO in transfer learning with DeiT-Tiny-224 model. Figure 3: Test accuracy vs. epoch of KAKURENBO with different maximum hiding fractions \(F\). samples. Thus, KAKURENBO takes more training time in this case. In short, KAKURENBO is optimized for large-scale training and provides more benefits when running on multiple GPUs. ### Ablation Studies **Impact of prediction confidence threshold \(\tau\).** Higher prediction confidence threshold \(\tau\) leads to a higher number of samples being moved back to the training set, i.e., fewer hidden samples at the beginning of the training process. At the end of the training process, when the model has is well-trained, more samples are predicted correctly with high confidence. Thus the impact of the prediction confidence threshold on the number of moved-back samples becomes less (as shown in Figure 4). The result in Table 5 shows that when we increase the threshold \(\tau\), we obtain better accuracy (fewer hidden samples), but at the cost of smaller performance gain. We suggest to set \(\tau=0.7\) in all the experiments as a good trade-off between training time and accuracy. **Impact of different components of KAKURENBO.** We evaluate how KAKURENBO's individual internal strategies, and their combination, affect the testing accuracy of a neural network. Table 6 reports the results we obtained when training ResNet-50 on ImageNet-1K3 with a maximum hiding fraction of \(40\%\). The results show that when only HE (Hiding Examples) of the \(40\%\) lowest loss samples is performed, accuracy slightly degrades. 
Combining HE with other strategies, namely MB (Move-Back), RF (Reducing Fraction), and LR (Learning Rate adjustment) gradually improves testing accuracy. In particular, all combinations with RF achieve higher accuracy than the ones without it. For example, the accuracy of v110 is higher than that of v1100 by about \(0.59\%\). We also observe that using LR helps to improve the training accuracy by a significant amount, i.e., from \(0.46\)% to \(0.83\)%. The MB strategy also improves accuracy. For example, the accuracy of v1010 is \(72.81\%\), compared to v1110 which is \(72.96\%\). This small impact of MB on the accuracy is due to moving back samples at the beginning of the training, as seen in Appendix C.3. By using all the strategies, KAKURENBO achieves the best accuracy of \(73.6\%\), which is very close to the baseline of \(73.68\%\). Footnote 3: We use the ResNet-50 (A) configuration in this evaluation as shown in Appendix-B ## 5 Conclusion We have proposed KAKURENBO, a mechanism that adaptively hides samples during the training of deep neural networks. It assesses the importance of samples and temporarily removes the ones that would have little effect on the SGD convergence. This reduces the number of samples to process at each epoch without degrading the prediction accuracy. KAKURENBO combines the knowledge of historical prediction confidence with loss and moves back samples to the training set when necessary. It also dynamically adapts the learning rate in order to maintain the convergence pace. We have demonstrated that this approach reduces the training time without significantly degrading the accuracy on large datasets. \begin{table} \begin{tabular}{l|c c c c|c} \hline \hline & \multicolumn{4}{c|}{**Component**} & \multicolumn{1}{c}{**Accuracy**} \\ & HE & MB & RF & LR & \\ \hline Baseline & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 73.68 \\ \hline v1000 & ✓ & \(\times\) & \(\times\) & \(\times\) & 72.25 (-1.8\%) \\ v1001 & ✓ & \(\times\) & \(\times\) & ✓ & 73.08 (-0.7\%) \\ v1010 & ✓ & \(\times\) & ✓ & \(\times\) & 72.81 (-1.1\%) \\ v1011 & ✓ & \(\times\) & ✓ & ✓ & 73.27 (-0.4\%) \\ v1100 & ✓ & ✓ & \(\times\) & \(\times\) & 72.37 (-1.7\%) \\ v1101 & ✓ & ✓ & \(\times\) & ✓ & 73.09 (-0.7\%) \\ v1110 & ✓ & ✓ & ✓ & \(\times\) & 72.96 (-0.9\%) \\ KAKUR. (v1111) & ✓ & ✓ & ✓ & ✓ & 73.6 \\ \hline \hline \end{tabular} \end{table} Table 6: The impact of different components of KAKURENBO on testing accuracy including **HE**: Hiding \(F\)% lowest-loss examples, **MB**: Moving Back, **RF**: Reducing the Fraction by epoch, **LR**: Adjusting Learning Rate. Numbers inside the (.) indicate the gap in percentage compared to the full version of KAKURENBO. Figure 4: Reduction of hiding fraction, per epoch, and the resulting speedup. Acknowledgments This work was supported by JSPS KAKENHI under Grant Numbers JP21K17751 and JP22H03600. This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was supported by MEXT as "Feasibility studies for the next-generation computing infrastructure" and JST PRESTO Grant Number JPMJPR20MA. We thank Rio Yokota and Hirokatsu Kataoka for their support on the Fractal-3K dataset.
2304.03787
Fourier expansion in variational quantum algorithms
The Fourier expansion of the loss function in variational quantum algorithms (VQA) contains a wealth of information, yet is generally hard to access. We focus on the class of variational circuits, where constant gates are Clifford gates and parameterized gates are generated by Pauli operators, which covers most practical cases while allowing much control thanks to the properties of stabilizer circuits. We give a classical algorithm that, for an $N$-qubit circuit and a single Pauli observable, computes coefficients of all trigonometric monomials up to a degree $m$ in time bounded by $\mathcal{O}(N2^m)$. Using the general structure and implementation of the algorithm we reveal several novel aspects of Fourier expansions in Clifford+Pauli VQA such as (i) reformulating the problem of computing the Fourier series as an instance of multivariate boolean quadratic system (ii) showing that the approximation given by a truncated Fourier expansion can be quantified by the $L^2$ norm and evaluated dynamically (iii) tendency of Fourier series to be rather sparse and Fourier coefficients to cluster together (iv) possibility to compute the full Fourier series for circuits of non-trivial sizes, featuring tens to hundreds of qubits and parametric gates.
Nikita A. Nemkov, Evgeniy O. Kiktenko, Aleksey K. Fedorov
2023-04-07T18:00:01Z
http://arxiv.org/abs/2304.03787v2
# Fourier expansion in variational quantum algorithms ###### Abstract The Fourier expansion of the loss function in variational quantum algorithms (VQA) contains a wealth of information, yet is generally hard to access. We focus on the class of variational circuits, where constant gates are Clifford gates and parameterized gates are generated by Pauli operators, which covers most practical cases while allowing much control thanks to the properties of stabilizer circuits. We give a classical algorithm that, for an \(N\)-qubit circuit and a single Pauli observable, computes coefficients of all trigonometric monomials up to a degree \(m\) in time bounded by \(\mathcal{O}(N2^{m})\). Using the general structure and implementation of the algorithm we reveal several novel aspects of Fourier expansions in Clifford+Pauli VQA such as (i) reformulating the problem of computing the Fourier series as an instance of multivariate boolean quadratic system (ii) showing that the approximation given by a truncated Fourier expansion can be quantified by the \(L^{2}\) norm and evaluated dynamically (iii) tendency of Fourier series to be rather sparse and Fourier coefficients to cluster together (iv) possibility to compute the full Fourier series for circuits of non-trivial sizes, featuring tens to hundreds of qubits and parametric gates. ###### Contents * I Introduction and results * II Parameterized quantum circuits * II.1 Trigonometric expansion of the unitary * II.2 Fourier expansion of the loss function * II.3 Fourier terms from averages * III Clifford+Pauli variational circuits * III.1 Definition and properties * III.2 Computing averages * IV Classical algorithm * IV.1 Expansion of the dressed Hamiltonian * IV.2 Accounting for the expectation values * IV.3 Truncated Fourier series as an approximation * IV.4 Is there a more efficient algorithm? * V Case studies * V.1 Random circuits * V.2 QAOA * V.3 Hardware-efficient circuits * VI Discussion and outlook * A Structure of a generic Fourier expansion * A.1 Level expansion and number of terms * A.2 Coefficients from averages * B Optimizing Pauli order * C Details of numerical computations * A.3 Random circuits * A.4 QAOA * A.5 Hardware-efficient circuits * A Estimating complexity from Monte-Carlo sampling ## I Introduction and Results Variational quantum algorithms (VQA) [1] are the leading candidates to make the most out of current NISQ devices [2; 3]. While the scope of potential VQA applications is extremely broad, there are also many theoretical and practical limitations. VQA are hybrid algorithms, using classical optimization to train parameterized quantum circuits, and in this sense they are similar to the classical machine learning models. The loss function of a VQA is defined as an average of some observable in the state, prepared by the parameterized quantum circuit. The structure of VQA loss landscape is of central importance, because the efficiency of the classical optimization largely determines the quality of solutions obtained by VQA. Accessible shapes of the loss function also determine the expressive power of the quantum machine learning models [4]. Typically, VQA are trained by gradient-based methods and their local properties are of the most interest. At the same time, the structure of parametrized quantum circuits makes Fourier series representation a natural and rich language for description of VQA loss functions. We now briefly survey some of the relevant results. 
Under the assumption that generators of parametric gates have commensurable eigenvalues, the Fourier series in fact truncates to a trigonometric polynomial. As shown in Ref. [5], accessible frequencies in this expansion are determined by the spectrum of the generators, while the coefficients depend on the structure of the circuit and the observable. In Refs. [5; 6] this observation was used to highlight the importance of data encoding in quantum machine learning models. Recently, there has been an interest in quantitatively studying the expressive power of machine learning models based on the properties of their Fourier expansion [7; 8]. In Refs. [9; 10] the Fourier representation was used to de-quantize a class of quantum machine learning models. Paper [11] investigates the case when the Fourier series is rather sparse, so that the loss landscape can be efficiently recovered with limited experimental data. An interesting proposal made in Ref. [12] shows that noise in VQA can be detected by observing inaccessible frequencies and mitigated by filtering them out. We also note that the fundamental result on the NP-hardness of training general VQA [13] relies on their loss functions being trigonometric polynomials. While the Fourier representation can be a very convenient tool to characterize variational loss functions, it has its limitations. When generators of the parametric gates square to identity, which is the most common case, the Fourier series for a VQA with \(M\) parameters is a multivariate trigonometric polynomial containing up to \(3^{M}\) terms. Exponential growth of accessible terms makes the Fourier series an impractical description, unless the actual distribution of coefficients is very sparse (e.g. the number of non-zero terms only grows polynomially with \(M\)). Computing the Fourier coefficients is also a challenge. For example, evaluating the lowest order, constant Fourier term, amounts to finding the loss function averaged over all parameter configurations, and there seems to be no efficient recipe for that in general. In this paper, we restrict attention to a special class of parametrized quantum circuits, which we refer to as _Clifford+Pauli_ circuits. Parametric gates in Clifford+Pauli circuits are exponentials of Pauli strings, while constant gates are Clifford gates. This is in fact a very large class of circuits that includes the majority of most studied VQA, such Quantum Approximate Optimization Algorithm (QAOA) [14], Hardware-Efficient Ansatz (HEA), [15], Unitary Coupled Cluster Ansatz [16]. Special properties of stabilizer circuits [17; 18] give an essential technical leverage to study the Fourier expansion of Clifford+Pauli circuits in quantitative details. The interplay between properties of stabilizer circuits and VQA have been explored previously, mainly in the context of quantum chemistry [19; 20; 21]. In particular, initialization methods based on perturbative expansion around Clifford points [22] or on the discrete search through the space of Clifford circuits [23; 24] have been developed, ansatz structures [25; 26] and partitioning schemes [27], based on the properties of Clifford gates have been proposed. In this work, however, we focus on a different scope of questions. Our core technical contribution is an efficient classical algorithm computing all Fourier coefficients in the loss function up to level \(m\), with time complexity bounded by \(O(N2^{m})\), where \(N\) is the number of qubits. 
Note that in general, the complexity of the Fourier series is not limited directly by the number of qubits \(N\), but rather by the total number of parametric gates \(M\). For Clifford+Pauli VQA this observation thus admits a concrete realization. The algorithm has both theoretical and practical utility. On the theoretical side, we show that typical Fourier series are much sparser than anticipated in the general case. For Clifford+Pauli circuits with a single Pauli observable, the number of coefficients is upper bounded by \(2^{M}\), and for the worst case behavior expected in practice we find \(\left(\frac{3}{2}\right)^{M}\). We also show, that truncating the Fourier polynomial below the maximum degree gives an approximation that can be quantified by the \(L^{2}\) norm of the loss function and evaluated dynamically, as the algorithm proceeds. The number of terms contributing non-negligibly the functional norm is typically an exponential fraction of all terms, yet still growing exponentially itself. On the practical side, we perform several case studies to probe the structure of Fourier expansions in more detail. In all examples we find that Fourier terms tend to cluster around some mean level, which is exactly \(\frac{M}{2}\) for non-local random circuits, but much smaller for the local circuits with special structure, such as QAOA or HEA, making their Fourier expansion much sparser and easier to compute. Our algorithm is based on a simple recursive expansion of the loss function. A similar approach was described in the context of the Qubit Coupled Cluster Method [25], and in the context of QAOA in Ref. [28]. However, there are some important distinctions with the previous work. First, we identify Clifford+Pauli circuits as the class to which the method is universally applicable, and treat the problem in general terms, as well as establish its direct relation to the Fourier series expansion. More importantly, we demonstrate how to significantly reduce the algorithm cost by pruning some branches of the recursive expansion early, based on filtering by expectation values. While this does not change the large \(M\) asymptotic, for practical cases the difference is essential. For example, for random Clifford+Pauli circuits on \(N=50\) qubits, it allows increasing the depth of circuits that can be handled from \(M=30\) to \(M=80\) without changing the computational budget. Finally, we formulate the problem of computing all non-zero coefficients in the Fourier expansion as an instance of the multivariate boolean quadratic problem, which allows us to argue that our algorithm is likely not far from optimal yet points towards potential improvements. ## II Parameterized Quantum Circuits In this section, we establish some notation, describe basic properties of variational circuits and their loss functions, and discuss how Fourier expansion arises in this context. ### Trigonometric expansion of the unitary We will assume that a unitary matrix \(U(\mathbf{\phi})\) of a parameterized quantum circuit takes the following form: \[U(\mathbf{\phi})=C_{M}P_{M}(\phi_{M})\dots C_{2}P_{2}(\phi_{2})C_{1}P_{1}(\phi_{1} )C_{0}\;. \tag{1}\] Here \(C_{m}\) are constant gates, \(P_{m}(\phi)=e^{-\imath\frac{\phi}{2}G_{m}}\) are single-parameter rotations, and \(\mathbf{\phi}=(\phi_{1},\dots,\phi_{M})\) is a vector of parameters. We will assume that Hermitian generators of the parameterized gates square to identity \(G_{m}^{2}=1\), so that \[P_{m}(\phi)=\mathbbm{1}\cos\frac{\phi}{2}-\imath G_{m}\sin\frac{\phi}{2}\;. 
\tag{2}\] Applying this relation to each parametric gate in the circuit one obtains the following formal trigonometric expansion contain ing \(2^{M}\) terms \[U(\mathbf{\phi})=\sum_{I\in\{0,1\}^{M}}U_{I}t_{I}\left(\frac{\mathbf{\phi}}{2}\right). \tag{3}\] Here \(I=(I_{1},\ldots,I_{M})\) with \(I_{m}\in\{0,1\}\) is a multi-index, \(t_{I}(\mathbf{\phi})\) is a multivariate trigonometric monomial of order \(M\) \[t_{I}(\mathbf{\phi})=\prod_{m=1}^{M}t_{I_{m}}(\phi_{m})\, \tag{4}\] where each term in the product is defined by \[t_{i}(\phi)=\cos^{1-i}\phi\sin^{i}\phi=\begin{cases}\cos\phi,&i=0\\ \sin\phi,&i=1\end{cases}. \tag{5}\] We note that coefficient matrices \(U_{I}\) correspond to the circuit unitary, evaluated at specific values \[U_{I}=U(\mathbf{\phi}=\pi I)=\alpha C_{M}G_{M}^{I_{M}}\ldots C_{1}G_{M}^{I_{1}}C_{ 0}\, \tag{6}\] where \(\alpha\) is a phase factor \(\alpha=(-\imath)^{\sum_{m}I_{m}}\). ### Fourier expansion of the loss function The loss function \(F(\mathbf{\phi})\) of a variational algorithm is defined by the average of some Hermitian operator \(H\), often referred to as _the Hamiltonian_, in the state prepared by the circuit \[F(\mathbf{\phi})=\left\langle 0\,\right|U^{\dagger}(\mathbf{\phi})HU(\mathbf{\phi})\left| \,0\right\rangle. \tag{7}\] Here and in the following \(\left|0\right\rangle=\left|0\right\rangle^{\otimes N}\) is the all zeros state of \(N\) qubits. Substituting expansion (3) into the loss function gives \[F(\mathbf{\phi})=\sum_{IJ}t_{I}\left(\frac{\mathbf{\phi}}{2}\right)t_{J}\left(\frac{ \mathbf{\phi}}{2}\right)\left\langle 0\,\right|U_{I}^{\dagger}HU_{J}\left| \,0\right\rangle. \tag{8}\] In contrast to the expansion of unitary (3), which is homogeneous, expansion of the loss function contains trigonometric monomials of various degrees, see App. A for details. Let us organize the Fourier expansion of the loss function by level \[F(\mathbf{\phi})=\sum_{m=0}^{M}F_{m}(\mathbf{\phi}). \tag{9}\] Each level \(F_{m}(\mathbf{\phi})\) only involves monomials of order \(m\). There are \(\binom{M}{m}\) possible parameter subsets at level \(m\) each giving rise to \(2^{m}\) trigonometric monomials. Hence, the total number of independent coefficients in the Fourier expansion is \[\sum_{m=0}^{M}2^{m}\binom{M}{m}=3^{M}. \tag{10}\] ### Fourier terms from averages Computing the Fourier series for generic loss functions appears to be a formidable task. Indeed, let us note that the constant term \(F_{0}\) in the Fourier expansion can be thought of as the loss function, averaged over all parameters \[F_{0}=\left\langle F(\mathbf{\phi})\right\rangle_{\mathbf{\phi}}:=\frac{1}{(2\pi)^{M }}\int_{0}^{2\pi}\prod_{m=1}^{M}d\phi_{m}\ F(\mathbf{\phi}). \tag{11}\] This relation holds, because all higher levels \(F_{m>0}(\mathbf{\phi})\) in the Fourier series trivially vanish when averaged. Higher level terms can be obtained similarly, see App. A. The average in Eq. (11) can be expressed in a succinct form using orthogonality of trigonometric monomials \(t_{I}\) (4) \[\left\langle t_{I}(\mathbf{\phi})t_{J}(\mathbf{\phi})\right\rangle_{\mathbf{\phi}}=2^{-M} \delta_{IJ}. \tag{12}\] Hence, averaging (8) yields \[F_{0}=\frac{1}{2^{M}}\sum_{I\in\{0,1\}^{M}}\left\langle 0\,\right|U_{I}^{ \dagger}HU_{I}\left|\,0\right\rangle. \tag{13}\] Evaluating this expression explicitly seems to be out of reach for generic circuits. In terms of a classical simulation, computing any single expectation value in Eq. (13) is difficult on its own for a sufficiently large number of qubits. 
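Before specializing to Clifford+Pauli circuits, relation (13) can be checked on a toy example. The following numpy sketch (our illustration, not code from the paper) compares \(F_{0}\) obtained by averaging the loss over a parameter grid with the sum over the \(2^{M}\) sign configurations of Eq. (13), for two qubits, generators \(X\otimes\mathbb{1}\) and \(Z\otimes Z\), no constant gates, and observable \(H=Z\otimes\mathbb{1}\).

```python
# Toy check (ours) of Eq. (13): F_0 as a parameter average of the loss equals the
# average of <0| U_I^dag H U_I |0> over the 2^M products U_I of Pauli generators.
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

G = [np.kron(X, I2), np.kron(Z, Z)]      # Pauli generators G_1, G_2 (no Clifford gates)
H = np.kron(Z, I2)                       # observable
zero = np.zeros(4, dtype=complex)
zero[0] = 1.0

def loss(phis):
    # U = P_2(phi_2) P_1(phi_1) with P_m(phi) = cos(phi/2) 1 - i sin(phi/2) G_m, Eq. (2)
    U = np.eye(4, dtype=complex)
    for g, phi in zip(G, phis):
        U = (np.cos(phi / 2) * np.eye(4) - 1j * np.sin(phi / 2) * g) @ U
    psi = U @ zero
    return float(np.real(psi.conj() @ H @ psi))

# (a) F_0 as the loss averaged over a uniform parameter grid
grid = np.linspace(0, 2 * np.pi, 32, endpoint=False)
f0_grid = np.mean([loss((a, b)) for a in grid for b in grid])

# (b) F_0 from Eq. (13), with U_I = G_2^{I_2} G_1^{I_1} (the phase drops out)
f0_sum = 0.0
for bits in product((0, 1), repeat=len(G)):
    U_I = np.eye(4, dtype=complex)
    for g, b in zip(G, bits):
        if b:
            U_I = g @ U_I
    f0_sum += float(np.real(zero.conj() @ U_I.conj().T @ H @ U_I @ zero))
f0_sum /= 2 ** len(G)

print(f0_grid, f0_sum)   # both vanish here: the loss is cos(phi_1), which averages to zero
```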
Even when the averages can be computed efficiently, either classically or provided an access to a quantum computer, equation (13) still requires summing \(2^{M}\) terms, infeasible for any significant number of parameters \(M\). As we show in the next section, for Clifford+Pauli quantum circuits evaluating \(F_{0}\) and in fact any particular monomial in the Fourier expansion is classically efficient. ## III Clifford+Pauli variational circuits ### Definition and properties Let us first establish some notation relevant for stabilizer circuits. A single-qubit Pauli operator is simply an \(X,Y,Z\) Pauli matrix or an identity, possibly with a phase \(\pm 1,\pm\imath\). An \(n\)-qubit Pauli operator is a tensor product of \(n\) arbitrary single-qubit Pauli operators. Any two Pauli operators \(P_{1}\) and \(P_{2}\) either commute or anti-commute: \(P_{1}P_{2}=\pm P_{2}P_{1}\). Clifford gates \(C\) are operators that transform every Pauli gate \(P\) into some Pauli gate \(P^{\prime}\): \(C^{\dagger}PC=P^{\prime}\). The group of Clifford gates can be generated by the Hadamard gate \(\mathsf{H}\,\mathsf{S}\ =\sqrt{Z}\), and controlled NOT gate. Circuits consisting only of the Clifford gates applied to the stabilizer states (of which \(\left|0\right\rangle\) is an example) can be efficiently simulated classically due to the Gottesman-Knill theorem Gottesman and Knill (1993). We define Clifford+Pauli variational circuits as a subset of variational circuits (1), where generators of parametric gates are Pauli operators and all constant gates are Clifford gates. For clarity of exposition, we also assume that the Hamiltonian \(H\) is a Pauli operator. The case that is most relevant in practice, when the Hamiltonian is a polynomial-sized sum of Pauli operators, can be handled by linearity. We stress that both the Pauli generators and the Hamiltonian are allowed to have arbitrary weight, i.e. be supported on any number of qubits. Note that Pauli rotations with generic angles are not Clifford gates, and hence Clifford+Pauli circuits can not be simulated efficiently using the stabilizer formalism. Clifford+Pauli circuits admit a simple canonical form, where all Clifford gates are eliminated. First, one uses commutation properties of Clifford and Pauli operators to drag all the Clifford gates to the very end of the circuit. Generators of Pauli rotations will generally change during the process. The Clifford gate \(C\) accumulated at the end of the circuit is absorbed into the Hamiltonian \(H\to C^{\dagger}HC\), which remains a Pauli operator. Hence, without loss of generality, we will assume that Clifford+Pauli variational circuit takes the following _Pauli form_ \[U(\mathbf{\phi})=P_{M}(\phi_{M})P_{M-1}(\phi_{M-1})\dots P_{1}(\phi_{1})\;, \tag{14}\] where \(P(\phi)=e^{-\imath\frac{\phi}{2}P}\) for some Pauli string \(P\). We will use notation \((P_{1}\dots P_{M}|H)\) for a Clifford+Pauli circuit in Pauli form. Note that this form is not unique, as Pauli rotations with commuting generators can be swapped. ### Computing averages Now let us revisit the computation of the average loss function (13). Eq. (13) now takes the form \[F_{0}=\frac{1}{2^{M}}\sum_{I\in\{0,1\}^{M}}\left\langle 0\, \middle|\,P_{1}^{I_{1}}\dots P_{M}^{I_{M}}HP_{M}^{I_{M}}\dots P_{1}^{I_{1}} \,\middle|\,0\right\rangle\;. \tag{15}\] We claim that this sum vanishes unless \(H\) commutes with every \(P_{i}\). Indeed, suppose there is some \(P_{m}\) s.t. \(P_{m}H=-HP_{m}\). 
Then, it is straightforward to see, that any two terms in the sum that only differ in the value of \(I_{m}\) give opposite contributions. Hence, for the circuit \((P_{1}\dots P_{M}|H)\) we find \[F_{0}=\begin{cases}\left\langle 0|H|0\right\rangle,&[H,P_{i}]=0\quad\forall i \\ 0,&\text{otherwise}\end{cases}\;. \tag{16}\] Therefore, computing the average loss function for Clifford+Pauli circuits is a trivial task for any number of qubits and any number of parameters. In fact, as we show in the following, this applies to every individual term in the Fourier series. The difficulty of computing the full Fourier expansion then stems solely from the fact that the total number of non-vanishing coefficients can be exponentially large. In the next section, we present an efficient classical algorithm to compute Fourier expansion level by level. ## IV Classical algorithm ### Expansion of the dressed Hamiltonian Introduce the following notation for an operator conjugated by circuit's unitary \[O[\mathbf{\phi}]=U^{\dagger}(\mathbf{\phi})OU(\mathbf{\phi})\;. \tag{17}\] Following the quantum chemistry literature, we will call \(H(\mathbf{\phi})\) the _dressed Hamiltonian_. The loss function is the average of the dressed Hamiltonian in the all-zero state \[F(\mathbf{\phi})=\left\langle 0\,\middle|\,H[\mathbf{\phi}]\,\middle|\,0\right\rangle\;. \tag{18}\] Next, we make the following simple observation: for an arbitrary Pauli string \(O\) it holds \[P(\phi)^{\dagger}OP(\phi)=\begin{cases}O,&[P,O]=0\\ O\cos\phi+\imath PO\sin\phi,&\{P,O\}=0\end{cases}\;, \tag{19}\] i.e. when the conjugating Pauli rotation \(P(\phi)\) commutes with \(O\), it cancels out, while for an anti-commuting Pauli rotation the result can be written as a sum of two Pauli operators. This gives a recurrence procedure to expand the dressed Hamiltonian. Indeed, it follows from (19) that for an arbitrary Pauli string \(O\) \[O[\mathbf{\phi}^{(m)}]=\begin{cases}O[\mathbf{\phi}^{(m-1)}],&[O,P_{m}]=0;\\ O[\mathbf{\phi}^{(m-1)}]\cos\phi_{m}+&\{O,P_{m}\}=0;\\ +\imath(P_{m}O)[\mathbf{\phi}^{(m-1)}]\sin\phi_{m},&\{O,P_{m}\}=0;\end{cases} \tag{20}\] Here \(\mathbf{\phi}^{(m)}\) the subset of the first \(m\leq M\) parameters \(\mathbf{\phi}^{(m)}:=(\phi_{1},\dots,\phi_{m})\) and \(O[\mathbf{\phi}^{(m)}]\) is defined as in (17) with the conjugating unitary \(U(\mathbf{\phi}^{(m)}):=P_{m}(\phi_{m})\dots P_{1}(\phi_{1})\). Repeatedly applying (20) to \(H[\mathbf{\phi}]\equiv H[\mathbf{\phi}^{(M)}]\) represents the dressed Hamiltonian as a sum of Pauli strings multiplied by trigonometric monomials, i.e. gives the Fourier expansion of the dressed Hamiltonian with operator coefficients. The recurrent expansion can be conveniently visualized as a binary tree, see Fig. 1 for an example. Figure 1: A sample diagram representing recursive expansion of a dressed Hamiltonian. Here \(\{P_{3},H\}=\{P_{2},H\}=\{P_{1},P_{3}H\}=\{P_{1},P_{2}H\}=[P_{2},P_{3}H]=[P_{1 },H]=0\). as follows. The nodes, which we refer to as the _computational nodes_, correspond to variational circuits, specified by a list of Pauli generators and an observable \((P_{1}\ldots P_{m}|O)\). If the observable \(O\) anti-commutes with the last Pauli generator \(P_{m}\), the node branches into two \((P_{1}\ldots P_{m}|O)\rightarrow\cos\phi_{m}(P_{1}\ldots P_{m-1}|O)+\imath\sin \phi_{m}(P_{1}\ldots P_{m-1}|P_{m}O)\). For brevity, we omit coefficients at the diagram. Branching increases the Fourier level by one. 
If the last Pauli generator instead commutes with \(O\), it is simply removed \((P_{1}\ldots P_{m}|O)\rightarrow(P_{1}\ldots P_{m-1}|O)\) and the Fourier level remains unchanged. We depict this by horizontal arrows at the diagram. When there are no Pauli generators left, the node contains the final observable encoding a single operator coefficient in the Fourier expansion of the dressed Hamiltonian. The graphical representation makes several distinctive features of Fourier series for Clifford+Pauli circuits manifest. Let \(n(m)\) be the number of resulting Fourier modes at level \(m\). Introduce \[\delta(m)=2^{-m}n(m),\quad\Delta(m)=\sum_{k=0}^{m}\delta(k)\;. \tag{21}\] For any Clifford+Pauli circuit and any Pauli Hamiltonian it holds \[\Delta(M)=\sum_{m=0}^{M}2^{-m}n(m)=1\;, \tag{22}\] i.e. the weighted sum of populations at all Fourier levels is an invariant. This implies certain constraints on the distribution of Fourier terms. For example, the maximum number of Fourier terms \(\sum_{m}n(m)\) is upper bounded by \(2^{M}\) (when the last level is fully populated), cf. the bound for generic circuits \(3^{M}\) (10). Importantly, the presence of any single Fourier term at level \(m<M\) reduces the maximum possible amount of terms at other levels. For instance, if \(n(0)=1\), i.e. \(F_{0}\neq 0\), all other Fourier terms vanish. For \(N\)-qubit circuits, processing each computational node only involves multiplying Pauli strings of length \(N\), and hence has complexity \(\mathcal{O}(N)\). Since the total number of nodes is at most \(2^{M}\), the time complexity of the algorithm can be bounded by \(\mathcal{O}(N2^{M})\). In the following, we will give more detailed estimates for the expected number of computational nodes and complexity of the algorithm. ### Accounting for the expectation values So far, we discussed the expansion of the dressed Hamiltonian. In turn, the loss function is given by its expectation value in the all-zero state (18). A likely scenario is that the majority of the final Pauli observables have vanishing expectations, and hence do not contribute to the loss function. This observation allows significantly increasing the efficiency of the computation by pruning unfit branches in advance. Let \(\mathbb{F}_{2}^{n}\) be a vector space of binary strings of length \(n\). For \(k=(k_{1},\ldots,k_{n})\in\mathbb{F}_{2}^{n}\) denote \(\mathbf{Z}(k)=\bigotimes_{i=1}^{n}Z^{k_{i}}\) (\(\mathbf{X}(k)\) is defined similarly). For any \(n\)-qubit Pauli operator \(P\) one can define two vectors \(P_{Z},P_{X}\in\mathbb{F}_{2}^{n}\) such that \[P=\alpha\;\mathbf{Z}(P_{Z})\mathbf{X}(P_{X})\;, \tag{23}\] where \(\alpha\) is a phase factor. One can think of \(P_{Z}\) and \(P_{X}\) as coordinates of \(P\) in the basis of Pauli \(Z\) and \(X\) strings, respectively. With the notation in place, we can explain how expectation values can be taken into account during the expansion of the dressed Hamiltonian. A Pauli string \(P\) has a non-zero expectation value \(\langle 0\,|\,P\,|\,0\rangle\neq 0\) iff \(P_{X}=(0,\ldots,0)\). First assume, for simplicity, that \(X\)-vectors of the first \(N\) Pauli generators \((P_{1})_{X},\ldots,(P_{N})_{X}\) are linearly independent and constitute a basis in \(\mathbb{F}_{2}^{N}\). This implies that for every \(O\), there is a unique vector \(k\in\mathbb{F}_{2}^{N}\) such that \[O_{X}=k_{1}(P_{1})_{X}+\ldots+k_{N}(P_{N})_{X}\;. 
\tag{24}\] Therefore, among all \(2^{N}\) possible observables of the form \(P_{1}^{t_{1}}\ldots P_{N}^{t_{N}}O\), that can be produced during recursive expansion of \((P_{1}\ldots P_{N}|O)\), only a single one with \(t_{i}=k_{i}\) can yield a non-zero expectation value (all other terms will have a non-zero \(X\)-component). Thus, instead of generating the full recursive expansion of \((P_{1}\ldots P_{N}|O)\), which can contain up to \(2^{N}\) nodes, we can first find \(k\) from (24), and then check if this operator actually appears in the dressed Hamiltonian, i.e. it is compatible with the branching rules. This yields an exponential saving for large \(N\). Now let us lift the restriction of the first \(N\) Pauli generators forming the basis. The necessary condition for \((P_{1}\ldots P_{m}|O)\) to have a non-zero expectation is that \(O_{X}\) is contained in the span of \((P_{1})_{X},\ldots,(P_{m})_{X}\). Therefore, for each newly generated computational node, we can test if this condition is satisfied. If it is not, all final observables stemming from the expansion of this node have zero expectation values, and the node can be disregarded. As discussed in App.B, there is some room for further optimization based on the freedom to permute commuting Pauli generators. Also, at this point, we would like to spell out explicitly an elementary observation about the Fourier series of Clifford+Pauli circuits. For a single Pauli observable, all non-zero coefficients are given by averages of Pauli strings, and hence equal to \(\pm 1\). ### Truncated Fourier series as an approximation Having many terms at low Fourier levels appears to be convenient, because this partially reduces a proliferation of coefficients at subsequent levels. This is further reinforced by the observation that each individual term at a lower level contributes exponentially more to the loss function than a term at a higher level. Intuitively, this is because the average absolute value of a trigonometric monomial of order \(m\) is \((\pi/2)^{-m}\) and decays exponentially with degree \(m\). At the same time, there can be exponentially more terms at higher levels. We can quantify this trade-off by evaluating the \(L^{2}\) norm of the loss function. From orthogonality of trigonometric monomials (12) it follows \[||F||^{2}:=\big{\langle}\,\big{|}\,F(\mathbf{\phi})\,\big{|}\,^{2}\big{\rangle}_{\mathbf{ \phi}}=\sum_{m=0}^{M}2^{-m}l(m)\;, \tag{25}\] where \(l(m)\) is the number of non-zero Fourier terms in the expansion of the loss function at level \(m\), which is upper bounded by the number of non-zero terms in the expansion of the dressed Hamiltonian (\(l(m)\leq n(m)\)). Note that \(n(m)-l(m)\) is the number of operators in the dressed Hamiltonian at level \(m\) with zero expectation value (i.e. with non-trivial \(X\) coordinate). Using (22), we can then bound the \(L^{2}\) norm of the loss function: \[||F||^{2}\leq 1\;. \tag{26}\] Let \(F^{(m)}(\mathbf{\phi})\) denote the Fourier series truncated to the first \(m\) levels. Then, \[||F^{(m)}-F||^{2}\leq\sum_{k=m+1}^{M}2^{-k}l(k)\leq 1-\Delta(m)\;. \tag{27}\] If \(\Delta(m)\) is close to 1, i.e. sufficiently many terms are concentrated up to level \(m\), the truncated Fourier series \(F^{(m)}(\mathbf{\phi})\) gives a good approximation to the full loss function. Note that our recursive expansion generates \(F^{(m)}(\mathbf{\phi})\) level by level, so the quality of the approximation can be gauged dynamically, and the computation stopped when the necessary accuracy is reached. 
We need to mention two caveats related to the approximation result stated. First, while closeness in \(L^{2}\) norm guarantees good approximation for most parameter configurations, it does not translate directly into point-wise convergence. Indeed, while the average absolute value of higher-level monomials (4) is exponentially suppressed, their maximum values are independent of the order (\(\max_{\mathbf{\phi}}|t_{I}(\mathbf{\phi})|=1\)). Second, the bound (27) may be too weak in practice, as it effectively assumes that all the final observables in the dressed Hamiltonian expansion above level \(m\) have non-zero expectation values. In practice, we expect that only an exponentially small fraction of observables contributes to the loss function norm. Properly taking this into account can significantly strengthen the bound, but requires accounting for the structure of a particular circuit at hand. We illustrate this in a random circuit model discussed in Sec. V. ### Is there a more efficient algorithm? As sketched in Fig. 1(a), our discussion features four different scales. The largest scale (I) is set by the number of terms in the expansion of the dressed Hamiltonian. It depends only on the structure of the circuit and the Hamiltonian, and can contain up to \(2^{M}\) terms. Note that the total number of nodes in the computational tree Fig. 1 can only exceed the number of final observables by a constant factor, so the recursive expansion algorithm is optimal for computing the Fourier series of the dressed Hamiltonian. Another scale (III) corresponds to the number of non-zero terms in the Fourier expansion of the loss function. It quantifies the very complexity of describing the loss function by its Fourier series, and implies a limit to when such a description can be practical. Also, as discussed in Sec.(IV.3), the truncated Fourier series can furnish a good approximation to the full loss function, while containing only a tiny fraction of all non-zero terms. Hence, we associate a separate scale (IV) to it. The most relevant scale in practice, however, is set the by number of computation nodes (II). It quantifies the complexity of the algorithm. Without accounting for the expectation values, it simply coincides with the number of terms in the dressed Hamiltonian. In Sec. IV.2 we explained how to prune the branches of the dressed Hamiltonian expansion, with the expected saving being exponential in the number of qubits. Still, in general, this leaves a large gap between the number of computational nodes and the number of non-zero terms in the loss function. Indeed, assume for simplicity that the first \(N\) Pauli generators span an \(X\)-basis, and consider a computational node \((P_{1},\ldots,P_{N}|O)\). For generic \(P_{i}\) and \(O\) this node is exponentially unlikely to make a non-zero contribution to the loss function. Indeed, there is a unique combination \(P_{1}^{k_{1}}\ldots P_{N}^{k_{N}}O\) that has a non-zero expectation value in the all Figure 2: (a) A sketch of the hierarchy of scales in the problem. (b) A sample computational tree taking into account different properties of nodes. Dashed arrows lead to nodes \((P_{1}\ldots P_{m}|O)\) for which \(O_{X}\) does not lie in the span of \((P_{i})_{X}\). These nodes do not contribute to the loss function and are not actually generated by the algorithm. The number of remaining nodes quantifies the complexity of the algorithm. 
Only two final observables with non-vanishing expectation values are present, and the one at the second level contributes most to the loss function norm. zero state, but this very combination is unlikely to satisfy all the branching rules, and hence actually appear in the recursive expansion. Hence, most nodes of the type \((P_{1},\ldots,P_{N}|O)\) do not contribute to the loss function and are not necessary to generate in the first place. Can a more efficient pruning algorithm be developed? Let us formalize the question. For Pauli strings \(P_{i},P_{j}\), set \(\left\langle P_{i},P_{j}\right\rangle=1\) if they anti-commute and \(\left\langle P_{i},P_{j}\right\rangle=0\) otherwise. Note that for any three Pauli strings \(P_{i},P_{j},P_{k}\) it holds that \(\left\langle P_{i},P_{j}P_{k}\right\rangle=\left\langle P_{i},P_{j}\right\rangle +\left\langle P_{i},P_{k}\right\rangle\) (here and in the following \((\bmod 2)\) is implied). All possible final observables in the expansion of the dressed Hamiltonian are of the form \[O(k)=P_{1}^{k_{1}}\ldots P_{M}^{k_{M}}H, \tag{28}\] with some \(k\in\mathbb{F}_{2}^{M}\). In order for \(O(k)\) to actually appear in the set of final observables, \(k\) has to be consistent with the branching rules. If \(k_{i}=1\), the Pauli string \(P_{i}\) must anti-commute with \(P_{i+1}^{k_{i}+1}\ldots P_{M}^{k_{M}}H\), while \(k_{i}=0\) implies no constraints. These conditions can be expressed as \(M\) equations (\(i=1,\ldots,M\)) \[k_{i}=k_{i}\left\langle P_{i},H\right\rangle+\sum_{j=i+1}^{M}k_{i}k_{j}\left \langle P_{i},P_{j}\right\rangle\;. \tag{29}\] The condition for \(O(k)\) to have a non-zero expectation value is \[H_{X}=\sum_{i=1}^{M}k_{i}(P_{i})_{X}=0\;. \tag{30}\] This relation contains \(N\) constraints for an \(N\)-qubit problem. Together, branching constraints (29) and \(X\)-constraints (30) present an instance of a boolean multivariate quadratic problem (Boolean MQ), which is known to be NP-hard. State-of-the-art algorithms [29; 30] have worst-case time complexities around \(2^{0.69M}\) to find a solution or prove one does not exist. Due to the recurrent structure of equations (29), our Boolean MQ instances are significantly simpler than the general case. As shown in the next section, for random circuits, which are expected to capture the worst case behavior in practice, time complexity around \(2^{0.59M-N}\) is sufficient to find all solutions. Thus, while generic algorithms for the Boolean MQ problem are unlikely to be useful directly, there is a possibility that more efficient pruning techniques can be adopted in our scheme, narrowing the gap between the number of computational nodes and non-zero coefficients in the loss function. ## V Case studies So far, we discussed general properties of the Fourier expansion for Clifford+Pauli circuits. In this section, we consider several specific examples that showcase how the expansions are structured in practice. We will both make analytic estimates and put the classical algorithm to work in numeric simulations. ### Random circuits We first study the case where all the Pauli generators, as well as the Hamiltonian, are random Pauli operators with the support on all of the qubits. In this setup, it is simple to give probabilistic estimates for the expected distribution of Fourier terms in the loss function. In fact, the assumption that the Pauli generators are random is not necessary as long as the observable is random. 
Therefore, we expect this behavior to capture well the asymptotic limit of most sufficiently deep circuits. Indeed, even if the original Hamiltonian is local, as we go down the computational tree (see Fig. 1), the observables at the intermediate computational nodes become ever more scrambled, and eventually behave as random Pauli operators. The argument is not rigorous, of course, and circuits that do not confirm to this pattern Figure 3: (a) Level distribution of the number of terms \(l(m)\) and norm \(\nu(m)\) (both normalized) in Fourier expansion of the dressed Hamiltonian for random circuits. Scatter plots are mean values computed from the simulations, filled areas quantify standard deviations. Solid curves are theoretical predictions. (b) Complexity of the algorithm for random Pauli circuits as a function of the number of qubits \(N\) with depth \(M=N/\log\frac{3}{2}\). Details of numerical experiments are discussed in App. C.1. can be constructed. Nevertheless, the behavior of random circuits should give a useful reference point for generic circuits. _Distribution of terms in the dressed Hamiltonian_. We will first look at the coarse-grained characteristics of the dressed Hamiltonian expansion, such as the number of non-zero terms at level \(m\), denoted by \(n_{M}(m)\), and the total number of terms \(n_{M}=\sum_{m}n_{M}(m)\) (here we add a subscript \(M\) to emphasize dependence on the total number of parameterized gates). When all \(P_{m}\) and \(H\) are random, the probability of branching at each computational node is \(\frac{1}{2}\). Therefore, on average, each iteration of the algorithm increases the total number of nodes \(n_{M}\) by a factor of \(\frac{3}{2}\), leading to \[n_{M}=\left(\frac{3}{2}\right)^{M}. \tag{31}\] The same reasoning applies to the number of non-zero computational nodes at each level \(n_{M}(m)\), which hence satisfies the recurrence relation \(n_{M+1}(m)=\frac{1}{2}n_{M}(m)+n_{M}(m-1)\). Solving it yields \[n_{M}(m)=2^{m-M}\binom{M}{m}. \tag{32}\] One can check that \(\sum_{m=0}^{M}n_{M}(m)=n_{M}\). _Distribution of terms in the loss function_. So far we discussed the distribution of Fourier terms in the dressed Hamiltonian, and now we turn to the loss function. Since every final observable of the dressed Hamiltonian is again a random Pauli, it has \(2^{-N}\) probability of having non-zero expectation value. Therefore, the distribution of Fourier terms by level \(l_{M}(m)\) is simply \[l_{M}(m)=2^{-N}n_{M}(m). \tag{33}\] The expected number of all non-zero terms in the loss function is \[l_{M}=2^{-N}\left(\frac{3}{2}\right)^{M}\approx 2^{0.59M-N}. \tag{34}\] Now, we recall that each term at level \(m\) contributes exactly \(2^{-m}\) to the \(L^{2}\) norm. From the distribution of Fourier coefficients by level, we can derive the distribution of norm by level \(\nu_{M}(m)=2^{-m}l_{M}(m)\), explicitly given by \[\nu_{M}(m)=2^{-N-M}\binom{M}{m}. \tag{35}\] Note that \(\sum_{m=0}^{M}\nu_{M}(m)=2^{-N}\). At Fig. 2(a) we plot results of numerical simulations for random circuits, which convincingly confirm our estimates. _Accuracy of a truncated expansion_. We can now address the question of how many terms need to be included in the loss function to give a good \(L^{2}\) approximation. Since binomial distribution (35) is symmetric around \(m=\frac{M}{2}\), including Fourier terms up to level \(\frac{M}{2}\) will on average account for \(50\%\) of the norm. 
The total number of nodes at the included levels can be estimated as \(l_{M}(\frac{M}{2})\sim 2^{\frac{M}{2}-N}\). While in the large \(M\) limit this is an exponentially small fraction of all terms (34), the number of relevant terms itself is still an exponential in \(M\). _Complexity of the algorithm and simulation limits._ Exponential increase in the number of relevant terms with \(M\) clearly limits the depth of the circuits we can address. Importantly, the property that most final observables have zero expectation values in turn limits the number of qubits \(N\) we can meaningfully simulate. While in principle, the number of qubits is only limited by the simulation cost of Clifford circuits, to yield a non-zero loss function the number of Pauli rotation gates \(M\) needs to increase with \(N\). Requiring the number of non-zero terms in the loss function (34) to stay of order \(\mathcal{O}(1)\) as we increase \(N\) requires to scale the depth of the circuit as \(M\sim N/\log_{2}\frac{3}{2}\). Thanks to the subroutine filtering by the expectation value, the algorithm only branches during processing of the first \(M-N\) gates, leading to the number of computational nodes around \(\sim\left(\frac{3}{2}\right)^{M-N}\simeq 2^{0.59(M-N)}\). Therefore, the number of computational nodes generated per non-zero term in the loss function grows with the number of qubit roughly as \[\sim 2^{0.41N}\approx 10^{N/8} \tag{36}\] As reported in Fig.2(b), this scaling is well confirmed numerically. With a computational budget to process \(10^{6}\) nodes, which takes minutes with a basic implementation run on a laptop, the Fourier expansion of the loss function for a \(50\)-qubit random circuit with \(85\) Pauli rotation gates can be computed exactly. With resources to process \(10^{12}\) nodes, which should be feasible with an efficient implementation on a computational cluster, \(100\)-qubit circuits with about \(170\) Pauli gates can be handled. ### Qaoa Variational circuits appearing in practice are far from the random Pauli model described above. Instead, they typically only involve local gates and observables. If two local Pauli strings are supported on different subsets of qubits they necessarily commute, and hence the probability of two generic local operators anti-commuting is much smaller than \(\frac{1}{2}\). Although we expect the random Pauli model to describe well \begin{table} \begin{tabular}{c c c c c} \hline \hline degree\(\backslash\)level 1 & 2 & 3 & 4 & \\ \hline 2 & \(10^{0.7\pm 0.08}\) & \(10^{1.5\pm 0.9}\) & \(10^{3.4\pm 1.7}\) & \(10^{4.7\pm 2.6}\) \\ 3 & \(10^{1.1\pm 0.1}\) & \(10^{3.6\pm 0.4}\) & \(10^{8.6\pm 0.9}\) & \(10^{17.5\pm 2.2}\) \\ 4 & \(10^{1.4\pm 0.3}\) & \(10^{5.6\pm 0.5}\) & \(10^{16.4\pm 1.3}\) & \(10^{36.6\pm 3.0}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Estimated number of computational nodes to exactly compute Fourier expansion for QAOA circuits of varying degree and level. Statistics is collected over different graphs and observables, see App. C.2 for details. the large depth asymptotic, circuits of shorter depth may behave quite differently. Indeed, until the observables at computational nodes become sufficiently scrambled, the branching probability is much smaller than \(\frac{1}{2}\), and the complexity growth is much slower. This allows to compute Fourier expansions for circuits with much higher depths than anticipated for the random model. 
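For reference, the random-model estimates of the previous subsection translate into concrete numbers; the short helper below (ours) simply evaluates Eq. (31), Eq. (34), and the \(\left(\frac{3}{2}\right)^{M-N}\) node count quoted above, for the two budgets mentioned in the text.

```python
# Illustrative evaluation (ours) of the random-circuit estimates: expected number
# of dressed-Hamiltonian terms (3/2)^M, of non-zero loss terms 2^{-N} (3/2)^M,
# and of computational nodes ~ (3/2)^{M-N}, since with the expectation-value
# filter branching effectively happens only for the first M-N gates processed.
def random_circuit_estimates(N, M):
    n_total = 1.5 ** M                # Eq. (31)
    l_total = 2.0 ** (-N) * 1.5 ** M  # Eq. (34)
    nodes = 1.5 ** (M - N)
    return n_total, l_total, nodes

for N, M in [(50, 85), (100, 170)]:
    n, l, nodes = random_circuit_estimates(N, M)
    print(N, M, f"{n:.1e} terms, {l:.1f} loss terms, {nodes:.1e} nodes")
# -> roughly 1e6 nodes for (N, M) = (50, 85) and 2e12 for (100, 170)
```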
As a case study, we consider instances of the Quantum Approximate Optimization Algorithm (QAOA) for the MaxCut problem on regular graphs [14], which is the most studied approach to combinatorial optimization. Two key characteristics of a particular QAOA circuit is the degree of a graph \(d\) and the number of layers \(p\), see App. C.2 for details. Due to locality, shallow instances of QAOA allow for efficient classical computation of the loss function independently of the number of qubits. To every observable one associates a reverse light cone, containing all qubits that are connected to the observable by the entangling gates. For graphs of bounded degree, the size of the reverse light cone stays constant in the large \(N\) limit and classical computation of the loss function never involves simulating quantum circuits larger than that size. For QAOA on a graph of degree \(d\) with \(p\) layers the size of the reverse light cone is bounded by [14] \[N_{c}=2\frac{(d-1)^{p+1}-1}{d-2}. \tag{37}\] Note that \(N_{c}\) scales exponentially with the level \(p\), so that large \(p\) regime is the most difficult to simulate. Large \(p\) also implies large number of parametric gates, which is the key limiting factor for our algorithm. At the same time, while the locality of gates is not necessary in our approach, it certainly helps. Moreover, the benefits of locality are incorporated automatically. Indeed, the Pauli generators supported outside the reverse light cone of the Hamiltonian commute with all observables at the computational nodes and are trivially eliminated. Using a basic Monte-Carlo sampling, we estimate the expected complexity of our algorithm to compute the full Fourier expansion of the loss function for several values of \(d\) and \(p\). Results are reported in Tab. 1, details of numerical simulations are specified in App.C.2. Though estimates are very crude, they provide useful anchors for expected complexities. We stress here that the special structure of QAOA circuits and observables makes it possible to handle much larger number of parameters compared to random circuits. Indeed, for a \(d\)-regular graph with \(N\) nodes there are exactly \(|E|=Nd/2\) edges, so the number of parametric gates in our simulations is given by \[M=p(N_{c}+|E|)=pN_{c}\left(\frac{d}{2}+1\right). \tag{38}\] For instance, for \(p=3,d=3\) this gives \(N_{c}=30,M=225\), and a random circuit in this setup would require processing \(\sim\left(\frac{3}{2}\right)^{M-N_{c}}\simeq 2.18\times 10^{34}\) computational nodes, clearly an unmanageable amount. In contrast, for the actual QAOA instances, we find about \(10^{9}\) computational nodes to be sufficient. The distribution of Fourier terms in QAOA appears to be somewhat irregular, although with clustering characteristics similar to the random circuits of equivalent complexity, see App. C.2 for details. ### Hardware-efficient circuits One frequently studied design of variational algorithms is the hardware-efficient form [15], where the circuit is constructed to give the maximal expressivity with limited depth, efficiently using native hardware gates. We consider hardware-efficient circuits with brick wall architecture, where each block is built of an entangling CZ gate and four single-qubit Pauli rotations, and observables of weight \(2\), see App. C.3 for details. In this setting, we estimate the complexity of our algorithm to exactly compute the loss function and report the results in Fig. 4. 
The takeaway is similar to the QAOA case, locality of the circuit and the observable strongly reduces the number of Fourier terms compared to the random case, so the computational budget needed to process random circuits with \(M\sim 180\) should be sufficient to fully characterize the Fourier series for circuits with \(M\sim 600\) parametric gates in the current local setup. ## VI Discussion and outlook We looked at some qualitative and quantitative aspects of Fourier series expansion of VQA loss functions. The main observation is that restricting to the class of Clifford+Pauli circuits allows giving a much finer picture than possible for generic VQA. We presented an efficient classical algorithm for computing the Fourier expansion level by level, with the worst case complexity bounded by \(\mathcal{O}(N2^{M})\) for a single Pauli observable. We estimated the complexity of the algorithm and Figure 4: Estimated algorithm complexity for computing the full Fourier series of a two-local hardware efficient circuit with 50 qubits and Pauli Hamiltonian of weight two, as a function of the number of parametric gates. Details are specified in App. C.3. characterized the distribution of Fourier terms in several examples, including random non-local Clifford+Pauli circuits as well as more conventional local circuits such as QAOA and HEA. We anticipate our findings to provide useful tools for further study of the interplay between the properties of VQA and the structure of their Fourier series expansion. One major issue facing VQA is trainability, where two crucial obstacles are barren plateaus [31; 32] and local minimums [13; 33]. Although many ansatz structures, initialization and optimization heuristics have been proposed to mitigate these problems (we refer to [34] for a nice summary, and to [35; 36] for the discussion of the over-parameterized setting), they persist in many practical scenarios. Interestingly, the Fourier series contains a wealth of global information about the loss landscape, and may hence give a useful perspective on these problems. For instance, we note that the variance of the loss function gradient, frequently used to diagnose barren plateaus, is naturally related to the Fourier coefficients \[\mathrm{var}_{\boldsymbol{\phi}}\left(\nabla F(\boldsymbol{\phi}) \right)^{2}=-\int d\boldsymbol{\phi}\;F(\boldsymbol{\phi})\Delta F(\boldsymbol {\phi})=\sum_{m=0}^{M}m\,l(m)\;. \tag{39}\] (To arrive at this expression first integrate by parts and then use the fact that for every trigonometric monomial \(t_{m}(\boldsymbol{\phi})\) of degree \(m\) it holds \(\Delta t_{m}(\boldsymbol{\phi})=-mt_{m}(\boldsymbol{\phi})\).) Therefore, already coarse-grained characteristics of the Fourier expansion, such as the distribution of terms by level \(l(m)\), may provide a very useful input. Note that this distribution can be estimated by a Monte-Carlo sampling, even when the full computation of the loss function may be out of reach. Also, it would be very interesting to evaluate the role of lower vs higher order Fourier modes in shaping the loss landscape. Superficially, since the higher order modes are more oscillatory, yet typically contribute less to the \(L^{2}\) norm of the loss function, they may be a justified suspect in creating the majority of spurious local minima. Also, it looks promising to explore the potential of our algorithm for classical computation of quantum mean values. 
For circuits with local gates and constant depth, classical algorithms exist that scale favorably with the number of qubits [37]. While gate locality helps in practice, our algorithm is based on the properties of stabilizer circuits it is not principally constrained by it. Neither it is constrained by the degree of entanglement [38; 39] or circuit's graph treewidth [40], which can be a bottleneck for simulators based on tensor networks. Apparently, the flavor of our approach is most similar to simulation schemes for circuits dominated by Clifford gates. The parameter ranges that can be handled look similar, involving 50-100 qubits and dozens to hundreds of non-Clifford gates [41; 42], but detailed benchmarks are necessary to make a reasonable comparison. We should also stress that our approach is neither a simulation algorithm (in a weak or strong sense) nor an algorithm exclusively computing mean values. Instead, it returns the full (or truncated) Fourier series of a VQA, and hence has an interesting and novel character. **Acknowledgments**. We thank Vsevolod Yashin and V Vijendran for useful discussions. We thank the Priority 2030 program at the National University of Science and Technology "MISIS" under the project K1-2022-027. ## Appendix A Structure of a generic Fourier expansion ### Level expansion and number of terms For the sake of clarity, here we give a more detailed description of the general structure of Fourier expansion for the loss function introduced in Sec. II. Let us first illustrate trigonometric expansion of the unitary (3) for the case with \(M=2\) angles \[U(\phi_{1},\phi_{2})=U_{00}\cos\frac{\phi_{1}}{2}\cos\frac{\phi_{2}}{2}+U_{01 }\cos\frac{\phi_{1}}{2}\sin\frac{\phi_{2}}{2}+U_{10}\sin\frac{\phi_{1}}{2} \cos\frac{\phi_{2}}{2}+U_{11}\sin\frac{\phi_{1}}{2}\sin\frac{\phi_{2}}{2}\;. \tag{30}\] Note that trigonometric monomials here have the same degree and period \(4\pi\). Substituting such expansions into the definition of the loss function (7) leads to the Fourier series expansion of the loss function. Applying identities \(\cos^{2}\frac{\phi}{2}=\frac{1+\cos\phi}{2},\sin^{2}\frac{\phi}{2}=\frac{1- \cos\phi}{2},\cos\frac{\phi}{2}\sin\frac{\phi}{2}=\frac{\sin\phi}{2}\) leads to an expression involving trigonometric monomials of a smaller period \(2\pi\) and degrees up to \(M\). For instance, \[F(\phi_{1},\phi_{2})=\left\langle 0\,\right|U^{\dagger}(\phi_{1},\phi_{2})HU( \phi_{1},\phi_{2})\left|\,0\right\rangle=\frac{1}{4}\left\langle 0\,\right|U^{ \dagger}_{00}HU_{00}\left|\,0\right\rangle(1+\cos\phi_{1}+\cos\phi_{2}+\cos \phi_{1}\cos\phi_{2})+\ldots \tag{31}\] and we wrote explicitly only the contribution from the first term. More generally, terms in the expansion (9) assume the following form \[F_{0}=\text{const},\quad F_{1}(\boldsymbol{\phi})=\sum_{i=1}^{M}F_{i}(\phi_{i} ),\quad F_{2}(\boldsymbol{\phi})=\sum_{i,j=1}^{M}F_{ij}(\phi_{i},\phi_{j}), \quad F_{3}(\boldsymbol{\phi})=\sum_{i,j,k=1}^{M}F_{ijk}(\phi_{i},\phi_{j}, \phi_{k}),\quad\ldots \tag{32}\] Here \[F_{i}(\phi_{i})=A_{i}\cos\phi_{i}+B_{i}\sin\phi_{i}\] \[F_{ij}(\phi_{i},\phi_{j})=A_{ij}\cos\phi_{i}\cos\phi_{j}+B_{ij} \cos\phi_{i}\sin\phi_{j}+C_{ij}\sin\phi_{i}\cos\phi_{j}+D_{ij}\sin\phi_{i}\sin \phi_{j} \tag{10}\] \[\ldots\] At each level \(m\) there are \(\binom{M}{m}\) subsets of parameters, enumerating possible indices of homogeneous polynomials \(F_{i_{1}\ldots i_{m}}\). To define the polynomial for each parameter configuration requires specifying \(2^{m}\) coefficients. 
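A quick numerical check of this count may be useful before the total is stated below: level \(m\) contributes \(\binom{M}{m}2^{m}\) coefficients, and summing over all levels gives \((1+2)^{M}=3^{M}\) by the binomial theorem. The snippet and its function name are ours, purely for illustration.

```python
from math import comb

def coefficients_per_level(m_gates: int, level: int) -> int:
    # C(M, m) parameter subsets at level m, each requiring 2^m coefficients.
    return comb(m_gates, level) * 2 ** level

M = 5
per_level = [coefficients_per_level(M, m) for m in range(M + 1)]
print(per_level, sum(per_level), 3 ** M)
# [1, 10, 40, 80, 80, 32]  243  243  -- the total equals 3^M.
```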
This leads to the counting (10) for the total number of coefficients in the Fourier series. ### Coefficients from averages Trivially \(F_{0}=\left\langle F(\mathbf{\phi})\right\rangle_{\mathbf{\phi}}\). Higher order terms can be obtained similarly. For instance, \(F_{i}(\phi_{i})=\left\langle F(\mathbf{\phi})-F_{0}\right\rangle_{\mathbf{\phi}\neq \phi_{i}}\), i.e. averaging over all angles except \(\phi_{i}\) only leaves first-order monomials involving a given angle \(\phi_{i}\). Higher order terms can be found recursively. Note that in this prescription, to compute terms at level \(m\) requires to first compute and subtract the contribution of all levels below \(m\). ## Appendix B Optimizing Pauli order A Pauli form of a Clifford+Pauli circuit is not unique, because adjacent Pauli rotations with commuting generators can be swapped. In principle, the number of computational nodes may be sensitive to this ordering. We briefly mention two possible optimizations along these lines. _Delayed branching_. The first strategy may be to delay branching as long as possible, by swapping the anti-commuting Pauli to the left. For illustration, consider a Pauli circuit \((P_{1}\ldots P_{N}P_{1}^{\prime}\ldots P_{N}^{\prime}|O)\) where all Pauli generators mutually commute, and in addition \([P_{i},O]=0\). Generators \(P_{i}^{\prime}\) may not commute with \(O\). While processing operators \(P_{i}^{\prime}\) up to \(2^{N}\) nodes may be generated. After that, assuming that \(P_{i}\) span an \(X\)-basis, no new nodes will be produced. Had we started with the circuit \((P_{1}^{\prime}\ldots P_{N}^{\prime}P_{1}\ldots P_{N}|O)\) instead, which is equivalent by assumption, generators \(P_{i}\) would be eliminated right away and no branching would be required (assuming \(P_{i}^{\prime}\) also span an \(X\)-basis). _Early pruning_. The freedom to swap commuting Pauli generators can also be used to enforce early pruning. As an illustration, consider circuit \((PP_{1}^{\prime}\ldots P_{m}^{\prime}|O)\), where \(P\) commutes with all \(P_{i}^{\prime}\) and, moreover, \(P\) is an independent generator, in the sense that \(P_{X}\) does not lie in the span of \((P_{i}^{\prime})_{X}\). If we process the circuit directly, up to \(2^{m}\) computational nodes are generated and then tested against compatibility with the last generator \(P\). However, in the equivalent setting \((P_{1}^{\prime}\ldots P_{m}^{\prime}P|O)\) compatibility with \(P\) is tested right away, and on average half of observables \(O\) will be pruned leading on average to \(2^{m-1}\) computation nodes. If there are \(n\) generators of \(P\)-type, savings up to \(2^{-n}\) can be expected. Note that both optimizations rely on the presence of large streaks of commuting Pauli generators. Therefore, for the non-local random Pauli circuits very little is to be gained, while, for structured local circuits (e.g. QAOA) the savings might be substantial. ## Appendix C Details of numerical computations An implementation of the algorithm and the data presented in the paper are available at GitHub repository [43]. ### Random circuits Statistics in Fig. 3a is collected from 20 random non-local Pauli circuits with \(N=30\) qubits and depth \(M=25\). Note that the distribution of norm \(\nu(m)\) is not an independent characteristic, but is computed from the distribution of nodes \(l(m)\) according to \(\nu(m)=2^{-m}l(m)\). In Fig. 
5a we also plot the distribution of Fourier terms for random circuits, where only the observable is non-local, while the circuit consists of random Pauli exponentials of weight \(2\). As expected, the average values are the same as for the non-local random circuits, but the fluctuations around the mean are much higher. ### Qaoa Given a graph \(G\) with edges \(E_{ij}\), a single layer of the QAOA circuit consists of two-qubit Pauli rotation gates \(Z_{i}Z_{j}(\gamma_{ij})=\exp(-\imath Z_{i}Z_{j}\gamma_{ij}/2)\) (here \(Z_{i}\) stands for a Pauli string with \(Z\) on \(i\)-th position and identities on all others) for each edge \(E_{ij}\) followed by the sequence of single-qubit \(X_{i}(\beta_{i})\) gates placed on every qubit, see Fig. 5(a) for an example. Note that in the standard formulation of QAOA, \(\gamma_{ij}=\gamma,\beta_{i}=\beta\) i.e. all \(ZZ\) gates and all \(X\) gates have the same parameters within each layer. However, taking this correlation into account does not simplify our analysis and we will not impose it. Single layer repeated \(p\) times gives an instance of QAOA with \(p\) layers. The Hamiltonian is given by the sum of all \(Z\)-Pauli generators \(H=\sum_{E_{ij}}Z_{i}Z_{j}\). To estimate the complexity of computing the Fourier series for instances of QAOA of degree \(d\) with \(p\) layers, it is sufficient to consider circuits of size \(N_{c}\) (37). Statistics presented in Tab. 1 was collected in the following way. For each configuration \((d,p)\), we generate 20 random regular graphs of degree \(d\) and for each graph choose a single observable \(Z_{i}Z_{j}\) at random. Then for each circuit-observable pair we estimate the complexity of the algorithm using the Monte-Carlo technique C4 with \(10^{4}\) samples. Fluctuations are significant both due to inaccuracies of the Monte-Carlo sampling, but more importantly, due to inhomogeneous nature of data gathered over different graphs and observables. To represent large fluctuations more meaningfully, we compute average values and deviations in the log scale, i.e. for the exponents of the expected number of computations nodes. Hence the format used in Tab. 1. We stress that the numbers reported are estimated number of computational nodes, i.e. the expected complexity of the algorithm. The number of non-zero terms in the loss function is less by orders of magnitude, but also harder to estimate with reasonable precision. Besides the overall complexity, it might be of interest to look at the particular distributions of the Fourier terms in the loss function. To this end, in Fig. 4(b) we plot distributions on non-zero Fourier coefficients by level. Statistics is gathered for 100 different instances of \(d=3,p=2\) QAOA circuits, with a single randomly chosen QAOA observable for each circuit. Theoretical curve for random circuits with \(M=30\) is plotted for comparison. We note that while the distribution of the coefficients looks close enough to the random case, the distribution of norm has significant differences and large fluctuations. In particular, the norm appears to be concentrated at lower Fourier levels than expected for random circuits (after adjusting for the distribution of terms), which might make the truncated schemes quite useful. ### Hardware-efficient circuits An example of a brick wall hardware-efficient circuit that we consider is shown in Fig. 5(b). Statistics for Fig.4 is collected for \(N=50\) qubit circuits with up to \(M=600\) rotation gates. 
For each \(M\) we take 10 random observables of weight \(2\) and estimate the number of computational nodes with \(10^{4}\) Monte-Carlo samples. Similarly to the QAOA case, due to large individual Figure 5: Normalized distribution of non-zero Fourier coefficients in (a) local circuits with random observables (b) QAOA circuits with \(d=3,p=2\) and (c) brick wall HEA circuits with \(N=50\) and \(M=304\). Statistics is collected for random circuits structures (except for HEA, where it is fixed) and random observables. Solid lines are theoretical curves for random circuits of the equivalent complexity. Figure 6: Circuit layouts for (a) Single-layer QAOA instance on a 3-regular graph with 4 nodes (b) Hardware-efficient brick wall circuit with 5 qubits (all rotation gates have different parameters, not indicated explicitly). fluctuations, we compute means and deviations in the logarithmic scale. In Fig. 5c we also plot the distribution of Fourier coefficients for \(N=50\) circuit with \(M=308\) parametric gates (corresponding to 77 CZ gates), averaged over 100 random Pauli observables of weight 2. Similarly to the QAOA case, the most interesting part is the norm distribution, which shows large fluctuations and also the tendency to cluster at lower levels (compared to a random circuit with a similar distribution of coefficients). ### Estimating complexity from Monte-Carlo sampling It is useful to estimate the runtime/complexity of the algorithm without performing the full computation. To do this, we can use a simple version of Monte-Carlo sampling to probe the structure of the computational tree Fig. 1. First, we will describe a version that allows to estimate the number of terms in the expansion of the dressed Hamiltonian. To this end, we can use the basic algorithm to traverse the computational tree, but instead of keeping both branches at each split, we choose one at random and disregard the other. This procedure produces a single complete branch from the computational tree. The probability to generate any particular branch is \(2^{-m}\) with \(m\) being the number of splits performed in the process. Let \(s(m)\) be the number of samples at with \(m\) splits. We estimate the total number of terms \(n\) as \[n\approx\frac{\sum_{m=0}^{M}2^{m}s(m)}{\sum_{m=0}^{M}s(m)}. \tag{10}\] Note that in this case the number of splits \(m\) is the same as the Fourier level of the resulting term. To estimate the complexity of the actual algorithm, i.e. the number of computation nodes generated during the computation, we need to take into account the pruning based on the expectation values, see Sec. IV.2. The sampling prescription above is modified in a simple way. We traverse the computational tree and choose branches at random. Importantly, we only choose from the branches compatible with the pruning restrictions. If one of the branches in not admissible, the actual algorithm will not generate additional nodes and the probability of the sample does not need to be updated. In this case, the Fourier level of the resulting term does not correspond to the probability of sampling. The prescription (10) remains unchanged.
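To make the prescription concrete, here is a minimal sketch of the unpruned term-count estimator of Eq. (10), written as the equivalent sample average of \(2^{m}\). All names are ours rather than the repository's, and `sample_splits` is a stand-in for one randomized traversal of the computational tree that returns the number of splits encountered.

```python
def estimate_num_terms(sample_splits, num_samples: int = 10_000) -> float:
    # App. C.4, Eq. (10): each sampled branch with m splits is drawn with
    # probability 2^{-m}, so the total number of terms is estimated by the
    # sample average of 2^m over independent random traversals.
    samples = [sample_splits() for _ in range(num_samples)]
    return sum(2 ** m for m in samples) / len(samples)

# Toy stand-in for a tree in which every path splits exactly 3 times:
# the estimator should return close to 2^3 = 8 terms.
print(estimate_num_terms(lambda: 3, num_samples=1000))
```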
2301.09312
Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration
Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest which addresses the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting the idea of differentiable neural architecture search. However, despite the superior search efficiency of the differentiable co-exploration, it faces a critical challenge of not being able to systematically satisfy hard constraints such as frame rate. To handle the hard constraint problem of differentiable co-exploration, we propose HDX, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, high-quality solutions satisfying the constraint can be obtained.
Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee
2023-01-23T08:15:09Z
http://arxiv.org/abs/2301.09312v1
# Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration ###### Abstract. Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest which addresses the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting the idea of differentiable neural architecture search. However, despite the superior search efficiency of the differentiable co-exploration, it faces a critical challenge of not being able to systematically satisfy hard constraints such as frame rate. To handle the hard constraint problem of differentiable co-exploration, we propose HDX, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, high-quality solutions satisfying the constraint can be obtained.
### DNN-Accelerator Co-exploration Early work on co-exploration utilizes variants of reinforcement learning or evolutionary algorithms to leverage their simplicity (Bahdan et al., 2015; Chen et al., 2015; Chen et al., 2016; Li et al., 2017; Li et al., 2018; Li
et al., 2019). Each candidate network is trained for evaluation, while the accelerator design is analyzed for hardware efficiency. These values create rewards used by the agent to create the next candidate solution. However, they all inherit the same problem from RL-based NAS methods in which they require expensive training to evaluate each candidate solution. To worsen the matter, co-exploration requires even larger network/hardware search space than searching only for networks. In such regard, differentiable approaches were adopted to co-exploration (Li et al., 2017). Auto-NBA (Li et al., 2018) used a differentiable accelerator search engine to build a joint-search pipeline, and DANCE (Li et al., 2019) trained auxiliary neural networks for hardware search and cost evaluation. However, none of the above properly addresses the hard constraint problem. In this work, we propose a holistic method of handling hard constraints on differentiable co-exploration. ## 3. Motivational Experiment The most straightforward and naive way to handle hard constraints within differentiable co-exploration would be to tune the relative weight to the hardware cost. For example, below is the loss function used in differentiable co-exploration (Li et al., 2018; Li et al., 2018). \[\mathcal{Loss}=\mathcal{Loss}_{CE}+\lambda_{Cost}Cost_{HW}, \tag{1}\] which is designed co-optimize accuracy and hardware cost simultaneously, and \(\lambda_{Cost}\) balances the two terms1. By increasing \(\lambda_{Cost}\), one can indirectly instruct the search process to consider hardware metrics more. However, giving a larger penalty does not directly lead to reduction in the value of a constrained metric. Figure 1 plots how changing \(\lambda_{Cost}\) in Eq. (1) from 0.001 to 0.010 affects the latency/energy and the classification error for CIFAR-10 dataset. Searches were done three times for each setting and plotted with the same colors with large dots for their averages. Even though some trend is observed that depends on \(\lambda_{Cost}\), inconsistency in both direction and variance of the trajectory is more dominant. Footnote 1: It is different from hyperparameters of typical machine learning formulation where the two terms serve toward a single objective. Consider a scenario where a designer wants to design a neural network-accelerator architecture pair with latency under some constraint (e.g., 33.3 ms), using the conventional co-exploration methods. The designer would try searching with some initial \(\lambda_{Cost}\) and try adjusting the value over the course of multiple searches. However, such inconsistency between \(\lambda_{Cost}\) and the latency makes it extremely difficult to obtain the adequate solution, not to mention the huge time cost of performing the search numerous times. Despite the difficulties that lie in tackling a hard-constrained co-exploration problem, designing an effective strategy is necessary. ## 4. Hard-Constrained Co-Exploration ### Problem Definition The mathematical formulation of hard-constrained differentiable co-exploration is as below: \[\operatorname*{arg\,min}_{\alpha,\beta}(\mathcal{Loss}_{NAS}(w^{* },net(\alpha))+\lambda_{Cost}Cost_{HW}(eval(\alpha,\beta))),\] \[\text{s.t.}\;\;w^{*}=\operatorname*{arg\,min}_{w}(\mathcal{Loss} _{NAS}(w,net(\alpha))),\;\;\;t\leq T, \tag{2}\] where \(t\) denotes the current value of constrained metric such as latency or energy, and \(T\) is the target value (e.g., 33.3 ms for latency). 
\(\alpha\) and \(\beta\) denote network architecture parameters and hardware accelerator configuration, respectively. \(w\) is the weights of the NAS supernet and \(net(\alpha)\) is the current dominant network architecture selected. \(eval(\alpha,\beta)\) indicates the hardware metrics evaluated for \(\alpha\) and \(\beta\). The objective of co-exploration is expressed using two distinct evaluation metrics, which are neural architecture loss (\(\mathcal{Loss}_{NAS}\)) and hardware cost (\(Cost_{HW}\)) defined from the user. ### Differentiable Co-exploration Framework Although our main contribution is that we enable hard constraints, we explain our framework for the co-exploration since they are closely related. Figure 2 illustrates the overall architecture of the proposed method, being similar to existing methods (Li et al., 2018; Li et al., 2018). Figure 2 (a) is the network search module. This module searches for network architecture by choosing a path from the supernet. The network structure is then fed to the evaluator module. The evaluator network \(eval()\) is the key to the differentiable co-exploration that enables the gradient to flow into the supernet, considering the relation between the hardware accelerator. It is a composition of two subnetworks: a hardware generator \(gen()\) and an estimator \(est()\). The hardware generator takes the neural architecture parameters as inputs and uses them to output the optimal hardware implementation (\(\beta\) from Eq. 2). It is jointly trained during the co-exploration so that the generator does not depend on certain cost function, and can adapt to the constraint. The estimator network outputs the hardware-related metrics by taking output of the generator and the network. It is pre-trained according to the network and the accelerator search space. For pre-training the estimator, traditional (non-differentiable) cost estimation frameworks such as MAESTRO (Li et al., 2018), Timeloop (Tlemelo et al., 2019), and Accelergy (Li et al., 2019) are used as ground truth. After pre-training, the estimator is frozen during the exploration and is only used to infer the hardware cost given a Figure 1. A motivational experiment. In each plot, we swept the value \(\lambda_{Cost}\) 0.001 to 0.010. It is clear that the trajectory is not strictly linear to \(\lambda_{cost}\) with high variations. network architecture. With these, we convert Eq. 2 as below: \[\operatorname*{s.t.} w^{*}=\operatorname*{arg\,min}_{w}(\mathcal{Loss}_{NAS}(w,net( \alpha))),\] \[v^{*}=\operatorname*{arg\,min}_{w}(Cost_{HW}(est(\alpha,gen(v, \alpha)))), \tag{3}\] where \(v\) is the weights for the hardware generator. ### Enabling Hard-Constraints with Gradient Manipulation In addition to the differentiable co-exploration methodology, we suggest the novel idea of gradient manipulation as an effective solution to the hard constraint problem. Direct manipulation of gradients is a strategy often used in achieving multiple goals, such as in continual learning (Srivastava et al., 2015) or differential equations (Kang et al., 2016). In this paper, we present a solution to apply gradient manipulation to the co-exploration problem in the interest of satisfying hard constraints. The diagrams on Figure 2 (b) and (c) show a high-level abstraction of our gradient manipulation method. The main idea is to artificially generate a force that can push the gradient in the direction that _agrees_ with the constraint. 
The conditions under which the method is applied to compute the new gradient \(g\) are defined as below: \[g=\begin{cases}g_{\mathcal{Loss}}&,\text{if }t\leq T\\ &\text{or }t>T\wedge g_{\mathcal{Loss}}\cdot g_{Const}\geq 0,\\ m_{\alpha}+g_{\mathcal{Loss}}&,\text{otherwise}\end{cases} \tag{4}\] \[g_{Const}=\frac{\partial\max(t-T,0)}{\partial\alpha}. \tag{5}\] In the above equation, \(g_{\mathcal{Loss}}\) is the original gradient from the global loss function defined as \[\mathcal{Loss}=\mathcal{Loss}_{NAS}+\lambda_{Cost}\cdot Cost_{HW}, \tag{6}\] as in Eq. 3, and \(g_{Const}\) is the gradient of constraint loss that we define as: \(Const=max(t-T,0)\). Note that \(t\) is a function of \(\alpha\), and thus can be backpropagated to find the gradient with respect to \(\alpha\). In an ideal case where the \(t\leq T\), the constraint is already met so we do nothing. In the unfortunate case when the constraint is still not met, we calculate for the dot product of the two gradients to determine the agreement in their directions. If \(g_{\mathcal{Loss}}\cdot g_{Const}\geq 0\) (i.e., the angle between two gradients is less than 90), it means gradient descent update will contribute towards satisfying the constraint. Thus it is interpreted as an agreement in direction and the same \(g_{\mathcal{Loss}}\) is used unmodified. Figure 2 (b) depicts this scenario. However, if they disagree as illustrated in Figure 2 (c) (i.e., \(g_{\mathcal{Loss}}\cdot g_{Const}<0\)), we force the gradient to shift its direction by \(m_{\alpha}\), which is obtained from \((m_{\alpha}+g_{\mathcal{Loss}})\cdot g_{Const}\geq 0\) to guarantee decrease in target cost after gradient descent. It can be reformulated as \(m_{\alpha}\cdot g_{Const}+g_{\mathcal{Loss}}\cdot g_{Const}=\delta\) where \(\delta\geq 0\) is a small value for ensuring gradual movement towards satisfying the constraint. For updating \(\alpha\) and \(w\), we solve for optimal \(m_{\alpha}\) with respect to \(\alpha\), which are the parameters for the network architecture. To minimize the effect of \(m_{\alpha}\) on \(g_{\mathcal{Loss}}\), we use a pseudoinverse-based solution that is known to minimize the size of \(||m_{\alpha}||_{2}^{2}\) as below: \[m_{\alpha}^{*}=\frac{-(g_{\mathcal{Loss}}\cdot g_{Const})+\delta}{||g_{Const }||_{2}^{2}}g_{Const}. \tag{7}\] In order to control the magnitude of the pull, we use a small multiplying factor \(p>0\) on \(\delta\). The policy for updating \(\delta\) using \(p\) is as follows: Some initial value \(\delta_{0}\) exists for \(\delta\). If the target metric fails to meet the constraint, \(\delta\) is multiplied by \(1+p\) to strengthen the pull (\(\delta^{\prime}=(1+p)\delta\)). In the other case when the constraint is satisfied, \(\delta\) is reset to its initial value (\(\delta^{\prime}=\delta_{0}\)). Note that we also train \(v\), weights for the hardware generator using gradient descent. Thus we compute for \(m_{v}^{*}\) in the same manner, but use \(g_{Cost_{HW}}\) in place of \(g_{\mathcal{Loss}}\) for updating the generator. Although a single constraint is already a challenging target, our method can be further generalized to accommodate multiple constraints. Now the gradient is modified only in the direction of individual constraints that do not comply. 
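Before the generalized multi-constraint formulation given next, the single-constraint update of Eqs. (4)–(7) can be summarized in a few lines of NumPy. This is our own illustrative sketch (the function names, toy gradients and hypothetical latency values are not from the paper); it shows only the gradient surgery itself, not the surrounding co-exploration loop.

```python
import numpy as np

def manipulated_gradient(g_loss, g_const, delta, t, T):
    # Eq. (4): keep the plain loss gradient if the constraint is met,
    # or if the two gradients already agree (non-negative dot product).
    if t <= T or g_loss @ g_const >= 0:
        return g_loss
    # Eq. (7): minimum-norm correction m* with (m* + g_loss) . g_const = delta,
    # so that a gradient-descent step also reduces the constrained metric.
    m_star = ((delta - g_loss @ g_const) / (g_const @ g_const)) * g_const
    return g_loss + m_star

def update_delta(delta, delta0, t, T, p=1e-2):
    # Pull-strength policy: grow delta by (1 + p) while the constraint is
    # violated, reset it to the initial value once it is satisfied.
    return delta * (1 + p) if t > T else delta0

# Toy check with a violated latency constraint and disagreeing gradients.
g_loss, g_const = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
g = manipulated_gradient(g_loss, g_const, delta=0.1, t=40.0, T=33.3)
print(g, g @ g_const)  # [-0.1  0. ] and a dot product equal to delta, as required
```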
We provide a more generalized formulation: \[g=\begin{cases}g_{\mathcal{Loss}}&,\text{if }\bigwedge_{i=1}^{n}(t_{i}\leq T _{i})\\ &\text{or }\bigvee_{i=1}^{n}(t_{i}>T_{i})\wedge g_{\mathcal{Loss}}\cdot g_{Const }\geq 0,\\ m_{\alpha}+g_{\mathcal{Loss}}&,\text{otherwise}\end{cases} \tag{8}\] \[g_{Const}=\frac{\partial\sum_{i=1}^{n}\max(t_{i}-T_{i},0)}{\partial\alpha}. \tag{9}\] ### Implementation Details **Hardware cost function**. In this work, we choose the inference latency, energy, and the chip area as the widely used hardware metrics. Considering all of them, a commonly used cost function is multiplying them (i.e., EDP, EDAP) as in (Gang et al., 2016; Wang et al., 2016). However, we found that the energy is usually easier to optimize for, and using such cost function unfairly favors energy-oriented designs. Therefore, we use a balanced weighted sum for the cost function as below. \[Cost_{HW}=C_{E}Energy+C_{L}Latency+C_{A}Area. \tag{10}\] **Estimator and Generator Network**. Following (Gang et al., 2016), we model both the estimator and generator with five-layer Multi-Layer Perceptron (MLP) with residual connections. To train the estimator, we first build a dataset by randomly sampling 10.8M network-accelerator pairs (2.95e\(-9\) % of the total search space) from our search space which are evaluated on hardware metrics using Timeloop (Timeteloop, 2016) and Accelergy (Miller et al., 2017). Using this dataset, the estimator Figure 2. Overall structure of HDX. is trained for 200 epochs with the batch size of 256. The weight update is done using Adam optimizer with the learning rate of 1e-4. The accuracy of the estimator was over 99% for all metrics, being powerful enough as an engine for co-exploration. The generator is randomly initialized and jointly trained with the NAS supernet. As the manipulated gradient from the hard-constraint is back-propagated, the generator learns to create accelerators that comply with the constraint on given neural network architecture. **Search Space.** We use ProxylessNAS (He et al., 2017) as a NAS backbone with path sampling to train \(\alpha\). It consists of multiple settings of MBConv operation with kernel size (He et al., 2017; He et al., 2017; He et al., 2018) and expand ratio (He et al., 2017; He et al., 2018). The total number of layers is 18 and 21 for CIFAR-10 (He et al., 2018) and ImageNet (He et al., 2018) dataset, respectively. However, our method is orthogonal to the NAS implementation and has the flexibility to choose from any differentiable NAS algorithms, such as DARTS (Krizhevsky et al., 2014) or OFA (He et al., 2017). We use Eyeriss (Eyeriss, 2015) as the accelerator's backbone architecture. It is composed of a two-dimensional Processing Element (PE) array where each PEs has a Multiply-Accumulate (MAC) unit attached to a register file. Therefore, hardware accelerator design space comprises PE array size from 12x8 to 20x24, register file size per PE from 16B to 256B. In addition, the search space includes dataflow of Weight-Stationary (WS) similar to (He et al., 2017), Output-Stationary (OS) similar to (He et al., 2017) and Row-Stationary (RS) similar to (Eyeriss, 2015). ## 5. Experiments ### Experimental Environment We have conducted experiments on HDX using CIFAR-10 (He et al., 2018) and ImageNet (He et al., 2018) dataset. 
For all the hardware metrics (latency, energy, and chip area) reported, we have used the direct evaluation on the designed hardware from Timeloop (Tel Figure 3 (left) and (mid) show the relation between error and latency. The colored horizontal bars represent the two latency targets we applied. It can be easily seen that all solutions found by HDX satisfy the given hard constraints regardless of the value of \(\lambda_{Cost}\). Furthermore, all solutions have the latency right below the constraint, showing that the solutions did not over-optimize for the constrained metric (latency). DANCE (DANCE, 2017) and Auto-NBA (Beng et al., 2017) were able to exploit the trade-off between hardware metrics and accuracy, but has no control over meeting the constraint. Even with soft-constraint terms, they mostly failed to obtain in-constraint solutions. Auto-NBA at a glance seems to be slightly better at meeting the constraints, but it is because its baseline method favors hardware-efficient solutions over high-accuracy ones, not because of its ability to meet the constraint, exemplified by the fact that there is no solution with high accuracy, or latency under \(16.6\,\mathrm{ms}\). ### Solution Quality Found by HDX In this subsection, we demonstrate that HDX can 1) handle constraints from all three metrics (latency, energy, and chip area), 2) handle multiple constraints, and 3) obtain solutions of good overall quality. Figure 3 (right) plots \(Cost_{HW}\) and error together, which allows evaluating quality of the solutions in terms of Pareto-optimality. Because Figure 3 (left) and (mid) overlook the other metrics, comparing the \(Cost_{HW}\) together is required to be fair. From the plot, it is clear that quality of solutions from HDX is better than the NAS\(\rightarrow\)HW method, and has no degradation from the existing co-exploration methods. In fact, the tightly constrained (\(16.6\mathrm{ms}\)) solutions even find better solution than those of the existing solutions in terms of Pareto-optimality. To further study the quality of the solutions found by HDX, we have conducted another set of experiments. We selected a few solutions found from DANCE method as 'Anchor' solutions and listed them in Table 2. From those, we chose either one or all three of the hardware metrics to be fixed as the hard constraint, and performed co-explorations using HDX. Because it is guaranteed that such solution exists, a good method should be able to find a solution meeting the constraint, of at least a similar quality. As in the Section 5.3, all of the 8 cases we have examined succeeded in finding a valid solution. Furthermore, all the solutions show similar global loss values from the anchor solutions as shown in the rightmost column. ### Results from ImageNet Dataset Table 3 shows the co-exploration results from ImageNet dataset (Dosov et al., 2017), under \(125\) ms constraint. As displayed in the table, HDX always succeeded in finding a solution within constraint where the others often failed to satisfy. Furthermore, the Top-1 error and the global loss shows that the quality of the solution found by HDX is not compromised at all, compared to DANCE or its variant. ### Sensitivity Study on Pulling Magnitude In HDX, the only hyperparameter is \(p\) that controls the pulling magnitude. Figure 4 illustrates how the global loss and latency changes over latency-constrained (\(33.3\) ms) explorations, with varying \(p\) of \(1\)e-\(2\), \(7\)e-\(3\), and \(4\)e-\(3\). 
Regardless of the value of \(p\), the curve for the constrained value shows a similar trend. At the beginning, the global loss becomes mainly optimized, while the latency stays steady. During this phase the pulling magnitude \(\delta\) (See Eq. 7) is still growing, and is not strong enough to make meaningful changes. At certain point, \(\delta\) becomes strong enough to pull the solution towards lowering latency. When the latency satisfies the constraint, global loss starts to decrease while maintaining the latency. There is no significant discrepancy between the final solution in the global loss and the latency, which shows that HDX is insensitive to the hyperparameter \(p\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Index & Constrained & Lat. (ms) & E (mJ) & Area (mm\({}^{2}\)) & Error (\%) & \(Cost_{HW}\) & Loss \\ \hline \multirow{4}{*}{A} & Anchor & 69.23 & 37.00 & 2.53 & \(4.10\pm 0.16\) & 21.84 & 0.632 \\ & Latency & **43.99** & 21.79 & 2.10 & \(4.20\pm 0.07\) & 13.87 & 0.624 \\ & Energy & 51.98 & **29.18** & 2.53 & \(4.38\pm 0.17\) & 17.44 & 0.630 \\ & Chip Area & 64.00 & 34.82 & 2.53 & \(4.05\pm 0.06\) & 20.56 & 0.629 \\ & All & **63.72** & **12.09** & 1.86 & \(4.12\pm 0.18\) & 13.29 & 0.623 \\ \hline \multirow{4}{*}{B} & Anchor & 49.65 & 27.53 & 2.53 & \(4.22\pm 0.06\) & 16.67 & 0.638 \\ & Latency & **48.02** & 27.33 & 2.53 & \(4.27\pm 0.09\) & 16.41 & 0.644 \\ \cline{1-1} & Energy & 95.02 & **24.45** & 1.89 & \(4.05\pm 0.10\) & 20.76 & 0.648 \\ \cline{1-1} & Chip Area & 54.74 & 29.81 & 2.53 & \(4.11\pm 0.13\) & 17.96 & 0.645 \\ \cline{1-1} & All & **41.32** & **8.59** & 1.86 & \(4.35\pm 0.05\) & 9.50 & 0.629 \\ \hline \hline \end{tabular} * "Bold colored numbers indicate that they are under constraint of the same colored non-bold numbers. \end{table} Table 2. Results Showing the Quality of Solutions Figure 4. Sensitivity to \(p\) on HDX. The red lines represents latency constraint at 33.3 ms. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & in-const\({}^{2}\) & Lat. (ms) & Error (\%) & CostHW & Loss \\ \hline \multirow{2}{*}{NAS\(\rightarrow\)HW} & ✗ & 242.92 & 24.84 & 46.29 & 1.99 \\ & ✗ & 135.39 & 28.83 & 24.26 & 2.17 \\ \hline \multirow{2}{*}{DANCE} & ✗ & 165.98 & 25.46 & 28.37 & 2.04 \\ & ✓ & 125.18 & 25.28 & 25.32 & 2.09 \\ \hline \multirow{2}{*}{DANCE;Soft Coast.} & ✗ & 188.69 & 25.69 & 33.14 & 1.99 \\ & ✓ & 105.65 & 26.37 & 25.58 & 2.08 \\ \hline \multirow{2}{*}{**HDX (Proposed)**} & ✓ & 92.06 & 25.01 & 24.48 & 1.98 \\ \cline{1-1} & ✓ & 112.11 & 25.20 & 22.63 & 2.00 \\ \hline \hline \end{tabular} \end{table} Table 3. Experimental Results for ImageNet Figure 3. Co-exploration results. (left) and (mid) represent the latency and (right) represent the hardware cost. Colored marks are methods with constraints of the same color. ### Analysis on the Searched Solutions Fig. 5 visualizes the network and accelerator searched for 60 fps (a) and 30 fps (b) constraints. For the found design pair (a), the design contains relatively smaller kernels, more layers, and a powerful accelerator. To meet a tight constraint while maintaining accuracy, the network has small kernels, mainly of 3\(\times\)3. Using smaller kernels quadratically reduces the number of multiplications. Therefore, decreasing the kernel size and increasing number of layers is a good choice for reducing inference latency. Looking at the accelerator design, it has relatively large PE array (16\(\times\)16) to achieve low latency. 
It takes weight stationary (WS) dataflow, which is known to have low latency. In addition, there are some kernels with high channel expand ratio in the network. WS exploits channel parallelism for fast execution, and thus has advantage over the found network. On the other hand, in the design for 30 fps (b), the design settles at a solution that can optimize the energy consumption while satisfying the constraint. The design uses larger kernels in the network and row stationary (RS) dataflow in the accelerator. RS is known to have good energy efficiency (Cheng et al., 2019), and exploits parallelism from spatial dimensions of kernel and the activation. Thus, having larger kernels have advantages on RS dataflow. To reduce the energy consumption, the design has fewer PEs (12\(\times\)8), larger RFs to save off-chip access energy, and fewer layers in the network. ## 6. Conclusion In this paper, we proposed HDX, a hard-constrained differentiable co-exploration method. By conditionally applying gradient manipulation that moves the solution towards meeting the constraints, hard constraints can be reliably satisfied with high-quality solutions. We believe this proposal would ease the development of DNN based systems by a significant amount. ###### Acknowledgements. This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C1008131, 2022R1C1C1011307), and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)).
2302.10313
Real-Time Speech Enhancement Using Spectral Subtraction with Minimum Statistics and Spectral Floor
An initial real-time speech enhancement method is presented to reduce the effects of additive noise. The method operates in the frequency domain and is a form of spectral subtraction. Initially, minimum statistics are used to generate an estimate of the noise signal in the frequency domain. The use of minimum statistics avoids the need for a voice activity detector (VAD) which has proven to be challenging to create. As minimum statistics are used, the noise signal estimate must be multiplied by a scaling factor before subtraction from the noise corrupted speech signal can take place. A spectral floor is applied to the difference to suppress the effects of "musical noise". Finally, a series of further enhancements are considered to reduce the effects of residual noise even further. These methods are compared using time-frequency plots to create the final speech enhancement design
Georgios Ioannides, Vasilios Rallis
2023-02-20T20:55:53Z
http://arxiv.org/abs/2302.10313v1
**Real-Time Speech Enhancement Using Spectral Subtraction with Minimum Statistics and Spectral Floor** ## 1 Abstract An initial real-time speech enhancement method is presented to reduce the effects of additive noise. The method operates in the frequency domain and is a form of spectral subtraction. Initially, minimum statistics are used to generate an estimate of the noise signal in the frequency domain. The use of minimum statistics avoids the need for a voice activity detector (VAD) which has proven to be challenging to create [7]. As minimum statistics are used, the noise signal estimate must be multiplied by a scaling factor before subtraction from the noise corrupted speech signal can take place. A spectral floor is applied to the difference to suppress the effects of "musical noise" [2]. Finally, a series of further enhancements are considered to reduce the effects of residual noise even further. These methods are compared using time-frequency plots to create the final speech enhancement design. ## 2 Introduction Background additive noise that has distorted a speech signal can degrade the performance of many real-world digital communication systems. Today, digital communication systems are increasingly being used in noise environments such as vehicles, factories and airports. Signal Processing techniques are also used in brain modelling applications[4]. Robustness to noise sensitivity have become key properties in any communication system. In this work, a real-time spectral subtraction system will be implemented to reduce the background noise in a speech signal while leaving the speech itself intact. This is known as speech enhancement. ## 3 Basic Implementation ### High-level Overview In spectral subtraction, the assumption is that the speech signal \(s(t)\) has been distorted by a noise signal \(n(t)\) with their sum being denoted by \(x(t)\) (1). \[x(t)=s(t)+n(t) \tag{1}\] In the frequency domain, these signals become: \[X(\omega)=S(\omega)+N(\omega) \tag{2}\] where \(X(\omega)\), \(S(\omega)\) and \(N(\omega)\) are the Fourier transforms of \(x(t)\), \(s(t)\) and \(n(t)\) respectively. Effectively, the spectral subtraction method operates in the Fourier domain by attempting to subtract an estimate of \(N(\omega)\) which will be denoted as \(\hat{N}(\omega)\) from \(X(\omega)\), to produce a final signal \(Y(\omega)\) (3). \[Y(\omega)=X(\omega)-\hat{N}(\omega) \tag{3}\] However, since the phase of the noise is not known, only the magnitude of the noise estimate \(\hat{N}(\omega)\) will be subtracted from \(X(\omega)\) leaving the phase of \(X(\omega)\) distorted by the noise (4). \[Y(\omega) =X(\omega)-\left|\hat{N}(\omega)\right|\] \[=X(\omega)\left(1-\frac{\left|\hat{N}(\omega)\right|}{\left|X( \omega)\right|}\right)\] \[=X(\omega)g(\omega) \tag{4}\] An issue with simply implementing (4) is that if \(g(\omega)\) is negative for some frequency bins, the phase of those frequency bins will be shifted by \(\frac{\pi}{2}\) radians. As stated by [3], the relative phases of two signal components is relevant if the two components are separated by less than a critical bandwidth. This critical bandwidth is close to \(1/6^{\rm th}\) of an octave after 1kHz. Therefore, under some conditions, phase distortion might result in an audible distortion in the time domain. 
A solution to this problem would be to modify \(g(\omega)\) to: \[g(\omega)=\max\left(0,1-\frac{\left|\hat{N}(\omega)\right|}{\left|X(\omega) \right|}\right) \tag{5}\] Throughout this work, \(g(\omega)\) will be modified in search of an improvement in intelligibility of of the final signal \(y(t)\). Ideally, \(Y(\omega)\approx S(\omega)\), thus, using the inverse Fourier transform, the original speech signal \(s(t)\) can be recovered. The process is illustrated in Figure 1 The assumption of additive noise implies that \(n(t)\) and \(s(t)\) are statistically independent [9]. This assumption can be applied in most real-world situations as no knowledge of the probability density function (PDF) or the frequency domain of the noise is required. Figure 1: Block diagram of spectral subtraction [8] ### Frame Processing For the system to be real-time, the speech signal \(s(t)\) must first be split into smaller sections so that the processing can take place before the entire signal has arrived. These smaller sections are called frames and their size is denoted as \(N\). For the basic implementation, \(N=256\). It is critical that \(N\) is a power of 2 so that the radix-2 FFT algorithms can be used. This leads to a reduction in the time-complexity of the FFT algorithm from \(O(N^{2})\) to \(O(N\log(N))\). The reduction in the run-time of the algorithm for large values of \(N\) (i.e. \(N>100\)) is the critical for the system to be achievable in real-time. However, the discontinuities at the edges of each frame will lead to spectral artifacts. To solve this issue, a window is applied in the time domain before the FFT of the frame is computed. Nevertheless, by windowing \(x(t)\) in the time domain, the signal has been distorted; thus, as shown in Figure 2, the original time domain signal will not be recovered. To solve this, the individual frames can be overlapped so that the sum of overlapping windows is always 1. The number of frames that overlap is known as the oversampling factor. The process for an oversampling factor of 2 is shown in Figure 3. Note that for the basic implementation of the spectral subtraction algorithm, an oversampling factor of 4 was used instead (i.e. each frame will contain \(256/4=64\) new samples) As shown in Figure 3, another time domain window is applied to \(x(t)\) after the Inverse Fast Fourier Transform (IFFT) of the function is taken. This second window is necessary as a modification in the frequency domain is equivalent to filtering in the time domain which might lead to discontinuities when the frames meet. In implementations developed in this work, for both the input and output windows, the square root of the Hamming window was used (6). The Hamming window offers a relative first sidelobe amplitude level of -40.0dB Figure 4. \[w(t)=\sqrt{1-0.85185\cos\left(\frac{(2t+1)\pi}{N}\right)}\text{ for }t=0,...,N-1 \tag{6}\] ### Noise Estimation As mentioned previously, for spectral subtraction to be performed, an estimation \(\tilde{N}(\omega)\) of the noise present in the signal is required (7). One way of finding this estimate would be to use a Voice Activity Detector (VAD) which detects whether speech is present in the signal and then take the average of all the frames where speech is not present [2]. However, spectral subtraction based on VAD is exceptionally difficult to make so an easier approach is chosen [7]. 
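Before describing that approach, one detail of the frame-processing stage above is worth verifying numerically: with the square-root Hamming window of Eq. (6) applied at both the analysis and synthesis stages and a 4x overlap, the squared windows overlap-add to a constant (here 4, since Eq. (6) is not normalized by the oversampling factor). The following NumPy sketch is our own illustration, independent of the paper's C implementation.

```python
import numpy as np

FFTLEN, OVERSAMP = 256, 4
FRAMEINC = FFTLEN // OVERSAMP          # 64 new samples per frame

t = np.arange(FFTLEN)
# Eq. (6): square root of the (scaled) Hamming window; it is applied at both
# input and output, so w^2 is what must overlap-add to a constant.
w = np.sqrt(1.0 - 0.85185 * np.cos((2 * t + 1) * np.pi / FFTLEN))

acc = np.zeros(FFTLEN + 3 * FRAMEINC)
for k in range(4):                     # four overlapping frames
    acc[k * FRAMEINC : k * FRAMEINC + FFTLEN] += w ** 2

steady = acc[3 * FRAMEINC : FFTLEN]    # region covered by all four frames
print(steady.min(), steady.max())      # both close to 4.0: constant overlap-add holds
```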
Figure 4: Frequency domain of Hamming, Hanning and Gaussian Windows Figure 3: Overlap and add process [8] Figure 2: Problem with simply applying window in the time domain [8] For each frequency bin of \(X(\omega)\), the minimum magnitude over the last 10 seconds is determined. This frame will be referred to as the Minimum Magnitude Spectral Estimate (MMSE) henceforth. Assuming that the speaker who is being recorded will make a brief pause within these 10 seconds to take a breath, the MMSE will correspond to the minimum magnitude of the realization of the noise within the speech pauses in the last 10 seconds. As this estimator will use the minimum of the noise realizations, it will severely underestimate the average magnitude of the noise signal. For this reason, a compensating factor denoted by \(\alpha\) must be introduced. Since \(x(t)\) is sampled at 8kHz, and each new frame contains 64 new samples (i.e. 8ms of new information), 1250 frames must be stored in memory to find the MMSE. This is infeasible due to hardware limitations of the system in use. A simplification can be made by storing just 4 frames denoted as \(M_{i}(\omega)\) where \(i=1,..,4\). For each frame, \(M_{1}(\omega)\) is updated by: \[M_{1}(\omega)=\min\left(\left|X(\omega)\right|,M_{1}(\omega)\right) \tag{7}\] After 2.5 seconds (i.e. approximately 312 new frames), the frames are shifted and the new \(M_{i}(\omega)\) takes the values of the previous \(M_{i-1}\), for \(i=4,...,2\) while \(M_{1}(\omega)\) is set to \(\left|X(\omega)\right|\). The disadvantage of using this simplification is that the MMSE memory (i.e. how far into the past the minimum frequency bins will be searched for) will not be a constant 10 seconds since once the shift occurs, the new MMSE will have an effective memory of 7.508 seconds which will grow until it reaches 10 seconds and then reset again. Nevertheless, this is a small compromise for such a dramatic decrease in the amount of memory required. ### The noise trade-off As explained in [1], one of the problems with the implementation described above is the introduction of a new type of noise into \(Y(\omega)\). This new type of noise will be referred to as musical noise. To explain this new type of noise, it is crucial to understand that there are peaks and valleys in the short-term power spectrum of the noise. Both the frequency and amplitude of these peaks will vary randomly from frame to frame. When spectral subtraction takes place according to \(g(\omega)\) (5), depending on the value of \(\alpha\) more peaks or more valleys will remain in the magnitude of the processed frame \(\left|X(\omega)\right|\). The peaks will be perceived as tones at a specific frequency. This frequency will change every frame, thus, for the implementation described above, the frequency of the tones will change every 8ms. The valleys will be perceived as broadband noise. A simulated example is described to gain a better understanding. Using the MATLAB function randn, 1000 frame realizations of length 256 are generated. The MMSE over these 1000 frames is plotted in Figure 5. The magnitude \(\left|Y(\omega)\right|\) of three consecutive processed frames for \(\alpha=20\) is plotted in Figure 6. Note that both peaks and valleys are present in all three frames. By increasing the value of \(\alpha\), the broadband noise in the frame will be suppressed while the effect of the musical noise (i.e. the peaks) will be further enhanced since it will not be masked by the broadband noise. 
The magnitude \(\left|Y(\omega)\right|\) of three consecutive processed frames for \(\alpha=200\) is plotted in Figure 7. As expected, the peaks are more prevalent even though their amplitude has been decreased.

Figure 5: MMSE over the past 1000 frames

Figure 6: Magnitude of three consecutive processed frames for \(\alpha=20\)

Figure 7: Magnitude of three consecutive processed frames for \(\alpha=200\)

A solution to this musical noise problem is to further modify \(g(\omega)\) to introduce a new parameter \(\lambda\), which will be referred to as the spectral floor (8). Effectively, the parameter will be used to mask the musical noise with broadband noise (Figure 8).

\[g(\omega)=\max\left(\lambda,1-\alpha\frac{\left|\hat{N}(\omega)\right|}{\left|X(\omega)\right|}\right) \tag{8}\]

Since the MMSE is used as an estimate for the noise, the appropriate value of \(\alpha\) (i.e. the one that leads to the best intelligibility of the speech) will increase with:

1. The memory of the MMSE
2. The variance of the noise. This is equivalent to the power of the zero-mean noise.

In this simulated example, the signal consisted of only noise; however, it must be underscored that if \(\alpha\) is too large, distortion caused by the spectral subtraction will decrease the speech intelligibility. Overall, through the above analysis, it is clear that the parameters of the spectral subtraction method must be adjusted to achieve a balance between musical noise, broadband noise and speech intelligibility. This intuition will be used in the next sections to further improve the current implementation.

### Implementation in C

The key parts of the C code for the basic implementation are described in this section. The frame that must be processed is located in the inframe array. The first step is to move the frame from the inframe array to the intermediate array and convert the elements of inframe from float to complex, which is a struct defined in the complx.h header file. The conversion from float to complex is required as the signature of the fft function is void fft(int N, complex* X).

```
for (k = 0; k < FFTLEN; k++)
{
    inframe[k] = inbuffer[m] * inwin[k];
    if (++m >= CIRCBUF) m = 0;   /* wrap if required */
}
/********************** DO PROCESSING OF FRAME HERE **********************/
```

As this is a real-time implementation, optimizations are required to decrease the run time of the frame processing. One of the primary optimizations is to only process half of the frame once it is in the frequency domain. As the frame being processed is real in the time domain, the frequency domain of the frame will be conjugate complex symmetric (9).

\[X_{N-n}=X_{n}^{*} \tag{9}\]

where \(X_{n}\) is the value of the \(N\) point FFT at frequency bin \(n\) and \({}^{*}\) is the complex conjugate operator. Next, the magnitude of the current frame is computed and used to implement the MMSE algorithm mentioned previously.

```
// N.B. most of the frame processing is done within a for loop
// that takes advantage of the conjugate complex symmetry.
// This greatly improves efficiency.
for (k = 0; k < FFTLEN/2; k++) {
    // Calculate magnitude of current frame
    mag[k] = cabs(intermediate[k]);
```

Within the same loop, the MMSE buffer is updated and the relevant elements of the intermediate array are overwritten accordingly.
```
    // Check for possible MMSE elements in current frame
    m1[k] = min(mag[k], m1[k]);
```

Figure 11: Implementation of MMSE algorithm (1)

Figure 12: Implementation of MMSE algorithm (2)

Figure 8: Magnitude of three consecutive processed frames for \(\alpha=200\) and \(\lambda=0.1\)

Figure 13: Implementation of MMSE algorithm (3)

Figure 9: Code to perform FFT on new frame

### Performance of the Basic Implementation

The performance of this basic implementation will be used as a benchmark to compare the enhancements that will be introduced in the next section. To compare the different implementations, a selection of Waveform Audio Files (i.e. .wav) containing "the sailor passage" with different types of added noise (e.g. car, factory, helicopter) at different noise levels were used as input to the system. To refer to the different types of input, their file names will be used (e.g. phantom2.wav for added noise from the F15 phantom aircraft at noise level 2).

The spectrogram of the input with no added noise (i.e. clean.wav) is shown in Figure 17. The spectrogram of car1.wav is shown in Figure 18. Through a visual inspection of the spectrogram, the car noise seems to have added broadband stationary noise at low frequencies (i.e. less than 300Hz). The spectrogram of car1.wav after processing, which will simply be referred to as "the output," is shown in Figure 19. As expected, the basic spectral subtraction implementation has reduced the noise in the signal; however, improvements can still be made.

## 4 Enhancements

In this section, various enhancements are made to the basic implementation. Not all enhancements were used in the final implementation as some proved to have little effect in practice given their computational cost. The C-code implementation for all of the enhancements can be found in the Appendix.

### Low-pass filtering the magnitude

The first enhancement is simply to low-pass filter the magnitude \(\big{|}X(\omega)\big{|}\) of the frame. Note that the low-pass filter is acting on consecutive frames rather than in the time domain. This was recommended in [7][6]. The low-pass filtering is done according to the difference equation (10)

\[P_{t}(\omega)=(1-k)\big{|}X(\omega)\big{|}+kP_{t-1}(\omega) \tag{10}\]

where \(k=e^{-T/\tau}\) is the z-plane pole for time constant \(\tau\) and frame period \(T\), and \(P_{t}(\omega)\) is the low-pass filtered input for frame \(t\). Note that since \(\big{|}e^{-T/\tau}\big{|}<1\) for \(T\neq 0\), the filter will always be stable for any value of \(\tau\). This enhancement significantly improved the output while allowing \(\alpha\) to be reduced from 20 to 2; \(\tau\) was set empirically to 30ms, which is in the range suggested by [8]. The spectrogram of the output with the above enhancement is shown in Figure 20. Surprisingly, even though the spectrum looks similar to Figure 19, it was perceived to be much clearer.

Figure 16: Writing the real values of intermediate to outframe

Figure 17: Spectrogram of clean.wav

Figure 18: Spectrogram of car1.wav

Figure 19: Spectrogram of car1.wav output with \(\alpha=20\) and \(\lambda=0.05\) (Basic Implementation)

### Low-pass filtering power

This enhancement is very similar to enhancement 1, except instead of low-pass filtering the magnitude \(\big{|}X(\omega)\big{|}\), the power \(\big{|}X(\omega)\big{|}^{2}\) is low-pass filtered. Theoretically, this makes sense since humans perceive power rather than magnitude.
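For concreteness, a sketch of how the recursive smoothing in (10) might sit inside the existing frame-processing loop is given below. It is illustrative only: `lpf[]` is a hypothetical state array holding \(P_{t-1}(\omega)\), and `TFRAME` and `TAU` are assumed constants for the frame period and time constant. For the power-domain variant of this enhancement, `mag[k]` is simply replaced by its square.

```
/* k_lpf = exp(-T/tau): the z-plane pole of the smoother in (10).
   Named k_lpf to avoid a clash with the loop index k.            */
float k_lpf = expf(-TFRAME / TAU);

for (k = 0; k < FFTLEN/2; k++)
{
    /* lpf[k] holds P_{t-1}(w); overwrite it with P_t(w). */
    lpf[k] = (1.0f - k_lpf) * mag[k] + k_lpf * lpf[k];
}
```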
Furthermore, it is expected that the optimal value of \(\tau\) will decrease as \(\big{|}X(\omega)\big{|}^{2}\) will vary faster than \(\big{|}X(\omega)\big{|}\). Empirically, the optimal value of \(\tau\) was set to 0.025s. This is within the range specified by [8]. The spectrogram of the output with the above enhancement is shown in Figure 21. The output was perceived to be of higher quality than the output when using enhancement 1.

### Low-pass filtering the noise

In this enhancement, instead of low-pass filtering the magnitude of the frame, the MMSE is low-pass filtered. Theoretically, this should improve the robustness of the system to non-stationary noise, where there would otherwise be an abrupt change in the output once the MMSE frames \(M_{i}(\omega)\) shift. Empirically, there was a noticeable difference when the input to the DSK was set to factory1.wav and factory2.wav, as they contain "the sailor" passage with added factory noise at different levels.

### Using different values for \(g(\omega)\)

This enhancement consists of implementing different versions of \(g(\omega)\) shown below:

\[g(\omega)=\max\left(\lambda\frac{\Big{|}\hat{N}(\omega)\Big{|}}{\Big{|}X(\omega)\Big{|}},1-\alpha\frac{\Big{|}\hat{N}(\omega)\Big{|}}{\Big{|}X(\omega)\Big{|}}\right) \tag{11}\]

\[g(\omega)=\max\left(\lambda\frac{\Big{|}P(\omega)\Big{|}}{\Big{|}X(\omega)\Big{|}},1-\alpha\frac{\Big{|}\hat{N}(\omega)\Big{|}}{\Big{|}X(\omega)\Big{|}}\right) \tag{12}\]

\[g(\omega)=\max\left(\lambda\frac{\Big{|}\hat{N}(\omega)\Big{|}}{P(\omega)},1-\alpha\frac{\Big{|}\hat{N}(\omega)\Big{|}}{P(\omega)}\right) \tag{13}\]

\[g(\omega)=\max\left(\lambda,1-\alpha\frac{\Big{|}\hat{N}(\omega)\Big{|}}{P(\omega)}\right) \tag{14}\]

All of these variants were tested empirically; the best performing one was (13), which is also the version of \(g(\omega)\) that is used in [1].

### Calculating \(g(\omega)\) in the power domain

This is yet another enhancement that modifies \(g(\omega)\); however, in this case, the modification is different as \(g(\omega)\) will be computed in the power domain instead of the magnitude domain (15).

\[g(\omega)=\max\left(\lambda,\sqrt{1-\left(\alpha\frac{\Big{|}\hat{N}(\omega)\Big{|}}{\Big{|}X(\omega)\Big{|}}\right)^{2}}\right) \tag{15}\]

As mentioned previously, humans perceive power rather than magnitude, so there is theoretical justification for this enhancement. However, empirically, little difference was perceived in the output signal, with this enhancement being very computationally expensive due to the powf and sqrt functions that must be used.

### Overestimate \(\alpha\) at lower SNR frequency bins

This enhancement adjusts the parameter \(\alpha\) from frame to frame depending on the SNR, as suggested by [1]. The SNR will vary from frame to frame as the power of the noise will be approximately the same for stationary noise while the power of the signal will vary. For high SNR frames, increasing the value of \(\alpha\) is not necessary and will lead to a distortion in the speech signal. For low SNR frames, a higher value of \(\alpha\) is necessary to suppress the noise. Therefore, there is theoretical justification for this enhancement. As suggested by [1], a piecewise linear function was used to select the value of \(\alpha\) (Figure 22) (16).
Figure 21: Spectrogram of car1.wav output with \(\alpha=2\), \(\lambda=0.05\), \(\tau=0.025\) (Enhancement 2)

Figure 20: Spectrogram of car1.wav output with \(\alpha=2\), \(\lambda=0.05\), \(\tau=0.03\) (Enhancement 1)

The function for the solid line in Figure 22 is:

\[\alpha(SNR)=\begin{cases}5&\text{for }SNR<-5\\ 5-\frac{4}{20}SNR&\text{for }-5\leq SNR\leq 20\\ 1&\text{for }SNR>20\end{cases} \tag{16}\]

Even though in [1] this enhancement was only performed with a frame-by-frame granularity (i.e. the value of \(\alpha\) will change from frame to frame; however, it will remain constant within a frame), the enhancement was further modified to allow \(\alpha\) to change with a frequency-bin granularity, which was used by [5]. The justification for this is that noise does not affect the speech signal uniformly in the frequency domain. This is illustrated in Figure 23, which shows the SNRs of four linearly spaced frequency bins across consecutive frames for the input corrupted by the added car noise. Bin 1 has a lower SNR across most frames as it corresponds to the very low frequencies (\(<10\)Hz), which is where the car noise is mostly present. The SNRs between different frequency bins differ substantially, with the difference being greater than 100dB for some frames. Note that if the slope of the piecewise linear function (16) is increased, then the temporal dynamic range of the signal will also increase substantially, leading to a distorted output. Empirically, the slope used in [1] was confirmed to have a good performance, so it was not modified.

### Adding the \(\delta(F)\) term

In addition to adjusting the noise estimate \(\hat{N}(\omega)\) based on the SNR of each frequency bin, this enhancement aims to further adjust \(\hat{N}(\omega)\) based on the analogue frequency \(F\) that the frequency bin represents. This enhancement was introduced by [5] and uses a "tweaking factor" \(\delta(F)\) that can be individually set for each frequency bin. In the real world, noise (e.g. added car noise) is coloured and affects certain frequencies more than others. This is illustrated in Figure 24, which shows the spectrogram of the added car noise. Note that the car noise is present primarily at frequencies \(0\)Hz \(<F<300\)Hz, which explains the discrepancies between the SNRs of different frequency bins in Figure 23. The \(\delta(F)\) term adds an additional degree of freedom to the noise subtraction level of each frequency and modifies (8) slightly to the form shown in (17)

\[g(\omega)=\max\left(\lambda,1-\delta(F)\alpha(SNR)\frac{\left|\hat{N}(\omega)\right|}{\left|X(\omega)\right|}\right) \tag{17}\]

The values of \(\delta(F)\) were determined empirically and set to:

\[\delta(F)=\begin{cases}1&0Hz<F<1kHz\\ 2.5&1kHz\leq F<2kHz\\ 1.5&2kHz\leq F\end{cases} \tag{18}\]

These values match the ones used in [5]. The addition of the delta term led to a significant increase in the intelligibility of the output, especially when dealing with added helicopter noise. The spectrogram of the added helicopter noise is shown in Figure 25.

Figure 23: SNR ratios of four linearly spaced frequency bins across consecutive frames

Figure 22: Value of the compensation factor \(\alpha\) versus SNR of frame

Figure 24: Spectrogram of added car noise in car1.wav

### Using different frame lengths

By changing the frame length, the time and frequency resolution of the implementation can be changed.
A larger frame length will effectively increase the frequency resolution of each frame while decreasing the time resolution, and vice versa. As mentioned previously, the basic implementation had a frame length of 256 samples and a sampling frequency of 8kHz. Thus each frame consists of 32ms of speech. A shorter frame length resulted in "roughness" in the speech while a longer frame length led to "slurred" speech. These results agree with [8]. Overall, the ideal frame length was found to be around 28ms. The frame length was adjusted by changing the FFTLEN definition in the C code.

### Residual Noise Reduction

This enhancement attempts to remove some of the musical noise by taking advantage of the frame-to-frame randomness [2]. Effectively, as mentioned previously, the musical noise is due to the formation of peaks in the magnitude spectrum which will appear at a random amplitude and frequency for each frame. Therefore, the musical noise can be suppressed by replacing each frequency bin of the current frame with the minimum of the corresponding bins from the previous, current and next frames.

\[\big{|}X_{i}(\omega)\big{|}=\min\Big{(}\big{|}X_{i-1}(\omega)\big{|},\big{|}X_{i}(\omega)\big{|},\big{|}X_{i+1}(\omega)\big{|}\Big{)} \tag{19}\]

However, even with the complex conjugate symmetric optimization, this enhancement is very computationally demanding and could not be implemented in parallel with the enhancements mentioned thus far. For this reason, it was not included in the final implementation.

### Reduce the MMSE Memory

This enhancement aims to increase the responsiveness of the system to non-stationary noise by reducing the MMSE memory. Reducing the MMSE memory is also beneficial from a computational point of view; however, if the speaker continues to produce sound for more than the MMSE memory (measured in seconds), the noise estimate that will be made will be extremely high as segments of speech have effectively been misclassified as noise. This will lead to a serious distortion in the speech signal.

### Changing the windowing function

A final enhancement that was considered was to use a different windowing function. As mentioned in section 3.2, in the implementations thus far, the Hamming window was used to mitigate the effects of spectral artifacts in the frequency domain. Other windows that were considered were the Hanning, Gaussian and Blackman-Harris (3-term). Out of these windows, the Hanning performed the best, which might be due to its higher spectral roll-off (Figure 4).

## 5 Final Implementation and Results

In the final implementation, a compromise between computational complexity and system performance was made when choosing which enhancements to include. Enhancements 4.2, 4.3, 4.4, 4.6, 4.7 and 4.11 were included in the final implementation. Enhancements 4.5 and 4.9 were very computationally demanding and could not be included together with the other enhancements, while the rest of the enhancements did not improve the final output or, in some cases, led to worse performance. The input and output SNR levels for the final implementation are shown in Figure 27. The final implementation managed to reduce the noise significantly for all inputs; however, it performs best when the original signal has a high SNR. It had the worst improvement in SNR with the phantom4.wav input, where it only managed to achieve a 5.98dB improvement.
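For reference, the two per-bin scaling factors used in the final implementation, \(\alpha(SNR)\) from (16) and \(\delta(F)\) from (18), can be written as small helper functions. The sketch below is purely illustrative; the function names are placeholders and the SNR argument is assumed to be in dB.

```
/* Piecewise linear over-subtraction factor of (16); snr_db is in dB. */
float alpha_of_snr(float snr_db)
{
    if (snr_db < -5.0f) return 5.0f;
    if (snr_db > 20.0f) return 1.0f;
    return 5.0f - (4.0f / 20.0f) * snr_db;
}

/* Frequency-dependent "tweaking factor" of (18); freq_hz is the
   analogue frequency represented by the bin.                     */
float delta_of_freq(float freq_hz)
{
    if (freq_hz < 1000.0f) return 1.0f;
    if (freq_hz < 2000.0f) return 2.5f;
    return 1.5f;
}
```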
Figure 27: Improvement in SNR levels with final implementation

Figure 26: Spectrogram of lynx1.wav output with \(\delta(F)\) set to (18) (Enhancement 7)

Figure 25: Spectrogram of added helicopter noise in lynx1.wav

## 6 Conclusion

A real-time speech enhancement system was implemented based on the spectral subtraction technique. Different enhancements were considered and their performance was evaluated based on extensive listening tests, spectrograms and SNR comparisons. The final system manages to reduce the noise present in the output signal substantially while achieving a compromise between broadband noise, musical noise and speech intelligibility. Nevertheless, the system struggles to deal with very low SNR inputs. To deal with these types of inputs, other more recent noise reduction techniques such as Wiener filters or signal subspace approaches could be used.
2304.13594
Diffsurv: Differentiable sorting for censored time-to-event data
Survival analysis is a crucial semi-supervised task in machine learning with numerous real-world applications, particularly in healthcare. Currently, the most common approach to survival analysis is based on Cox's partial likelihood, which can be interpreted as a ranking model optimized on a lower bound of the concordance index. This relation between ranking models and Cox's partial likelihood considers only pairwise comparisons. Recent work has developed differentiable sorting methods which relax this pairwise independence assumption, enabling the ranking of sets of samples. However, current differentiable sorting methods cannot account for censoring, a key factor in many real-world datasets. To address this limitation, we propose a novel method called Diffsurv. We extend differentiable sorting methods to handle censored tasks by predicting matrices of possible permutations that take into account the label uncertainty introduced by censored samples. We contrast this approach with methods derived from partial likelihood and ranking losses. Our experiments show that Diffsurv outperforms established baselines in various simulated and real-world risk prediction scenarios. Additionally, we demonstrate the benefits of the algorithmic supervision enabled by Diffsurv by presenting a novel method for top-k risk prediction that outperforms current methods.
Andre Vauvelle, Benjamin Wild, Aylin Cakiroglu, Roland Eils, Spiros Denaxas
2023-04-26T14:42:31Z
http://arxiv.org/abs/2304.13594v1
# Diffsurv: Differentiable Sorting for censored time-to-event data. ###### Abstract Survival analysis is a crucial semi-supervised task in machine learning with numerous real-world applications, particularly in healthcare. Currently, the most common approach to survival analysis is based on Cox's partial likelihood, which can be interpreted as a ranking model optimized on a lower bound of the concordance index. This relation between ranking models and Cox's partial likelihood considers only pairwise comparisons. Recent work has developed differentiable sorting methods which relax this pairwise independence assumption, enabling the ranking of sets of samples. However, current differentiable sorting methods can not account for censoring, a key factor in many real-world datasets. To address this limitation, we propose a novel method called _Diffsurv_. We extend differentiable sorting methods to handle censored tasks by predicting matrices of possible permutations that take into account the label uncertainty introduced by censored samples. We contrast this approach with methods derived from partial likelihood and ranking losses. Our experiments show that Diffsurv outperforms established baselines in various simulated and real-world risk prediction scenarios. Additionally, we demonstrate the benefits of the algorithmic supervision enabled by Diffsurv by presenting a novel method for top-k risk prediction that outperforms current methods. ## 1 Introduction and Background Survival analysis is an important task in numerous machine learning applications, particularly in the healthcare domain. The goal of survival analysis is to predict the time until the occurrence of an event of interest, such as death or disease onset, based on a set of covariates. In clinical studies, these covariates typically include demographic variables such as sex and age, but may also encompass more complex data modalities such as temporal streams or medical images. However, event times may not be observed due to censoring, especially in observational datasets where many patients may not have experienced the event at the time of data collection. Ignoring censoring can lead to biased predictions towards the censoring event instead of the event of interest. For example, if the end of the study can be determined from the observed covariates, especially if age is included, the predicted event times will be skewed towards the censoring event time instead of the actual event of interest Kvamme & Borgan (2019). The Cox Proportional Hazards (PH) model is widely used for handling censored data in survival analysis (Cox, 1972). The model optimizes a partial likelihood function over ranked data, considering only the order of events, not their exact time of occurrence. As such, Cox's partial likelihood serves as a ranking loss, learning from the order of patients based on their hazard of experiencing an event, not their exact survival time. Raykar et al. (2007) showed that Cox PH and ranking models can be directly equated, with both providing lower bounds to the concordance index, the primary evaluation metric used in survival analysis. A key step in relating the two models assumes only pairwise comparisons or risk sets of size 2. Goldstein & Langholz (1992) show that sub-sampling risk sets produce consistent parameter estimators but that greater risk sets provide more efficient estimators. Cox's partial likelihood and ranking losses underpin current survival analysis methods in deep learning, including DeepSurv Katzman et al. 
(2018) and DeepHit Lee et al. (2018). We present an alternative method that leverages recent advancements in continuous relaxations of sorting operations, enabling end-to-end training of neural networks with ordering supervision (Grover et al., 2019; Blondel et al., 2020; Petersen et al., 2021). This involves incorporating a sorting algorithm into the network architecture, where the order of the samples is known, but their exact values are unsupervised. Here, we introduce _Diffsurv_, an extension of differentiable sorting methods that enables end-to-end training of survival models with censored data. Briefly, our contributions are summarised:

* Our primary contribution is the extension of differentiable sorting methods to account for censoring by introducing the concept of possible permutation matrices.
* We empirically demonstrate that our new differentiable sorting method improves risk ranking performance across multiple simulated and real-world censored datasets.
* We demonstrate that differentiable sorting of censored data enables the development of new methods with practical applications, using the example of end-to-end learning for top-k risk stratification.

## 2 Methods

A dataset with censored event times is summarized as \(\mathcal{D}=\{t_{i},\mathbf{x}_{i},\delta_{i}\}_{i=1}^{N}\). For a patient \(i\), \(t_{i}\) is the observed minimum of the unobserved true survival time \(t_{i}^{*}\) and the censoring time \(c_{i}^{*}\), and \(\delta_{i}\) is the event indicator that is 1 if an event is observed (\(t_{i}^{*}\leq c_{i}^{*}\)) or 0 if the data is censored (\(t_{i}^{*}>c_{i}^{*}\)). Covariates are \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), representing a 1-dimensional vector of size \(d\) or larger dimensional tensors such as image data. \(N\) is the total number of patients. As previously mentioned, it is common to subsample the total possible risk set; we use \(n\) to represent the subsampled risk set size.

In order to train models based on ordering information using differentiable sorting algorithms Petersen (2022), we can minimize the cross-entropy between the ground truth orders represented by the true permutation matrix \(\mathbf{Q}\) and a doubly-stochastic predicted permutation matrix \(\mathbf{P}\). This makes it possible to interpret each element \(P_{ij}\) of the predicted permutation matrix as the predicted probability of permuting from a randomly assigned rank \(i\) to a true rank \(j\). There are multiple methods of relaxing sorting algorithms to produce \(\mathbf{P}\); we will follow Petersen et al. (2021) by using differentiable sorting networks. Sorting networks are a family of sorting algorithms that consist of two basic components: wires and conditional swaps. Wires carry values to be compared at conditional swaps; if one value is bigger than the other, the values carried forward are swapped around. For a random sample of patients to be ordered, each layer of the sorting network can be considered an independent permutation matrix \(\mathbf{P}_{l}\) with elements given by

\[P_{l,ii}=P_{l,jj}=\sigma(z_{j}-z_{i})\text{ and }P_{l,ij}=P_{l,ji}=1-\sigma(z_{j}-z_{i}). \tag{1}\]

Figure 1: Differentiable Sorting for Censored Time-to-Event Data. Inputs, in this case, SVHN images, are transformed into scalar risk values, \(z_{i}\), through a neural network. A differentiable permutation matrix, \(\mathbf{P}\), is computed using sorting networks.
The model can be optimized for downstream tasks, such as risk stratification and top-k highest risk prediction, by using the matrix \(\mathbf{Q}_{p}\) of possible permutations based on the observed events and censoring. These elements represent conditional swaps between two patient risk values \((z_{i},z_{j})\) and use a differentiable relaxation of the step function such as the logistic-sigmoid, where \(\sigma:x\rightarrow\frac{1}{1+e^{-\beta x}}\). The inverse temperature parameter \(\beta>0\) is introduced so when \(\beta\rightarrow\infty\) the functions tend to the exact \(\min\) and \(\max\) functions. The indices being compared are determined by the sorting network and the final predicted probability matrix is the product of each layer of sorting operations, \(\mathbf{P}=(\prod_{i=1}^{n}\mathbf{P}_{i}^{\intercal})^{\intercal}\). For the base case, \(n=2\), Diffsurv is equivalent to the pairwise ranking loss and Cox partial likelihood. Further details on the relations between Diffsurv and baselines is in Appendix A.2.1. The introduction of censored patients means we no longer have access to a ground truth permutation matrix \(\mathbf{Q}\). We cannot determine the exact rank of patients who are censored before another who experienced an event. To address this challenge, we propose a novel extension of differentiable sorting to censored data. Our approach considers the set of _possible permutations_ for each patient, taking into account uncertainty about the true ranking. In Figure 3, we show an example of observed and censored events and the resulting set of possible permutations that can be represented as a permutation matrix \(\mathbf{Q}_{p}\). For a right-censored sample \(i\), we only know that the rank must be lower than the rank of all other samples with an event time lower than the censoring time of \(t_{i}\), i.e. they must be ranked after prior events. For another sample \(j\) with an event at \(t_{j}\), we know that the rank must be lower than other samples with an event time lower than \(t_{j}\), and higher than the rank of other samples either with an event time higher than \(t_{j}\) or with a censoring time higher than \(t_{j}\). We do not know how the rank of \(j\) compares to samples with censoring time lower than \(t_{j}\). If it is possible for patient i to permute to rank j, then \(Q_{pij}=1\), otherwise \(Q_{pij}=0\). Given the possible permutation matrix \(\mathbf{Q}_{p}\) and the predicted permutation matrix \(\mathbf{P}\), the vector of probabilities \(\mathbf{p}\) of a patient being ranked within the set of possible permutations can be computed. Although the ground truth ranks are unknown, the range of possible ranks is known, and the model can be optimized to maximize the sum of the predicted permutation probabilities for the possible ranks of each sample. Noted here as the column-sum of the element-wise product \(\circ\), between \(\mathbf{Q}_{p}\) and \(\mathbf{P}\). \[\mathbf{p}=\sum_{j=1}^{n}(\mathbf{Q}_{p}\circ\mathbf{P})_{i,j}. \tag{2}\] The cross-entropy loss can then be easily applied \[\mathcal{L}=\sum_{i=1}^{n}y_{i}\log(p_{i}) \tag{3}\] where \(y_{i}\) is the true label of the set of possible ranks. Finally, we demonstrate how the algorithmic supervision of sorting algorithms enables the development of novel methods in survival analysis, using the example of top-k risk prediction. In practical settings, it is often not necessary to rank all samples correctly. 
Rather, it is essential to identify the samples with the highest risk, such as by a healthcare provider, to prioritize care and interventions. With Diffsurv, top-k risk prediction is straightforward to implement by optimizing possible permutations within the top-k ranks, whereby \(\mathbf{Q}_{p}\) is adjusted such that only the top-k patient's possible permutations are set to 1. ## 3 Experiments We evaluate the performance of Diffsurv on censored survival data across semi-synthetic and real-world datasets. In each experiment, we train a neural network using Diffsurv and Cox's partial likelihood loss, then compare their respective results. Cox's partial likelihood and the closely related ranking loss are used in popular baselines; Deepsurv (Katzman et al., 2018), Cox-MLP (Kvamme et al., 2019) and DeepHit Lee et al. (2018). We present a new semi-synthetic dataset, _survSVHN_, to evaluate survival models. Based on the Street View House Numbers (SVHN) dataset Petersen et al. (2021), we simulate survival times akin to survMNIST Polsterl (2019). The increased complexity of SVHN offers a testbed which is better able to discern the performance differences between methods. Each house number parameterizes an exponential time function for survival times. Risks are calculated as the logarithm of house numbers, standardized and scaled for a mean survival time of 30. We introduce censoring by randomly selecting 30% of house numbers and replacing true times with values sampled uniformly between \((0,t_{i}]\) (See Figure 5). Risk is predicted from the images with a convolutional neural network with the same hyperparameters as Petersen et al. (2021), with \(z_{i}=f_{\text{CONV}}(\mathbf{x}_{i})\). We also evaluate on four real-world healthcare datasets from Kvarmme et al. (2019). Each dataset has a fairly small number of patients (\(N\leq 8,873\)) and a flat vector of covariates as input. Further details in Appendix A.6. For these datasets, a fully connected neural network is used to find the risk, \(z_{i}=f_{\text{MLP}}(\mathbf{x}_{i})\). Further details on the training and evaluation procedures can be found in Appendix A.4. The results presented in Table 1 demonstrates that Diffsurv achieves equal to or better performance on all datasets analyzed. Additionally, when Diffsurv is optimized for predicting the top 10% of highest risk individuals, it outperforms Cox's partial likelihood on all four datasets. There is a significant improvement in the top 10% highest-risk prediction when comparing models based on Cox's Partial Likelihood (\(\mu=.825\), \(\sigma=.005\)) and Diffsurv optimized for Top-k prediction (\(\mu=.944\), \(\sigma=.008\)) on the survSVHN dataset. ## 4 Conclusion Diffsurv represents a significant step in the field of survival analysis with censored data. Our experiments demonstrate the effectiveness of differentiable sorting methods in improving survival analysis predictions, particularly in censored datasets with Diffsurv matching or improving performance against Cox partial likelihood on all datasets. Additionally, Diffsurv has the potential to drive the development of new methods, such as the top-k risk stratification method presented in this work. It is noteworthy that while our method has shown promising results, further investigation is necessary to fully understand its potential and limitations. For instance, it would be valuable to examine the scalability of the method with larger real-world datasets and its capability to handle more complex censored scenarios. 
Further research could also investigate the integration of Diffsurv into clustering models. With its ability to handle censored data and its end-to-end training capability, Diffsurv presents a promising approach to survival analysis and holds great potential for enhancing risk prediction in real-world applications.

\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & n=2\({}^{\dagger}\) & n=4 & n=8 & n=16 & n=32 \\ \hline Diffsurv &.918 (.003) & **.934** (.002) & **.940** (.001) & **.943** (.002) & **.941** (.002) \\ Cox Partial Likelihood &.913 (.002) &.925 (.002) &.931 (.002) &.933 (.002) &.930 (.003) \\ \hline \hline \end{tabular}
(a) Semi-synthetic survSVHN Dataset Results. Mean (and standard deviation) over 5 folds; the metric is C-index. \({}^{\dagger}\) When \(n=2\) both methods are equivalent to the ranking loss.
\end{table}
Table 1: Results for semi-synthetic and real-world datasets. Bold indicates significantly higher performance (t-test with a significance level of 0.01).
2303.00229
Linear Darboux polynomials for Lotka-Volterra systems, trees and superintegrable families
We present a method to construct superintegrable $n$-component Lotka-Volterra systems with $3n-2$ parameters. We apply the method to Lotka-Volterra systems with $n$ components for $1 < n < 6$, and present several $n$-dimensional superintegrable families. The Lotka-Volterra systems are in one-to-one correspondence with trees on $n$ vertices.
G. R. W. Quispel, Benjamin K. Tapley, D. I. McLaren, Peter H. van der Kamp
2023-03-01T04:42:50Z
http://arxiv.org/abs/2303.00229v2
# Linear Darboux polynomials for Lotka-Volterra systems, trees and superintegrable families

###### Abstract

We present a method to construct superintegrable \(n\)-component Lotka-Volterra systems with \(3n-2\) parameters. We apply the method to Lotka-Volterra systems with \(n\) components for \(1<n<6\), and present several \(n\)-dimensional superintegrable families. The Lotka-Volterra systems are in one-to-one correspondence with trees on \(n\) vertices.

## 1 Introduction

The original 2-dimensional Lotka-Volterra (LV) system,

\[\dot{x}=x(a-by),\qquad\dot{y}=y(-c+dx),\]

was derived as a model to describe the interaction between predator and prey fish [18, 22, 10]. It has been generalised to \(n\)-dimensional systems of the form

\[\dot{x}_{i}=x_{i}\left(b_{i}+\sum_{j}A_{i,j}x_{j}\right), \tag{1}\]

where \(\mathbf{b}\) is a real vector and \(\mathbf{A}\) is a real matrix; these have been studied extensively from various viewpoints including integrability [1, 2, 3, 4, 6, 7, 8, 10, 12, 14, 16, 17, 19]. A vector field on an \(n\)-dimensional manifold is called _superintegrable_ if it admits \(n-1\) functionally independent constants of motion (i.e. first integrals), cf. [21]. In this paper we construct superintegrable \(n\)-component Lotka-Volterra systems with \(3n-2\) parameters.

Darboux polynomials (DPs) are building blocks of rational integrals and their generalizations [11, 13]. Given an ordinary differential equation (ODE)

\[\frac{dx}{dt}=f(x),\]

a Darboux polynomial \(P\) is defined by the existence of a polynomial \(C(x)\) s.t.

\[\frac{dP(x)}{dt}=C(x)P(x) \tag{2}\]

Note that (2) implies that if \(P(x(0))=0\), then \(P(x(t))=0,\forall t\). For this reason Darboux polynomials are also called second integrals.

In section 2, we provide a method to obtain \(m\) integrals for an \(n\)-dimensional homogeneous quadratic ODE, from \(m+n\) Darboux polynomials. In section 3, we give conditions on \(\mathbf{b}\) and \(\mathbf{A}\) which are equivalent to

\[P_{i,k}=\alpha x_{i}+\beta x_{k}\]

being a DP for (1). In section 4, we look at the intersection of the above two classes, i.e. at homogeneous Lotka-Volterra systems, and use the described method and mentioned DPs to construct some superintegrable systems in dimensions 2, 3, and 4. In section 5, we explain how each of these superintegrable \(n\)-dimensional LV systems is in one-to-one correspondence with a tree on \(n\) vertices. Such a tree has \(n-1\) edges, and each of these edges corresponds to an integral. If an edge exists between vertices \(i\) and \(k\), the corresponding integral can be written as a product of \(P_{i,k}\) and powers of the variables \(x_{j}\), \(j=1\ldots n\). In section 6, we cover the superintegrable LV-systems which relate to the 3 non-isomorphic trees on 5 vertices. We also describe the factorisation of the exponents of the variables in terms of minors of the matrix \(A\). In our final section we give some details for the superintegrable \(n\)-dimensional LV-systems that relate to tall trees.

## 2 A rather general method

Let

\[\frac{dP_{1}}{dt}=C_{1}P_{1},\qquad\frac{dP_{2}}{dt}=C_{2}P_{2}\]

then

\[\frac{d}{dt}\,(P_{1}^{\alpha_{1}}P_{2}^{\alpha_{2}})=(\alpha_{1}C_{1}+\alpha_{2}C_{2})P_{1}^{\alpha_{1}}P_{2}^{\alpha_{2}}.\]

Hence cofactors \(C_{i}\) form a linear space. Note that \(C_{1}=C_{2}\) if and only if \(\frac{P_{1}}{P_{2}}\) is an integral.
We also have \[P_{1}^{\alpha_{1}}P_{2}^{\alpha_{2}}\mbox{ is a first integral }\Leftrightarrow\alpha_{1}C_{1}+\alpha_{2}C_{2}=0, \tag{3}\] and more generally \[\prod_{i}P_{i}^{\alpha_{i}}\mbox{ is a first integral }\Leftrightarrow\sum_{i} \alpha_{i}C_{i}=0. \tag{4}\] It follows that integrals that arise in this way are factorisable. If there are more functionally independent DPs than the dimension of this linear space, then there must be one or more integrals. The method we introduce here, produces \(m\) integrals for an \(n\)-dimensional homogeneous quadratic ODE, from \(m+n\) Darboux polynomials. * Find \(n\) independent DPs for the ODE: \[\dot{P}_{i}(\mathbf{x})=P_{i}(\mathbf{x})C_{i}(\mathbf{x})\] (5) The \(C_{i}\) will be linear. Defining \(v_{i}:=\ln(P_{i})\), \(i=1,\ldots,n\), (5) can be written \[\dot{\mathbf{v}}=A\mathbf{x}\] (6) where \(A\) is some constant invertible matrix. * Find \(m\) additional DPs for the ODE (\(m\leq n-1\)). Defining \(w_{i}:=\ln(P_{i})\), \(i=n+1,\ldots,n+m\), we get \[\dot{\mathbf{w}}=B\mathbf{x}\] (7) Eliminating \(\mathbf{x}\), we again get \[\dot{\mathbf{w}}-BA^{-1}\dot{\mathbf{v}}=0\rightarrow\mathbf{w}-BA^{-1} \mathbf{v}=I.\] (8) For \(n\)-component Lotka-Volterra (LV) systems, \(n\) Darboux polynomials are given by the components of the vector \({\bf x}\), we set \({\bf v}={\bf x}\). From (8), by exponentiation, we obtain \(m\) integrals of the form \[P_{n+i}^{|A|}\prod_{j=1}^{n}x_{j}^{Z_{i,j}},\qquad i=1,\ldots,m,\] where \[Z:=-BA^{-1}|A| \tag{9}\] and \(|A|\) is the determinant of \(A\). ## 3 Additional Darboux polynomials for Lotka-Volterra systems The complement of \(\{i,k\}\) is denoted \(\{i,k\}^{c}:=\{1,2,\ldots,n\}\setminus\{i,k\}\). **Lemma 1**.: _Consider a system with_ \[\dot{x}_{i}=x_{i}\left(b_{i}+\sum_{j=1}^{n}A_{i,j}x_{j}\right),\qquad\dot{x}_ {k}=x_{k}\left(b_{k}+\sum_{j=1}^{n}A_{k,j}x_{j}\right). \tag{10}\] _The expression, with \(\alpha\beta\neq 0\),_ \[P_{i,k}=\alpha x_{i}+\beta x_{k}, \tag{11}\] _is a DP if and only if_ \[A_{i,j} = A_{k,j}\mbox{ for }j\in\{i,k\}^{c} \tag{12}\] \[b_{i} = b_{k}\ =\ b\] (13) \[\alpha(A_{k,k}-A_{i,k}) = \beta(A_{k,i}-A_{i,i}) \tag{14}\] _and \((A_{k,k}-A_{i,k})(A_{k,i}-A_{i,i})\neq 0\)._ _Proof._\(\Leftarrow\) We first show that if conditions (12), (13) and (14) hold, then \(P_{i,k}\) defined by (11) is a DP for the ODE defined by (10). Equation (11) implies with (10) that \[\alpha\dot{x}_{i}+\beta\dot{x}_{k} = \alpha x_{i}\left(b_{i}+\sum_{j=1}^{n}A_{i,j}x_{j}\right)+\beta x _{k}\left(b_{k}+\sum_{j=1}^{n}A_{k,j}x_{j}\right)\] \[= \alpha x_{i}\left(b_{i}+A_{i,i}x_{i}+A_{i,k}x_{k}+\Sigma^{\prime }\right)+\beta x_{k}\left(b_{k}+A_{k,i}x_{i}+A_{k,k}x_{k}+\Sigma^{\prime}\right)\] \[= (\alpha x_{i}+\beta x_{k})b+\alpha A_{i,i}x_{i}^{2}+(\alpha A_{i, k}+\beta A_{k,i})x_{i}x_{k}+\beta A_{k,k}x_{k}^{2}+(\alpha x_{i}+\beta x_{k}) \Sigma^{\prime}\] \[\mbox{ using (\ref{eq:11})}\] \[= (\alpha x_{i}+\beta x_{k})(b+A_{i,i}x_{i}+A_{k,k}x_{k}+\Sigma^{ \prime})\mbox{ using (\ref{eq:11}),}\] and where (using (12)) \[\Sigma^{\prime}:=\sum_{j\in\{i,k\}^{c}}A_{i,j}x_{j}=\sum_{j\in\{i,k\}^{c}}A_{ k,j}x_{j}.\] (15) \(\Rightarrow\) Next we show that if \(P_{i,k}\) defined by (11) is a DP for the ODE defined by (10) then (12), (13) and (14) hold. Equation (11) implies with (10) that \[\alpha\dot{x}_{i}+\beta\dot{x}_{k}=\alpha x_{i}\left(b_{i}+\sum_{j=1}^{n}A_{i,j}x_{j}\right)+\beta x_{k}\left(b_{k}+\sum_{j=1}^{n}A_{k,j}x_{j}\right). 
\tag{16}\] First consider all terms that contain \(x_{j}\) on the r.h.s., where \(j\in\{i,k\}^{c}\): \[\alpha x_{i}A_{i,j}x_{j}+\beta x_{k}A_{k,j}x_{j}. \tag{17}\] This must vanish if we substitute \[x_{k}=-\frac{\alpha}{\beta}x_{i}. \tag{18}\] We find \(\alpha(A_{i,j}-A_{k,j})x_{i}x_{j}=0\) and hence \[A_{i,j}=A_{k,j} \tag{19}\] for all \(j\in\{i,k\}^{c}\). Now consider all remaining terms that do not contain any \(x_{j}\), with \(j\in\{i,k\}^{c}\), i.e. \[\alpha x_{i}(b_{i}+A_{i,i}x_{i}+A_{i,k}x_{k})+\beta x_{k}(b_{k}+A_{k,i}x_{i}+A _{k,k}x_{k}). \tag{20}\] Once again (20) must vanish if we substitute (18). Hence \[x_{i}(b_{i}-b_{k})+x_{i}^{2}\left[A_{i,i}-(\frac{\alpha}{\beta}A_{i,k}+A_{k,i} )+\frac{\alpha}{\beta}A_{k,k}\right]=0,\] which implies that \[b_{i}=b_{k}=b,\text{ say},\] and \[\frac{\alpha}{\beta}=\frac{A_{i,i}-A_{k,i}}{A_{i,k}-A_{k,k}}.\] Of course several low-dimensional instances of Lemma 1 have appeared in papers by various authors over the years, cf. e.g. a 2D instance in equation (3.2) of [15], a 3D instance in Proposition 1\(\#(3)\) of [5], and a 4D instance in equation (12) of [10]. ## 4 Superintegrable \(n\)-component Lotka-Volterra systems, \(n=2,3,4\) ### \(n=2\) The system \[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}\left(a_{1}x_{1}+b_{1}x_{2}\right) \\ \dot{x}_{2}=x_{2}\left(c_{1}x_{1}+a_{2}x_{2}\right)\end{array}\right. \tag{21}\] admits the Darboux polynomials \(x_{1},x_{2}\), with cofactors \(a_{1}x_{1}+b_{1}x_{2},a_{2}x_{2}+c_{1}x_{1}\), and the Darboux polynomial \(\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2}\), with cofactor \(a_{1}x_{1}+a_{2}x_{2}\). They give rise to matrices \[A_{=}\begin{pmatrix}a_{1}&b_{1}\\ c_{1}&a_{2}\end{pmatrix}\text{ and }B=\begin{pmatrix}a_{1}&a_{2}\end{pmatrix}, \tag{22}\] and hence to the integral \[\left(\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2}\right)^{a_{ 1}a_{2}-b_{1}c_{1}}x_{1}^{-a_{2}\left(a_{1}-c_{1}\right)}x_{2}^{-a_{1}\left(a _{2}-b_{1}\right)}.\] ### \(n=3\) The system \[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}\left(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3} \right)\\ \dot{x}_{2}=x_{2}\left(a_{2}x_{2}+b_{2}x_{3}+c_{1}x_{1}\right)\\ \dot{x}_{3}=x_{3}\left(a_{3}x_{3}+c_{1}x_{1}+c_{2}x_{2}\right)\end{array}\right. \tag{23}\] relates to matrix \[A_{2}=\begin{pmatrix}a_{1}&b_{1}&b_{2}\\ c_{1}&a_{2}&b_{2}\\ c_{1}&c_{2}&a_{3}\end{pmatrix}. 
\tag{24}\] The following are 2 additional Darboux polynomials: \[P_{1,2}=\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2},\quad P_{2,3}=\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3},\] with cofactors \[C_{1,2}=a_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3},\quad C_{2,3}=c_{1}x_{1}+a_{2}x_{2}+a _{3}x_{3}.\] Thus we have \[B_{2}=\begin{pmatrix}a_{1}&a_{2}&b_{2}\\ c_{1}&a_{2}&a_{3},\end{pmatrix}\] and we find \(2=n-1\) integrals \[\left(\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2}\right)^{|A|} x_{1}{}^{-\left(a_{2}a_{3}-b_{2}c_{2}\right)\left(a_{1}-c_{1}\right)}x_{2}{}^{- \left(a_{2}-b_{1}\right)\left(a_{1}a_{3}-b_{2}c_{1}\right)}x_{3}{}^{b_{2}\left( a_{2}-b_{1}\right)\left(a_{1}-c_{1}\right)}\] and \[\left(\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3}\right)^{|A|} x_{1}{}^{c_{1}\left(a_{3}-b_{2}\right)\left(a_{2}-c_{2}\right)}x_{2}{}^{- \left(a_{2}-c_{2}\right)\left(a_{1}a_{3}-b_{2}c_{1}\right)}x_{3}{}^{-\left(a_{ 3}-b_{2}\right)\left(a_{1}a_{2}-b_{1}c_{1}\right)}.\] ### \(n=4\) The matrix \[A_{3}=\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}\\ c_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&c_{2}&a_{3}&b_{3}\\ c_{1}&c_{2}&c_{3}&a_{4}\end{pmatrix} \tag{25}\] has the property that \(A_{i,j}=A_{i+1,j}\) for all \(i\in\{1,2,3\}\) and \(j\in\{i,i+1\}^{\rm c}\). The associated Lotka-Volterra system is \[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3}+b_ {3}x_{4})\\ \dot{x}_{2}=x_{2}(c_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4})\\ \dot{x}_{3}=x_{3}(c_{1}x_{1}+c_{2}x_{2}+a_{3}x_{3}+b_{3}x_{4})\\ \dot{x}_{4}=x_{4}(c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}+a_{4}x_{4})\end{array}\right. \tag{26}\] The system (26) has 7 Darboux polynomials. The obvious ones are \(P_{i}=x_{i}\), \(i\in\{1,2,3,4\}\), with cofactors \(C_{i}=\sum_{j=1}^{n}A_{i,j}x_{j}\). 
The other three, obtained from Lemma 1, are: \[P_{1,2}=\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2},\] \[P_{2,3}=\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3},\] \[P_{3,4}=\left(c_{3}-a_{3}\right)x_{3}+\left(a_{4}-b_{3}\right)x_{4},\] with cofactors \[C_{1,2}=a_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4},\] \[C_{2,3}=c_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+b_{3}x_{4},\] \[C_{3,4}=c_{1}x_{1}+c_{2}x_{2}+a_{3}x_{3}+a_{4}x_{4}.\] The coefficient matrix from these cofactors is \[B_{3}=\begin{pmatrix}a_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&a_{2}&a_{3}&b_{3}\\ c_{1}&c_{2}&a_{3}&a_{4}\end{pmatrix}.\] The rather general method, introduced in section 2, gives rise to the following \(3=n-1\) functionally independent integrals: \[I_{i,i+1}=P_{i,i+1}^{|A|}x_{1}{}^{Z_{i,1}}x_{2}{}^{Z_{i,2}}x_{3}{}^{Z_{i,3}}x_ {4}{}^{Z_{i,4}},\qquad i\in\{1,2,3\},\] where \(I_{1,2}\) is determined by \[Z_{1,1} =-\left(a_{2}a_{3}a_{4}-a_{2}b_{3}c_{3}-a_{3}b_{3}c_{2}-a_{4}b_{2} c_{2}+b_{2}b_{3}c_{2}+b_{3}c_{2}c_{3}\right)\left(a_{1}-c_{1}\right),\] \[Z_{1,2} =-\left(a_{2}-b_{1}\right)\left(a_{1}a_{3}a_{4}-a_{1}b_{3}c_{3}- a_{3}b_{3}c_{1}-a_{4}b_{2}c_{1}+b_{2}b_{3}c_{1}+b_{3}c_{1}c_{3}\right),\] \[Z_{1,3} =\left(a_{4}b_{2}-b_{3}c_{3}\right)\left(a_{2}-b_{1}\right)\left( a_{1}-c_{1}\right),\] \[Z_{1,4} =b_{3}\left(a_{3}-b_{2}\right)\left(a_{2}-b_{1}\right)\left(a_{1} -c_{1}\right),\] \(I_{2,3}\) is determined by \[Z_{2,1} =c_{1}\left(a_{4}-b_{3}\right)\left(a_{3}-b_{2}\right)\left(a_{2} -c_{2}\right),\] \[Z_{2,2} =-\left(a_{2}-c_{2}\right)\left(a_{1}a_{3}a_{4}-a_{1}b_{3}c_{3}- a_{3}b_{3}c_{1}-a_{4}b_{2}c_{1}+b_{2}b_{3}c_{1}+b_{3}c_{1}c_{3}\right),\] \[Z_{2,3} =-\left(a_{3}-b_{2}\right)\left(a_{1}a_{2}a_{4}-a_{1}b_{3}c_{2}- a_{2}b_{3}c_{1}-a_{4}b_{1}c_{1}+b_{1}b_{3}c_{1}+b_{3}c_{1}c_{2}\right),\] \[Z_{2,4} =b_{3}\left(a_{3}-b_{2}\right)\left(a_{2}-c_{2}\right)\left(a_{1} -c_{1}\right),\] and \(I_{3,4}\) is determined by \[Z_{3,1} =c_{1}\left(a_{4}-b_{3}\right)\left(a_{3}-c_{3}\right)\left(a_{2} -c_{2}\right),\] \[Z_{3,2} =\left(a_{4}-b_{3}\right)\left(a_{3}-c_{3}\right)\left(c_{2}a_{1} -c_{1}b_{1}\right),\] \[Z_{3,3} =-\left(a_{3}-c_{3}\right)\left(a_{1}a_{2}a_{4}-a_{1}b_{3}c_{2}- a_{2}b_{3}c_{1}-a_{4}b_{1}c_{1}+b_{1}b_{3}c_{1}+b_{3}c_{1}c_{2}\right),\] \[Z_{3,4} =-\left(a_{4}-b_{3}\right)\left(a_{1}a_{2}a_{3}-a_{1}b_{2}c_{2}- a_{2}b_{2}c_{1}-a_{3}b_{1}c_{1}+b_{1}b_{2}c_{1}+b_{2}c_{1}c_{2}\right).\] Next we consider the matrix \[A_{4}=\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}\\ c_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&c_{2}&a_{3}&b_{3}\\ c_{1}&c_{3}&b_{2}&a_{4}\end{pmatrix}.\] It has the property that \(A_{i,j}=A_{k,j}\) for all \((i,k)\in\{(1,2),(2,3),(2,4)\}\) and \(j\in\{i,k\}^{\rm c}\). The corresponding Lotka-Volterra system reads \[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3}+b_{ 3}x_{4})\\ \dot{x}_{2}=x_{2}(c_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4})\\ \dot{x}_{3}=x_{3}(c_{1}x_{1}+c_{2}x_{2}+a_{3}x_{3}+b_{3}x_{4})\\ \dot{x}_{4}=x_{4}(c_{1}x_{1}+c_{3}x_{2}+b_{2}x_{3}+a_{4}x_{4})\end{array}\right. 
\tag{27}\] The additional Darboux polynomials are \[P_{1,2} =\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2},\] \[P_{2,3} =\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3},\] \[P_{2,4} =\left(c_{3}-a_{2}\right)x_{2}+\left(a_{4}-b_{3}\right)x_{4},\] with cofactors \[C_{1,2} =a_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4},\] \[C_{2,3} =c_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+b_{3}x_{4},\] \[C_{3,4} =c_{1}x_{1}+a_{2}x_{2}+b_{2}x_{3}+a_{4}x_{4}.\] The coefficient matrix from these cofactors is \[B_{4}=\begin{pmatrix}a_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&a_{2}&a_{3}&b_{3}\\ c_{1}&a_{2}&b_{2}&a_{4}\end{pmatrix}\] and we obtain the functionally independent integrals \[I_{i,k}=P_{i,k}^{|A|}{}_{x_{1}}{}^{Z_{i,i}}{}_{x_{2}}{}^{Z_{i,2}}{}_{x_{3}}{}^ {Z_{i,3}}{}_{x_{4}}{}^{Z_{i,4}},\qquad(i,k)\in\{(1,2),(2,3),(2,4)\},\] where \[Z_{1,1} =-\left(a_{2}a_{3}a_{4}-a_{2}b_{2}b_{3}-a_{3}b_{3}c_{3}-a_{4}b_{2} c_{2}+b_{2}b_{3}c_{2}+b_{2}b_{3}c_{3}\right)\left(a_{1}-c_{1}\right)\] \[Z_{1,2} =-\left(a_{2}-b_{1}\right)\left(a_{1}a_{3}a_{4}-a_{1}b_{2}b_{3}- a_{3}b_{3}c_{1}-a_{4}b_{2}c_{1}+2b_{2}b_{3}c_{1}\right)\] \[Z_{1,3} =b_{2}\left(a_{4}-b_{3}\right)\left(a_{2}-b_{1}\right)\left(a_{1 }-c_{1}\right)\] \[Z_{1,4} =b_{3}\left(a_{3}-b_{2}\right)\left(a_{2}-b_{1}\right)\left(a_{1 }-c_{1}\right),\] \[Z_{2,1} =c_{1}\left(a_{4}-b_{3}\right)\left(a_{3}-b_{2}\right)\left(a_{2}-c_{2}\right)\] \[Z_{2,2} =-\left(a_{2}-c_{2}\right)\left(a_{1}a_{3}a_{4}-a_{1}b_{2}b_{3}- a_{3}b_{3}c_{1}-a_{4}b_{2}c_{1}+2b_{2}b_{3}c_{1}\right)\] \[Z_{2,3} =-\left(a_{3}-b_{2}\right)\left(a_{1}a_{2}a_{4}-a_{1}b_{3}c_{3}- a_{2}b_{3}c_{1}-a_{4}b_{1}c_{1}+b_{1}b_{3}c_{1}+b_{3}c_{1}c_{3}\right)\] \[Z_{2,4} =b_{3}\left(a_{3}-b_{2}\right)\left(a_{2}-c_{2}\right)\left(a_{1 }-c_{1}\right),\] and \[Z_{3,1} =c_{1}\left(a_{4}-b_{3}\right)\left(a_{3}-b_{2}\right)\left(a_{2} -c_{3}\right)\] \[Z_{3,2} =-\left(a_{2}-c_{3}\right)\left(a_{1}a_{3}a_{4}-a_{1}b_{2}b_{3}- a_{3}b_{3}c_{1}-a_{4}b_{2}c_{1}+2b_{2}b_{3}c_{1}\right)\] \[Z_{3,3} =b_{2}\left(a_{4}-b_{3}\right)\left(a_{2}-c_{3}\right)\left(a_{ 1}-c_{1}\right)\] \[Z_{3,4} =-\left(a_{4}-b_{3}\right)\left(a_{1}a_{2}a_{3}-a_{1}b_{2}c_{2}- a_{2}b_{2}c_{1}-a_{3}b_{1}c_{1}+b_{1}b_{2}c_{1}+b_{2}c_{1}c_{2}\right).\] ## 5 Connection to trees To each of the above \(n\)-component Lotka-Volterra systems above we associate a free (unrooted) tree \(T\) as follows. To the \(n\) rows of the matrix \(A\) we can associate \(n\) vertices of a graph. This undirected graph will have an edge between vertex \(i\) and vertex \(k\) if the condition that \(A_{i,j}=A_{k,j}\) for all \(j\in\{i,k\}^{\mathrm{c}}\) is satisfied. Thus, the systems (22), (24), (25) and (27) relate to the trees depicted in Figure 1. Vice versa, a tree \(T\) on \(n\) (ordered) vertices has \(n-1\) (ordered) edges. We associated to \(T\) a matrix \(A\) as follows. We start with an \(n\times n\) diagonal matrix \(A\), with \(A_{i,i}=a_{i}\). Then for each edge of \(T\) we fix two off-diagonal entries of \(A\) as follows. For the \(m\)-th edge of the graph \(T\), \(e_{m}=(i,k)\) with \(i<k\), we set \(A_{i,k}=b_{m}\) and \(A_{k,i}=c_{m}\). In [20] we show that the remaining entries of the matrix \(A\) are uniquely determined by the condition that \(A_{i,j}=A_{k,j}\) when \((i,k)\) is an edge of \(T\) and \(j\in\{i,k\}^{\mathrm{c}}\). The matrix \(A\) has \(3n-2\) free parameters and defines a Lotka-Volterra system \[\dot{x}_{i}=x_{i}\sum_{j=1}^{n}A_{i,j}x_{j},\qquad i=1,2,\ldots,n, \tag{28}\] with \(n-1\) integrals. 
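As a small illustration of this construction (an example included here only for clarity): take the path on 3 vertices with edges \(e_{1}=(1,2)\) and \(e_{2}=(2,3)\). The edges fix \(A_{1,2}=b_{1}\), \(A_{2,1}=c_{1}\), \(A_{2,3}=b_{2}\) and \(A_{3,2}=c_{2}\), and the conditions \(A_{1,j}=A_{2,j}\) for \(j=3\) and \(A_{2,j}=A_{3,j}\) for \(j=1\) then force \(A_{1,3}=b_{2}\) and \(A_{3,1}=c_{1}\), so that

\[A=\begin{pmatrix}a_{1}&b_{1}&b_{2}\\ c_{1}&a_{2}&b_{2}\\ c_{1}&c_{2}&a_{3}\end{pmatrix},\]

which is exactly the matrix (24) with its \(3n-2=7\) free parameters.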
In a forthcoming paper, [20], we will prove their functional independence, Theorem 2.

**Theorem 2**.: _Each tree on \(n\) vertices gives rise to a Lotka-Volterra system with \(3n-2\) parameters, which admits \(n-1\) functionally independent integrals._

One can think of the parameters \(a_{i}\), \(b_{j}\), \(c_{k}\) as weights in a complete digraph \(D\) (allowing both loops and multiple edges) which is associated to \(T\). The matrix \(A\) is then nothing but the adjacency matrix of \(D\). The connection between Lotka-Volterra systems and graphs, via the adjacency matrix of the graph, has been made before [2, 9, 7, 12], but in the context of undirected or directed graphs, and (mainly) anti-symmetric (and hence Hamiltonian) Lotka-Volterra systems. The general setting of complete digraphs seems to be new.

## 6 Superintegrable 5-component Lotka-Volterra systems

There are 3 non-isomorphic trees on 5 vertices, see Figure 2.

Figure 1: The trees connected to the Lotka-Volterra systems (22), (24), (25) and (27) (from left to right).

Figure 2: These are the three non-isomorphic trees on 5 vertices.

Following the procedure in the previous subsection, the trees in Figure 2 give rise to matrices \(A\)

\[\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}&b_{4}\\ c_{1}&a_{2}&b_{2}&b_{3}&b_{4}\\ c_{1}&c_{2}&a_{3}&b_{3}&b_{4}\\ c_{1}&c_{2}&c_{3}&a_{4}&b_{4}\\ c_{1}&c_{2}&c_{3}&c_{4}&a_{5}\end{pmatrix},\ \begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}&b_{4}\\ c_{1}&a_{2}&b_{2}&b_{3}&b_{4}\\ c_{1}&c_{2}&a_{3}&b_{3}&b_{4}\\ c_{1}&c_{2}&c_{3}&a_{4}&b_{4}\\ c_{1}&c_{2}&c_{4}&b_{3}&a_{5}\end{pmatrix},\ \begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}&b_{4}\\ c_{1}&a_{2}&b_{2}&b_{3}&b_{4}\\ c_{1}&c_{2}&a_{3}&b_{3}&b_{4}\\ c_{1}&c_{3}&b_{2}&a_{4}&b_{4}\\ c_{1}&c_{4}&b_{2}&b_{3}&a_{5}\end{pmatrix}, \tag{29}\]

and hence to Lotka-Volterra systems, each with 13 free parameters,

\[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}\left(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}\right)\\ \dot{x}_{2}=x_{2}\left(a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}\right)\\ \dot{x}_{3}=x_{3}\left(a_{3}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}+c_{2}x_{2}\right)\\ \dot{x}_{4}=x_{4}\left(a_{4}x_{4}+b_{4}x_{5}+c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}\right)\\ \dot{x}_{5}=x_{5}\left(a_{5}x_{5}+c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}+c_{4}x_{4}\right)\end{array}\right. \tag{30}\]

\[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}\left(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}\right)\\ \dot{x}_{2}=x_{2}\left(a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}\right)\\ \dot{x}_{3}=x_{3}\left(a_{3}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}+c_{2}x_{2}\right)\\ \dot{x}_{4}=x_{4}\left(a_{4}x_{4}+b_{4}x_{5}+c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}\right)\\ \dot{x}_{5}=x_{5}\left(a_{5}x_{5}+b_{3}x_{4}+c_{1}x_{1}+c_{2}x_{2}+c_{4}x_{3}\right)\end{array}\right. \tag{31}\]

and

\[\left\{\begin{array}{l}\dot{x}_{1}=x_{1}\left(a_{1}x_{1}+b_{1}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}\right)\\ \dot{x}_{2}=x_{2}\left(a_{2}x_{2}+b_{2}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}\right)\\ \dot{x}_{3}=x_{3}\left(a_{3}x_{3}+b_{3}x_{4}+b_{4}x_{5}+c_{1}x_{1}+c_{2}x_{2}\right)\\ \dot{x}_{4}=x_{4}\left(a_{4}x_{4}+b_{2}x_{3}+b_{4}x_{5}+c_{1}x_{1}+c_{3}x_{2}\right)\\ \dot{x}_{5}=x_{5}\left(a_{5}x_{5}+b_{2}x_{3}+b_{3}x_{4}+c_{1}x_{1}+c_{4}x_{2}\right).\end{array}\right. \tag{32}\]

Using the methods explained in sections 2 and 3, we can construct 4 functionally independent integrals for each of these systems.
As in section 4, the exponents in the integrals exhibit interesting factorisation properties. Below we provide the integrals for systems (30), (31) and (32), expressing each exponent as a product of differences of parameters and a minor of \(A\). We let \(A^{I;J}\) denote the matrix \(A\) with rows \(i\in I\) and columns \(j\in J\) deleted. Its determinant \(|A^{I;J}|\) is called a minor of \(A\). The Lotka-Volterra system (30) admits the four functionally independent integrals \[I_{1,2} =\left(\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2 }\right)^{|A|}x_{1}^{(c_{1}-a_{1})|A^{1;1}|}x_{2}^{(b_{1}-a_{2})|A^{2;2}|}x_{3 }^{(a_{2}-b_{1})(a_{1}-c_{1})|A^{2,3;1,2}|}\] \[x_{4}^{(a_{3}-b_{2})(a_{2}-b_{1})(a_{1}-c_{1})|A^{2,3;4,1;2,3}}x _{5}^{(a_{4}-b_{3})(a_{3}-b_{2})(a_{2}-b_{1})(a_{1}-c_{1})b_{4}}\] \[I_{2,3} =\left(\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3 }\right)^{|A|}x_{1}^{(a_{5}-b_{4})(a_{4}-b_{3})(a_{3}-b_{2})(a_{2}-c_{2})c_{1}} x_{2}^{(c_{2}-a_{2})|A^{2;2}|}x_{3}^{(b_{2}-a_{3})|A^{3;3}|}\] \[x_{4}^{(a_{3}-b_{2})(a_{2}-c_{2})(a_{1}-c_{1})|A^{1,3;4,1;2,3}}x _{5}^{(a_{4}-b_{3})(a_{3}-b_{2})(a_{2}-c_{2})(a_{1}-c_{1})b_{4}}\] \[I_{3,4} =\left(\left(c_{3}-a_{3}\right)x_{3}+\left(a_{4}-b_{3}\right)x_{4 }\right)^{|A|}x_{1}^{(a_{5}-b_{4})(a_{4}-b_{3})(a_{3}-c_{3})(a_{2}-c_{2})c_{1}} x_{2}^{(a_{5}-b_{4})(a_{4}-b_{3})(a_{3}-c_{3})|A^{2,4,5;3,4,5}|}\] \[x_{3}^{(c_{3}-a_{3})|A^{3;3}|}x_{4}^{(b_{3}-a_{4})|A^{4;4}}x_{5}^{(a _{4}-b_{3})(a_{3}-c_{3})(a_{2}-c_{2})(a_{1}-c_{1})b_{4}}\] \[I_{4,5} =\left(\left(c_{4}-a_{4}\right)x_{4}+\left(a_{5}-b_{4}\right)x_{5 }\right)^{|A|}x_{1}^{(a_{5}-b_{4})(a_{4}-c_{4})(a_{3}-c_{3})(a_{2}-c_{2})c_{1}} x_{2}^{(a_{5}-b_{4})(a_{4}-c_{4})(a_{3}-c_{3})|A^{2,3,5;3,4,5}|}\] \[x_{3}^{(a_{5}-b_{4})(a_{4}-c_{4})|A^{3,5;4,5}}x_{4}^{(c_{4}-a_{4})|A^ {4;4}}x_{5}^{(b_{4}-a_{5})|A^{5;5}|}\] The Lotka-Volterra system (31) admits the four functionally independent integrals \[I_{1,2} =((c_{1}-a_{1})\,x_{1}+(a_{2}-b_{1})\,x_{2})^{|A|}\,x_{1}^{(c_{1}-a_ {1})|A^{1;1}|}x_{2}^{(b_{1}-a_{2})|A^{2;2}|}x_{3}^{(a_{1}-c_{1})(a_{2}-b_{1})|A^{ 2;3;1,2}|}\] \[\quad x_{4}^{(a_{1}-c_{1})(a_{2}-b_{1})(a_{3}-b_{2})(a_{5}-b_{4})b _{3}}x_{5}^{(a_{1}-c_{1})(a_{2}-b_{1})(a_{3}-b_{2})(a_{4}-b_{3})b_{4}}\] \[I_{2,3} =((c_{2}-a_{2})\,x_{2}+(a_{3}-b_{2})\,x_{3})^{|A|}\,x_{1}^{(a_{2}- c_{2})(a_{3}-b_{2})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(c_{2}-a_{2})|A^{2;2}|}x_{3 }^{(b_{2}-a_{3})|A^{3;3}|}\] \[\quad x_{4}^{(a_{1}-c_{1})(a_{2}-c_{2})(a_{3}-b_{2})(a_{5}-b_{4})b _{3}}x_{5}^{(a_{1}-c_{1})(a_{2}-c_{2})(a_{3}-b_{2})(a_{4}-b_{3})b_{4}},\] \[I_{3,4} =((c_{3}-a_{3})\,x_{3}+(a_{4}-b_{3})\,x_{4})^{|A|}\,x_{1}^{(a_{2}- c_{2})(a_{3}-c_{3})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(a_{3}-c_{3})(a_{4}-b_{3})(a_{5}- b_{4})|A^{2;4,5;3,4,5}|}\] \[\quad x_{3}^{(c_{3}-a_{3})|A^{3;3}|}x_{4}^{(b_{3}-a_{4})|A^{4;4}|} x_{5}^{(a_{1}-c_{1})(a_{2}-c_{2})(a_{3}-c_{3})(a_{4}-b_{3})b_{4}},\] \[I_{3,5} =((c_{4}-a_{3})\,x_{3}+(a_{5}-b_{4})\,x_{5})^{|A|}\,x_{1}^{(a_{2}- c_{2})(a_{3}-c_{4})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(a_{3}-c_{4})(a_{4}-b_{3})( a_{5}-b_{4})|A^{2;4,5;3,4,5}|}\] \[\quad x_{3}^{(c_{4}-a_{3})|A^{3;3}|}x_{4}^{(a_{1}-c_{1})(a_{2}-c_ {2})(a_{3}-c_{4})(a_{5}-b_{4})b_{3}}x_{5}^{(b_{4}-a_{5})|A^{5;5}|}.\] The Lotka-Volterra system (32) admits the four functionally independent integrals \[I_{1,2} =((c_{1}-a_{1})\,x_{1}+(a_{2}-b_{1})\,x_{2})^{|A|}\,x_{1}^{(c_{1} -a_{1})|A^{1;1}|}x_{2}^{(b_{1}-a_{2})|A^{2;2}|}x_{3}^{(a_{1}-c_{1})(a_{2}-b_{1} )(a_{4}-b_{3})(a_{5}-b_{4})b_{2}}\] \[\quad 
x_{4}^{(a_{1}-c_{1})(a_{2}-b_{1})(a_{3}-b_{2})(a_{5}-b_{4}) b_{3}}x_{5}^{(a_{1}-c_{1})(a_{2}-b_{1})(a_{3}-b_{2})(a_{4}-b_{3})b_{4}},\] \[I_{2,3} =((c_{2}-a_{2})\,x_{2}+(a_{3}-b_{2})\,x_{3})^{|A|}\,x_{1}^{(a_{2} -c_{2})(a_{3}-b_{2})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(c_{2}-a_{2})|A^{2;2 }|}x_{3}^{(b_{2}-a_{3})|A^{3;3}|}\] \[\quad x_{4}^{(a_{1}-c_{1})(a_{2}-c_{2})(a_{3}-b_{2})(a_{5}-b_{4} )b_{3}}x_{5}^{(a_{1}-c_{1})(a_{2}-c_{2})(a_{3}-b_{2})(a_{4}-b_{3})b_{4}},\] \[I_{2,4} =((c_{3}-a_{2})\,x_{2}+(a_{4}-b_{3})\,x_{4})^{|A|}\,x_{1}^{(a_{2} -c_{3})(a_{3}-b_{2})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(c_{3}-a_{2})|A^{2;2 }|}\] \[\quad x_{3}^{(a_{1}-c_{1})(a_{2}-c_{3})(a_{4}-b_{3})(a_{5}-b_{4} )b_{2}}x_{4}^{(b_{3}-a_{4})|A^{4;4}|}x_{5}^{(a_{1}-c_{1})(a_{2}-c_{3})(a_{3}- b_{2})(a_{4}-b_{3})b_{4}},\] \[I_{2,5} =((c_{4}-a_{2})\,x_{2}+(a_{5}-b_{4})\,x_{5})^{|A|}\,x_{1}^{(a_{2} -c_{4})(a_{3}-b_{2})(a_{4}-b_{3})(a_{5}-b_{4})c_{1}}x_{2}^{(c_{4}-a_{2})|A^{2;2 }|}\] \[\quad x_{3}^{(a_{1}-c_{1})(a_{2}-c_{4})(a_{4}-b_{3})(a_{5}-b_{4} )b_{2}}x_{4}^{(a_{1}-c_{1})(a_{2}-c_{4})(a_{3}-b_{2})(a_{5}-b_{4})b_{3}}x_{5}^{( b_{4}-a_{5})|A^{5;5}|}.\] The factorisation will be described in more detail in [20]. ## 7 A hierarchy of superintegrable Lotka-Volterra systems Consider the tall tree on \(n\) vertices depicted in Figure 3. It gives rise to the \(n\times n\) matrix: \[A=\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}&\cdots&b_{n-1}\\ c_{1}&a_{2}&b_{2}&b_{3}&\cdots&b_{n-1}\\ c_{1}&c_{2}&a_{3}&b_{3}&\cdots&b_{n-1}\\ c_{1}&c_{2}&c_{3}&a_{4}&\cdots&b_{n-1}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ c_{1}&c_{2}&c_{3}&c_{4}&\cdots&a_{n}\end{pmatrix}, \tag{33}\] of which matrices (22),(24),(25), and the left matrix in (29), are special cases taking \(n=2,3,4\) and \(5\) respectively. Figure 3: Tall tree on \(n\) indices. The vertices are labeled in black, the edges in red. For arbitrary \(n\), the tall tree provides us with the Lotka-Volterra system: \[\left\{\begin{aligned} \dot{x}_{1}&=x_{1}\left(a_{1}x_{1}+b_{ 1}x_{2}+b_{2}x_{3}+\cdots+b_{n-1}x_{n}\right)\\ \dot{x}_{2}&=x_{2}\left(c_{1}x_{1}+a_{2}x_{2}+b_{2}x _{3}+\cdots+b_{n-1}x_{n}\right)\\ \dot{x}_{3}&=x_{3}\left(c_{1}x_{1}+c_{2}x_{2}+a_{3} x_{3}+\cdots+b_{n-1}x_{n}\right)\\ &\quad\vdots\\ \dot{x}_{n-1}&=x_{n-1}\left(c_{1}x_{1}+c_{2}x_{2}+ \cdots+a_{n-1}x_{n-1}+b_{n-1}x_{n}\right)\\ \dot{x}_{n}&=x_{n}\left(c_{1}x_{1}+c_{2}x_{2}+ \cdots+c_{n-1}x_{n-1}+a_{n}x_{n}\right),\end{aligned}\right. \tag{34}\] The \(n\) coordinates \(x_{i}\), \(i=1,\ldots,n\), are Darboux polynomials. The system (34) admits \(n-1\) additional Darboux polynomials of the form \[P_{i,i+1}=\left(c_{i}-a_{i}\right)x_{i}+\left(a_{i+1}-b_{i}\right)x_{i+1}, \qquad i=1,\ldots,n-1,\] with cofactors \[C_{i,i+1}=c_{1}x_{1}+\cdots+c_{i-1}x_{i-1}+a_{i}x_{i}+a_{i+1}x_{i+1}+b_{i+1}x_ {i+2}+\cdots b_{n-1}x_{n}.\] Their coefficients can be organised into the following \((n-1)\times n\) matrix: \[B=\begin{pmatrix}a_{1}&a_{2}&b_{2}&b_{3}&\cdots&b_{n-2}&b_{n-1}\\ c_{1}&a_{2}&a_{3}&b_{3}&\cdots&b_{n-2}&b_{n-1}\\ c_{1}&c_{2}&a_{3}&a_{4}&\cdots&b_{n-2}&b_{n-1}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}&c_{2}&c_{3}&c_{4}&\cdots&a_{n-1}&b_{n-1}\\ c_{1}&c_{2}&c_{3}&c_{4}&\cdots&a_{n-1}&a_{n}\end{pmatrix}, \tag{35}\] Using the matrices \(A\) and \(B\) we define \(Z=-BA^{-1}|A|\) where \(|A|\) is the determinant of \(A\). We obtain \(n-1\) integrals of the form \[K_{i}=P_{i,i+1}^{|A|}\prod_{j=1}^{n}x_{j}^{Z_{i,j}},\qquad i=1,\ldots,n-1.\] One can show, cf. 
[20], that the exponents factorise and that the integrals \(K_{i}\) are functionally independent (which implies superintegrability). Introducing the notation \[\mathbb{N}_{j}^{n}=\{k\in\mathbb{N}:j\leq k\leq n\},\] we find, for all \(i\in\mathbb{N}_{1}^{n-1},j\in\mathbb{N}_{1}^{n}\), \[Z_{i,j}=\begin{cases}(a_{i}-c_{i})\prod_{j<k<i}(a_{k}-c_{k})\prod_{i<k\leq n} (a_{k}-b_{k-1})|A^{\mathbb{N}_{j}^{i-1}\cap\mathbb{N}_{i+1}^{n}:\mathbb{N}_{j+ 1}^{n}}|&j<i,\\ (c_{i}-a_{i})|A^{i;i}|&j=i,\\ (b_{i}-a_{i+1})|A^{i+1;i+1}|&j=i+1,\\ (a_{i}-c_{i})\prod_{1<k<i}(a_{k}-c_{k})\prod_{i<k<j}(a_{k}-b_{k-1})|A^{ \mathbb{N}_{i}^{i-1}\cap\mathbb{N}_{i+1}^{j}:\mathbb{N}_{i}^{j-1}}|&j>i+1.\end{cases}\] This formula provides a more efficient way to calculate the exponents in the integrals \(K_{i}\) than using the definition of \(Z\), which involves matrix multiplication, inversion and taking the determinant of an \(n\times n\) matrix. The special case \(a_{i}=0\quad(i=1,\ldots,n),b_{i}=-c_{i+1}\quad(i=1,\ldots,n-1)\) was studied in [17]. **Acknowledgement** GRWQ is grateful to Silvia Perez Cruz for alleviating the plague years and to Sydney Mathematical Research Institute (SMRI) for travel support.
2308.15230
Providing Previously Unseen Users Fair Recommendations Using Variational Autoencoders
An emerging definition of fairness in machine learning requires that models are oblivious to demographic user information, e.g., a user's gender or age should not influence the model. Personalized recommender systems are particularly prone to violating this definition through their explicit user focus and user modelling. Explicit user modelling is also an aspect that makes many recommender systems incapable of providing hitherto unseen users with recommendations. We propose novel approaches for mitigating discrimination in Variational Autoencoder-based recommender systems by limiting the encoding of demographic information. The approaches are capable of, and evaluated on, providing users that are not represented in the training data with fair recommendations.
Bjørnar Vassøy, Helge Langseth, Benjamin Kille
2023-08-29T11:37:33Z
http://arxiv.org/abs/2308.15230v1
# Providing Previously Unseen Users Fair Recommendations Using Variational Autoencoders ###### Abstract An emerging definition of fairness in machine learning requires that models are oblivious to demographic user information, e.g., a user's gender or age should not influence the model. Personalized recommender systems are particularly prone to violating this definition through their explicit user focus and user modelling. Explicit user modelling is also an aspect that makes many recommender systems incapable of providing hitherto unseen users with recommendations. We propose novel approaches for mitigating discrimination in Variational Autoencoder-based recommender systems by limiting the encoding of demographic information. The approaches are capable of, and evaluated on, providing users that are not represented in the training data with fair recommendations. ## 1 Introduction Fairness in recommender systems is becoming a popular and diverse research field. Burke (2017) formalized the multi-stakeholder nature of the recommendation setting: Producers are interested in mitigating popularity bias (Ahanger et al., 2022) to ensure their products are given the exposure they deserve. Consumers expect to be treated similarly regardless of demographic attributes like age, race, and gender. In addition to the stakeholder perspectives, there is no single definition of what constitutes a fair recommendation. A consumer-side perspective focusing on discrimination of demographic user groups may require that similar ratings are estimated for each group (Kamishima et al., 2018), that each group receives similar recommendations (Farnadi et al., 2018), that each group are equally satisfied with their recommendations (Yao and Huang, 2017), or that model representations do not correlate with the user groups. This research focuses on the latter notion (_Neutral Representations_) while also evaluating whether similar recommendations are given to different demographic groups (_Recommendation Parity_). A mostly unexplored perspective of fair recommender systems relates to recommending for users that were not part of the training data. A user's first impression of a platform may decide if they will continue using it. A man who aspires to be a florist and likes romance movies may feel stereotypes and be discouraged from further interaction if he is first recommended physical labour careers or action movies. Unlike contemporary fair recommender systems, our research focuses on introducing fairness in a model architecture that can recommend for all users, including users not represented in training data. We propose Variational Autoencoder (VAE) approaches that only require a list of items a user has interacted with and no pre-trained user representations. The approaches may replace contextual and item-to-item recommender systems used to onboard new users or serve as a complete recommendation platform that does not require frequent model updates. One goal of this research is to provide more insight into the competitiveness of VAE-based recommender systems given their limited use in recommender system research. Given this, we wish to verify if our VAE-based models can fairly process unseen users and if the same models produce state-of-the-art fair recommendations for said users. Formalized research questions are as follows: **1.** Are VAE-based recommender systems competitive? and **2.** Can the encoded demographic information in VAE latent states be reduced when processing new users? 
## 2 Related Work Multiple methods have been proposed for filtering out sensitive information embedded in model representations and parameters. Many of these methods train adversarial models tasked with classifying sensitive attributes given representations belonging to the recommender systems. The recommender systems are then penalized for encoding sensitive information by adding additional objectives of fooling the adversarial models. This strategy has been applied directly to latent factors of factorization models (Resheff et al., 2019; Xu et al., 2021; Wu et al., 2021), indirectly to train attribute filters (Bose and Hamilton, 2019), and for filtering graph embeddings (Wu et al., 2021; Liu et al., 2022). Another strategy encourages representation neutrality by optimizing for representations that are orthogonal to sensitive dimensions in the representation space (Wu et al., 2021; Islam et al., 2021). There are also examples of introducing neutrality by adjusting sampling schemes applied while training representations (Rahman et al., 2019; Li et al., 2022), and methods for causally isolating sensitive information to specific model factors that are replaced or dropped during inference (Buyl and Bie, 2020; Frisch et al., 2021). The framework proposed by Li et al. (2023) shares the most high-level similarities with this work in considering Representation Neutrality fairness optimized through adversarial methods and in being capable of recommending for unseen users. The latter is achieved by training an auxiliary mapping function to map new users to the representation space of the trained model. Creager et al. (2019) propose a VAE-based classification model that allows its users to specify which sensitive attributes it should be oblivious to. One of the key contributions is to isolate all sensitive user information in one part of the latent representation. The sensitive part of the latent representation is subjected to a secondary task of classifying the user's sensitive attributes, and an approximated KL-divergence measure is minimized to encourage independence between the two parts of the latent representation. ## 3 Background ### Variational Autoencoders The Variational Autoencoder (VAE) is a variational Bayesian model initially proposed by Kingma and Welling (2014) that has since seen wide application within representation learning and image generation. The vanilla VAE is posed as a graphical model where the observed data \(\mathbf{x}\) depends on a latent variable \(\mathbf{z}\). The final objective of maximizing the Evidence Lower Bound (ELBO) can be derived from minimizing the KL-divergence from the variational distribution \(q(\mathbf{z}|\mathbf{x})\) to the true posterior \(p(\mathbf{z}|\mathbf{x})\). \[\text{ELBO}(q,p)=\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\left[\log p(\mathbf{x}|\mathbf{z}) \right]-D_{KL}\left[q(\mathbf{z}|\mathbf{x})||p(\mathbf{z})\right], \tag{1}\] where \(p(\mathbf{x}|\mathbf{z})\) is the likelihood and \(p(\mathbf{z})\) is the latent prior. Neural networks typically parameterize \(p(\mathbf{x}|\mathbf{z})\) and \(q(\mathbf{z}|\mathbf{x})\). Higgins et al. (2017) proposed the \(\beta\)-VAE which adds a \(\beta\) factor associated with the KL-divergence term. In particular, they explore \(\beta>1\) to produce disentangled encodings, i.e., the increased focus on independence between dimensions of \(\mathbf{z}\) leads to semantically different concepts encoded in them. 
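For concreteness, a minimal PyTorch-style sketch of this \(\beta\)-weighted objective (our illustration, not code from the cited works), for a Gaussian encoder \(q(\mathbf{z}|\mathbf{x})\) with a standard normal prior, so that the KL term has a closed form:

```python
# Beta-weighted ELBO of Eq. (1) for q(z|x) = N(mu, diag(exp(log_var))) and p(z) = N(0, I).
import torch

def reparameterise(mu, log_var):
    # One sample z ~ q(z|x) via the reparameterisation trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

def beta_elbo(log_px_given_z, mu, log_var, beta=1.0):
    # log_px_given_z: Monte Carlo estimate of E_q[log p(x|z)], shape (batch,)
    kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0, dim=-1)
    return log_px_given_z - beta * kl  # maximise this (minimise its negative)
```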
Similarly, Kim and Mnih (2018) adds an optimization term for increased disentanglement that penalizes dependency between dimensions of the latent representation. \[\text{Loss}_{\text{FactorVAE}}(q,p)=\text{ELBO}(q,p)-\gamma D_{KL}\left[q(\mathbf{z}) ||\prod_{i}q(\mathbf{z}_{i})\right] \tag{2}\] ### VAE-based Recommender System Liang et al. (2018) propose applying a VAE as a Recommender System by having \(\mathbf{x}\) represent user interaction history, e.g., items that the user has interacted with or rated. Recommendations are extracted from the fuzzy reconstruction of the user interaction history based on values assigned to items the user has not previously interacted with. ## 4 Methodology The key idea behind our model is to leverage the bottleneck characteristics of the VAE to encourage the reduction of sensitive user information such that the provided recommendations are minimally influenced by such information. The probabilistic nature and low dimensionality of the latent representation can simplify the objective of filtering out sensitive information. The main proposed model setups are illustrated in Figure 1. ### Base Model The underlying recommender system is based on the model proposed by Liang et al. (2018). In addition to posing the recommendation task as the objective of reconstructing the users' interaction histories, Liang et al. (2018) propose multiple extensions. They apply dropout on the input during training (noisy VAE) to improve generalization. They explore Gaussian and Logistic likelihood \(p(\mathbf{x}|\mathbf{z})\) before settling on the Multinomial for its performance and nice properties. The multinomial likelihood does not explicitly penalize probability density allocated to the items the user has not Figure 1: Illustration of the Split Latent model setups. The encoder is dynamically designed and is, in practice, implemented as two separate encoders, one for \(\mathbf{z}\) and one for \(\mathbf{b}\). Key details are that no explicit sensitive information \(\mathbf{s}\) is provided or required during inference, and the sensitive part of the latent representation \(\mathbf{b}\) is not used for recommendation. interacted with, which in turn avoids the assumption of many other options in that these are all items that the user dislike and that should be allocated zero probability. \[\log p(\mathbf{x}|\mathbf{z})=\log\left[\prod_{i}p(\mathbf{x}_{i}|\mathbf{z})^{\mathbf{x}_{i}}\right] =\sum_{i}\mathbf{x}_{i}\text{log}p(\mathbf{x}_{i}|\mathbf{z}),\text{where}\sum_{i}p(\mathbf{x}_{i }|\mathbf{z})=1 \tag{3}\] A specific item \(\mathbf{x}_{i}\)'s contribution is zero if it is not found in the user's interaction history, regardless of what the decoder parameterizes the item's probability \(p(\mathbf{x}_{i}|\mathbf{z})\) to be. Unlike Higgins et al. (2017), Liang et al. (2018) explore \(\beta<1\), citing that generation has limited applications in the recommender system setting, and identify \(\beta=0.2\) as a good candidate. The model of Liang et al. (2018) was altered slightly in this work: The proposed \(\beta\)-annealing strategy was dropped since it did not yield noticeable improvements, the Hyperbolic Tangent activation was switched out with the SELU (Klambauer et al., 2017), and better results were achieved when reducing the dimensionality of the latent state. The latent dimension was set to 64 for the baseline setup and 24 for all fairness setups, as opposed to 200 in Liang et al. (2018). 
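A rough sketch of the base model just described (our illustration, not the released implementation; layer sizes and the dropout rate are placeholders) with the SELU encoder and the multinomial log-likelihood of Equation (3):

```python
# Noisy VAE encoder with SELU activation, plus the multinomial log-likelihood used
# to score the decoder's reconstruction of a user's interaction vector x.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, n_items, hidden=600, latent=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_items, hidden), nn.SELU())
        self.mu = nn.Linear(hidden, latent)
        self.log_var = nn.Linear(hidden, latent)

    def forward(self, x):
        h = self.body(F.dropout(x, p=0.5, training=self.training))  # input dropout
        return self.mu(h), self.log_var(h)

def multinomial_log_lik(logits, x):
    # Eq. (3): items absent from the history (x_i = 0) contribute nothing.
    return torch.sum(F.log_softmax(logits, dim=-1) * x, dim=-1)
```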
Further, \(\beta\) was set to 1 for the extended setups as it was found to synergize better with the fairness extensions. ### Adversarial Setup The adversarial setup is considered a baseline setup and couples the base model with an adversarial model tasked with classifying sensitive attributes from the latent state \(\mathbf{z}\). This is one of the extensions explored by Borges and Stefanidis (2022). Insight from the adversarial model help the main model avoid encoding sensitive information. ### Split Latent Setups The split latent state setups bisect the latent state into one part for encoding sensitive user information \(\mathbf{b}\) and another part that is free for sensitive information \(\mathbf{z}\). Fair recommendations are produced by decoding \(\mathbf{z}\), while \(\mathbf{b}\) is discarded during evaluation. The motivation behind the bisected latent state is to leverage the sensitive information in \(\mathbf{b}\) to inform the model of the information that should not be encoded in \(\mathbf{z}\). An inspirational split latent setup is proposed by Creager et al. (2019). Our approaches differ in that we consider recommendation rather than classification, we do not limit the encoding of each sensitive attribute to single dimensions in \(\mathbf{b}\), and we posit isotropic Gaussian priors to \(\mathbf{b}\). A classification task is coupled with \(\mathbf{b}\) through another decoder, which will be referred to as the Sensitive Decoder, with the goal of inferring the user's sensitive attributes. In both Split Latent setups, binary sensitive attributes were considered, and the re-classification was optimized using cross-entropy. The choice of an isotropic Gaussian prior \(p(\mathbf{z},\mathbf{b})\) will inherently optimize the VAE to produce independent dimensions of \([\mathbf{z}\ \mathbf{b}]\), but this is supplemented with an explicit term for penalizing correlation between \(\mathbf{z}\) and \(\mathbf{b}\). The full objective to be maximized is \[\begin{split}\text{SplitLatentObj}(q,p)=\\ \mathbb{E}_{q(\mathbf{z},\mathbf{b}|\mathbf{x})}\left[\log p(\mathbf{x}|\mathbf{z})+ \alpha\log p(\mathbf{s}|\mathbf{b})\right]-\beta D_{KL}\left[q(\mathbf{z},\mathbf{b}|\mathbf{x})||p (\mathbf{z},\mathbf{b})\right]-\gamma D_{KL}\left[q(\mathbf{z},\mathbf{b})||q(\mathbf{z})q(\mathbf{b}) \right],\end{split} \tag{4}\] where \(\beta\), \(\alpha\) and \(\gamma\) are hyperparameters for adjusting the influence of terms, \(\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\left[\alpha\text{log}p(\mathbf{s}|\mathbf{b})\right]\) comprise the re-classification objective of sensitive attribute \(\mathbf{s}\) given \(\mathbf{b}\), and \(D_{KL}\left[q(\mathbf{z},\mathbf{b})||q(\mathbf{z})q(\mathbf{b})\right]\) is the term introduced for further penalizing correlation between \(\mathbf{z}\) and \(\mathbf{b}\). The latter is defined and implemented in two different ways. **Generative Adversarial Network (GAN) KL.** Inspiration has been taken from Kim and Mnih (2018), who proposed approximating the KL divergence from the aggregate posterior \(\hat{q}(\mathbf{z})=\mathbb{E}_{\mathbf{x}\in\mathcal{D}_{s}}\left[q(\mathbf{z}|\mathbf{x})\right]\), where \(\mathcal{D}_{s}\) is the sampled minibatch, to a factorization over each dimension using an adversarial model. This measure is approximated by an adversarial model trained to tell the original latent representations apart from factorized ones where dimensions have been shuffled across the minibatch. 
The equivalent for \(q(\mathbf{z})q(\mathbf{b})\) is to shuffle \(\mathbf{b}\) in its entirety across the minibatch. **Empiric KL.** GAN approaches are often hard to optimize due to the moving objectives and the balancing of two competing models. An alternative approach is based on the analytic KL-divergence of Gaussian distributions. The empiric covariance of a minibatch of latent representations replaces the covariances of \(q(\mathbf{z},\mathbf{b})\), while the covariances of \(q(\mathbf{z})q(\mathbf{b})\) are replaced by the same empiric covariance matrix where all covariances between \(\mathbf{z}\) and \(\mathbf{b}\) are set to zero. \[D_{KL}\left[q(\mathbf{z},\mathbf{b})||q(\mathbf{z})q(\mathbf{b})\right]=\frac{1}{2}\left[ \log\frac{|\hat{\Sigma}_{2}|}{|\hat{\Sigma}_{1}|}-d+\text{tr}\left\{\hat{ \Sigma}_{2}^{-1}\hat{\Sigma}_{1}\right\}\right],\text{when }\hat{\mu}_{1}=\hat{\mu}_{2}, \tag{5}\] where \(\hat{\mu}\) are empiric means, \(\hat{\Sigma}\) are empiric covariances, \(d\) is the number of dimensions in the latent state, and \(\text{tr}\{\}\) is the trace operator. The covariance matrix \(\hat{\Sigma}_{2}\) is block-diagonal, so its inverse can be computed block by block. ## 5 Results ### Experimental Setup Two established datasets were used to conduct the experiments. Unlike contemporary work, both datasets were split into training, validation and test datasets containing disjoint sets of users to accommodate the setting of providing unseen users with recommendations. Common for both datasets was the choice of two binary sensitive attributes: age and gender. The datasets provide gender as'male', 'female', or various forms for'missing'/'undefined'/'other', but the application of labels other than male and female was deemed too inconsistent to make out one or more additional sensitive labels. Age was also made binary to complement the gender attribute, and a threshold age of 35 was chosen. Users that miss either sensitive attribute are filtered out since these are required during training and evaluation. **MovieLens1M**: The 1 million version of the MovieLens dataset (Harper and Konstan, 2015) is the most applied dataset for evaluating consumer-side fairness in recommender systems. The raw dataset contains 1 million movie ratings and is the largest MovieLens dataset that provides user attributes that are considered sensitive. Following established practices for converting ratings into implicit feedback data, ratings of 4 or 5 out of 5 were labelled 1 while lower ratings were set to 0. **LastFM 2B**: The LFM-2b dataset (Melchiorre et al., 2021) comprises 2 billion listening events collected on the lastFM platform. This dataset was processed to focus on the most recent and relevant data, as well as reducing the number of items. The most recent two years of data were extracted, and the objective was set to recommend artists rather than albums or songs. Finally, artists with fewer than 110 listening events were filtered out. Key statistics of the datasets are summarized in Table 1. #### 5.1.1 Metrics **NDCG@k**: NDCG@k was used as the recommendation utility metric. NDCG is a popular ranking metric in recommender systems that rewards ranking relevant items high. For all experiments, k was set to 10. **AUC**: AUC(Area Under the ROC Curve) is a metric commonly used for classification that considers the True Positive Rate and the False Positive Rate for all possible threshold values. 
For fair recommender systems it has been used to evaluate how well sensitive information has been filtered out of representations, which is its role in this work. For this particular objective and binary sensitive attributes, the perfect AUC score is 0.5 which indicates that all sensitive information has been filtered out. AUC is measured by training an auxiliary classification model on the model representations. One challenge with this metric is that it cannot be applied to the SLIM baseline. \(\chi^{2}\)**-statistic**: \(\chi^{2}\)-test can be used to estimate the probability that two independent samples were drawn from the same distribution. For the considered setting, it is natural to compare how often items occur in recommendations given to different sensitive groups, e.g., young and senior users. Top recommendations are not independent, rendering the test unreliable, and we instead focus on the statistic. The top 100 recommendations given to each user were selected to aggregate the contingency tables of each sensitive attribute. The number of items considered was set for each dataset such that each cell had an expected value of at least 3 in all setups since the long-tail nature of recommendation yields a lot of cells with low expectations and since the statistic is highly sensitive to such cells. Expectations are adjusted according to the number of users who can be recommended each item, i.e., users who have not already interacted with it. One issue with considering the top recommendations observations is that we lose the ordering information in the ranking, where rank 1 is assigned more confidence than rank 100. **Kendall-Tau distance**: Kendall-Tau distance is a distance metric between two ordered lists. The original distance is undefined when items only occur in one of the lists, so an extension designed for the recommender setting1 is applied. The extension is shown to have some intuitive properties and will output values from 1, being a perfect match, to -1, when the original recommendation lists contain disjointed sets of items. This metric was also considered for top-\(k\) recommendations given to sensitive groups, where \(k\) is set to 100. The recommendations assigned to each sensitive group were aggregated over individual user recommendations while applying a rank discounting scheme to give higher importance to highly ranked items and to serve as a shared normalization strategy for the different models. Footnote 1: [https://godatadriven.com/blog/using-kendalls-tau-to-compare-recommendations/](https://godatadriven.com/blog/using-kendalls-tau-to-compare-recommendations/) #### 5.1.2 Models Since no implementations of viable fair baselines have been identified, comparisons are made with non-fair baselines and focus on how the different proposed fairness extensions of a base VAE recommender system impact recommendation utility and fairness measures. All code is publically available on GitHub2. Footnote 2: [https://github.com/BjornarVass/fair-vae-rec](https://github.com/BjornarVass/fair-vae-rec) **SLIM**: SLIM [11] is an established baseline recommender system that culminates in an item-to-item parameter matrix. The item-to-item nature allows it to recommend for users not considered during model fitting. 
\begin{table} \begin{tabular}{c|c c c c c} **Dataset** & **\# Users** & **\#Female/\#Male** & **\#Senior/\#Young** & **\# Items** & **\# Records** \\ \hline MovieLens & 6k & 1.7k/4.3k & 1.4k/4.6k & 3.5k & 575k \\ LastFM & 14.6k & 3k/11.5k & 1.3k/13.3k & 22k & 8750k \\ \hline \end{tabular} \end{table} Table 1: Key dataset statistics. **VAErec**: The model proposed by Liang et al. (2018) with minor alterations as specified in Section 4.1. This model is not trained with any fairness objectives and serves as the base model that is extended to make out the other VAE models. **VAEadv**: VAERec extended with an adversarial model that filters sensitive information from the latent representation. **VAEgan**: VAERec extended with bisected latent representation for isolating and filtering out sensitive information. Independence between sections is optimized using an adversarial model that approximates a KL-divergence term. **VAEemp**: VAERec extended with bisected latent representation for isolating and filtering out sensitive information. Independence between sections is optimized using an analytic KL-divergence term with empiric covariances. ### Main Results VAErec achieves comparable NDCG as SLIM on the MovieLens dataset, but SLIM performs better on the LastFM dataset. This is contrary to the results of Liang et al. (2018) which showed improved NDCG@100 over SLIM on two movie datasets, of which the results on the 20 million version of MovieLens were successfully reproduced using our implementation. It is unclear which factors affect the performance of the VAE-recommender, particularly since our changes further improved the original model and considering that two datasets from the same source conflict in performance. All fairness extensions are shown to improve the AUC scores significantly. In particular, the AUC scores of VAErec on the MovieLens dataset indicate that the latent aptly encodes user gender and age with AUCs above 0.8, whereas VAEemp reduces this to 0.65-0.63 at the cost of 0.035 NDCG (\(\approx 11\%\)). VAEemp outperforms the other options when considering NDCG and Representation Neutrality and manages to filter out age better than gender despite the base model inherently encoding more age information. The secondary fairness metrics suggest that all fair extensions produce more similar recommendations for users of different sensitive groups than SLIM and VAErec. The \(\chi^{2}\)-statistic of VAErec is worse than that of SLIM, but the fair extensions all significantly outperform SLIM. Interestingly, VAEemp is the worst-performing fair extension on these metrics, which suggests they do not perfectly reflect the AUC metric. The LastFM dataset offers different insights on minority groups since the sensitive groups are very skewed, e.g., only 10% of the users are considered senior. This seems to be reflected in a smaller improvement of Representation Neutrality achieved by the fair extensions compared with their performance on the MovieLens dataset. The improved gender AUC is comparable, but the base AUC of VAErec is lower. All three fair extensions performed similarly on gender, but VAEemp struggles more than the other two on age. The extensions' lesser improvements of AUC over VAErec come at a smaller relative reduction in NDCG than the one seen on the Movielens dataset (i 4% for VAEadv and VAEemp). Kendall Tau on LastFM is significantly better for all VAE-based models, and the fair models all outperform VAErec. 
On the other hand, only VAEadv achieve a \(\chi^{2}\)-statistic that is better than that of SLIM for this dataset. One confounding factor is that SLIM is observed to provide \begin{table} \begin{tabular}{c c c c c c c c} **Model** & **NDCG@101\({}^{\dagger}\)** & **AUC GJ** & **AUC A\({}_{\downarrow}\)** & **\(\chi^{2}\)@100 C j\({}_{\downarrow}\)** & **\(\chi^{2}\)@100 A\({}_{\downarrow}\)** & **K.T@100 G\({}^{\dagger}\)** & **K.T@100 A\({}^{\dagger}\)** \\ \hline **SLIM** & **0.328\(\pm\)**0.009 & - & - & 2825\(\pm\)280.1 & 2198\(\pm\)237.6 & 0.476\(\pm\)0.075 & 0.448\(\pm\)0.045 \\ **VAErec** & 0.321\(\pm\)0.008 & 0.804\(\pm\)0.024 & 0.859\(\pm\)0.019 & 2990\(\pm\)415.9 & 2636\(\pm\)359.3 & 0.559\(\pm\)0.054 & 0.537\(\pm\)0.035 \\ **VAEadv** & 0.280\(\pm\)0.008 & 0.678\(\pm\)0.036 & 0.675\(\pm\)0.043 & 1121\(\pm\)273.4 & 904.01\(\pm\)194.5 & 0.820\(\pm\)0.025 & 0.792\(\pm\)0.038 \\ **VAEgem** & 0.277\(\pm\)0.010 & 0.687\(\pm\)0.037 & 0.695\(\pm\)0.050 & **1054\(\pm\)**232.6 & **852.52\(\pm\)**280.0 & **0.884\(\pm\)**0.036 & **0.841\(\pm\)**0.029 \\ **VAEemp** & 0.286\(\pm\)0.008 & **0.652\(\pm\)**0.032 & **0.629\(\pm\)**0.041 & 1355\(\pm\)302.7 & 1151\(\pm\)228.8 & 0.804\(\pm\)0.033 & 0.770\(\pm\)0.043 \\ \hline \end{tabular} \end{table} Table 2: MovieLens results. more diverse recommendations, with roughly 60% of recommendations being the top 10% popular items, vs roughly 85% for VAEemp, meaning that the 750 most popular items considered in the \(\chi^{2}\)-statistic of LastFM cover far fewer of the total SLIM recommendations. When comparing the VAE-based models, it is clear that the fair extensions succeed in reducing the very large initial \(\chi^{2}\)-statistics of VAErec. ### Sampling Feature VAE-based models typically only consider the parameterized mean of the variational distribution during inference, i.e., it is deterministic. In a setting where fairness is of utmost importance, one can leverage the parameterized mean and variation to sample latent states that are inherently noisy. Sampled latent states are typically fairer, i.e., more neutral, but produce less accurate recommendations. Thus, VAE-based recommender systems can dynamically offer two different modes depending on how the user values the performance and fairness tradeoff. The sampling feature is compared for different values of the hyperparameter \(\beta\) since it directly controls the loss term regulates the parameterized distribution. The default setting of \(\beta=1.0\) yields the biggest difference in results when applying sampled or deterministic latent states. Sampling with \(\beta=1.0\) resulted in the best fairness scores but also the worst NDCG score. On the other hand, \(\beta\) set to 0.2 and 0.6 produced marginally better NDCG and AUC for the deterministic mode with \(\beta=0.2\) coming out on top. Reducing \(\beta\) appears to improve the fairness of the deterministic mode at the loss of fairness in the sampling mode. Notably, sampling with \(\beta=0.2\) and not sampling with \(\beta=1\) achieves similar NDCG, but the AUCs achieved by the former is noticably lower. This suggests that coupling the choice of \(\beta\) and the sampling strategy may yield good settings for scenarios where one metric is assigned a strict upper or lower bound constraint. Large \(\beta\) may be ideal in dynamic scenarios where users can choose to turn on sampling to improve fairness. 
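A minimal sketch (ours) of the two inference modes compared here: the deterministic mode decodes the parameterized mean, while the sampling mode decodes a reparameterized draw from \(q(\mathbf{z}|\mathbf{x})\), trading some utility for extra obfuscation.

```python
# Deterministic vs. sampled inference for a trained VAE recommender (illustrative only).
import torch

def recommend(decoder, mu, log_var, k=10, sample=False):
    z = mu if not sample else mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
    scores = decoder(z)                      # unnormalised scores over all items
    return torch.topk(scores, k=k, dim=-1).indices
```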
\begin{table} \begin{tabular}{c c c c} **Model** & **NDCG@10\(\uparrow\)** & **AUC G\(\downarrow\)** & **AUC A\(\downarrow\)** \\ \hline **VAEemp** \(\beta=1.0\) & 0.286\(\pm\)0.008 & 0.651\(\pm\)0.032 & 0.629\(\pm\)0.041 \\ **sampled** & 0.256\(\pm\)0.009 & 0.595\(\pm\)0.035 & 0.562\(\pm\)0.026 \\ \hline **VAEemp** \(\beta=0.6\) & 0.292\(\pm\)0.009 & 0.652\(\pm\)0.029 & 0.615\(\pm\)0.042 \\ **sampled** & 0.269\(\pm\)0.007 & 0.603\(\pm\)0.036 & 0.573\(\pm\)0.038 \\ \hline **VAEemp** \(\beta=0.2\) & 0.292\(\pm\)0.008 & 0.640\(\pm\)0.044 & 0.607\(\pm\)0.030 \\ **sampled** & 0.279\(\pm\)0.008 & 0.619\(\pm\)0.042 & 0.587\(\pm\)0.038 \\ \hline \end{tabular} \end{table} Table 4: Results from different \(\beta\) settings, with and without sampling latent representations.

## 6 Conclusion and Future Work

Relating to the first research question, this research indicates that VAE-based recommenders consistently perform well but are not universally competitive. While few architectures excel in all scenarios, this insight may motivate research into other model types capable of providing new users with fair recommendations. To answer the second research question, we have shown that these models can be successfully applied and extended to significantly limit the demographic information encoded in the latent state, meaning that they can provide new and established users alike with fairer recommendations. The improvement in fairness comes with a minor deterioration of recommendation utility, as is seen in similar research. The VAE also offers a means for further obfuscating user representation through parameterized latent state variance. For the primary fairness definition of Neutral Representation, the proposed extensions significantly outperform the base model. The models also perform well on the secondary fairness definition of Recommendation Parity where the proposed VAE extensions either outperform or match relevant models on two metrics. For future work, it could be interesting to evaluate the model on other datasets representing different recommendation settings. It would also be desirable to compare with other models that optimize for fairness when recommending for new users. Such models could be based on contextual recommender systems or other architectures that do not learn explicit user representations.

## Acknowledgments

This publication has been partly funded by the SFI NorwAI (Centre for Research-based Innovation, 309834). The authors gratefully acknowledge the financial support from the Research Council of Norway and the partners of the SFI NorwAI.
2307.14973
Insufficient Gibbs Sampling
In some applied scenarios, the availability of complete data is restricted, often due to privacy concerns; only aggregated, robust and inefficient statistics derived from the data are made accessible. These robust statistics are not sufficient, but they demonstrate reduced sensitivity to outliers and offer enhanced data protection due to their higher breakdown point. We consider a parametric framework and propose a method to sample from the posterior distribution of parameters conditioned on various robust and inefficient statistics: specifically, the pairs (median, MAD) or (median, IQR), or a collection of quantiles. Our approach leverages a Gibbs sampler and simulates latent augmented data, which facilitates simulation from the posterior distribution of parameters belonging to specific families of distributions. A by-product of these samples from the joint posterior distribution of parameters and data given the observed statistics is that we can estimate Bayes factors based on observed statistics via bridge sampling. We validate and outline the limitations of the proposed methods through toy examples and an application to real-world income data.
Antoine Luciano, Christian P. Robert, Robin J. Ryder
2023-07-27T16:09:19Z
http://arxiv.org/abs/2307.14973v2
# Insufficient Gibbs Sampling ###### Abstract In some applied scenarios, the availability of complete data is restricted, often due to privacy concerns, and only aggregated, robust and inefficient statistics derived from the data are accessible. These robust statistics are not sufficient, but they demonstrate reduced sensitivity to outliers and offer enhanced data protection due to their higher breakdown point. In this article, operating within a parametric framework, we propose a method to sample from the posterior distribution of parameters conditioned on different robust and inefficient statistics: specifically, the pairs (median, MAD) or (median, IQR), or one or more quantiles. Leveraging a Gibbs sampler and the simulation of latent augmented data, our approach facilitates simulation according to the posterior distribution of parameters belonging to specific families of distributions. We demonstrate its applicability on the Gaussian, Cauchy, and translated Weibull families. **Keywords: Gibbs Sampling, Robust Statistics, Markov Chain Monte Carlo, latent variables, completion** ## 1 Introduction Tukey (1960) highlighted the sensitivity of traditional statistical methods to deviations from Gaussian assumptions. This led to theoretical advancements by Huber (1964) and Hampel (1968), laying the foundation for robust statistical techniques. Due to data protection laws, the sharing of sensitive personal data is restricted among businesses and scientific institutions. To address this, organizations such as Eurostat and the World Bank often do not release individual-level data \(X\), but only robust and insufficient aggregated summary statistics \(T(X)\) instead. In other cases, observations may be summarized with robust statistics to reduce the impact of outliers or of model misspecification. This limitation creates a need for statistical methods that can effectively infer parameters from observed robust statistics. In the Bayesian setting, we might impose a parametric distribution \((\mathcal{F}_{\theta})_{\theta\in\Theta}\) on the original observations, and wish to sample from the posterior distribution of \(\theta\) given robust statistics. The posterior distribution is typically intractable, making its simulation challenging and an interesting area of research. Previous studies have employed Approximate Bayesian Computation (ABC) with robust summary statistics, such as the median, Median Absolute Deviation (MAD), or Interquartile Range (IQR) (Green et al, 2015; Marin et al, 2014; Turner and Van Zandt, 2012). Huang et al (2023) argue that for ABC or other simulation-based inference methods, robust summary statistics make the pseudo-posterior robust to model misspecification (Frazier et al, 2020). While ABC provides an approach to infer posterior distributions when likelihood evaluations are difficult, these methods only enable simulation from an approximation of the posterior distribution and are less satisfactory than a scheme to sample from the exact posterior. Matching quantiles have also been explored in various contexts (McVinish, 2012; Nirwan and Bertschinger, 2022). We propose here a method to sample from the posterior distribution of \(\theta\) given robust statistics \(T(X)\) using augmented data simulation on \(X\) as in Tanner and Wong (1987), from a parametric family \((\mathcal{F}_{\theta})_{\theta\in\Theta}\). 
This is achieved through a two-step Gibbs sampler based on a decomposition: \[\pi\left(\theta\mid T(X)\right)\propto\int_{\mathbb{R}^{N}}\pi(\theta,X\mid T (X))dX\propto\int_{\mathbb{R}^{N}}\pi(\theta\mid X)\pi(X\mid T(X),\theta)dX\] Thus, in each iteration, we first simulate from \(X\mid T(X),\theta\) and then simulate from \(\theta\mid X\) assuming \(X\sim\mathcal{F}_{\theta}\). We discuss in detail the first step, which is the main contribution of this work. The second step can be straightforward when the distribution family admits a conjugate prior (e.g., Gaussian case) or by using a Metropolis-within-Gibbs step in other cases. We consider specific cases where \(T\) is a pair of robust location and scale statistics, such as (median, Median Absolute Deviation) and (median, Interquartile Range), as well as cases where \(T\) is a collection of empirical quantiles of \(X\). Our only assumption on the family of distributions \(\mathcal{F}_{\theta}\) is that we can evaluate pointwise the probability density function \(f_{\theta}\) and the cumulative density function \(F_{\theta}\). In this setting, it is in particular possible to sample from a truncated distribution, either directly or by rejection sampling. The examples we consider are the Gaussian and Cauchy distributions from the location-scale family, and the translated Weibull distribution. However, our strategy can be applied to other continuous distribution families. The paper is structured as follows: In Section 2, we introduce our method for observing a sequence of quantiles. Then, in Section 3, we address the scenario where only the median and the interquartile range are observed. The most intriguing case of observing the median and the Median Absolute Deviation (MAD) of the sample is discussed in Section 4. Finally, we discuss some compelling numerical results in Section ## 2 Quantile case We first present the case where the observed robust statistics are a set of quantiles. This setting has already been considered in the literature, but our approach uses a different method, which we will extend in later sections to more complex sets of robust statistics. In this section, we consider the case where we observe a vector of \(M\in\mathbb{N}^{*}\) quantiles of the data \(X\). A collection of probabilities \((p_{j})_{j=1\ldots M}\) is pre-specified, and we observe \(T(X)=(q_{j})_{j=1\ldots M}\) where \(q_{j}\) is the empirical \(p_{j}\) quantile of \(X\). Akinshin (2022) also proposed an MCMC method, implemented in STAN (NUTS or HMC versions), to sample from the posterior distribution when only quantiles are observed. However, they treat the observed quantiles as theoretical ones, and thus assume that they observe the collection \(\left(F_{\theta}^{(-1)}(p_{j})\right)\). This assumption is reasonable when the sample size \(N\) is large. However, observed quantiles are actually calculated differently in most standard software, and this assumption can lead to a bias in the posterior inference of the parameters, especially with small sample sizes \(N\). Therefore, in this paper, we adopt a different approach by considering the observed quantiles as empirical quantiles obtained from the widely used quantile estimator \(Q(\cdot,p)\), as defined in Hyndman and Fan (1996, Definition 7). 
This estimator is commonly implemented in major statistical software: it is for example the default of the quantile() function in R, the default of the Python function numpy.quantile, the default of the Julia function Statistics.quantile!, and the behavior of the PERCENTILE function in Excel. It is given by the following formula: \[Q(X,p)=(1-g)X_{(i)}+gX_{(i+1)} \tag{1}\] where \(h=(N-1)p+1\), \(i=\lfloor h\rfloor\) (the integer part of \(h\)), and \(g=h-i\) (the fractional part of \(h\)). We will later note those variables \(h_{j},i_{j}\) and \(g_{j}\) for the observed \(p_{j}\) quantile. Note that other definitions of the empirical quantile function, corresponding to slightly different linear interpolations, were also proposed by Hyndman and Fan (1996) and are available in certain software; our approach can easily be adapted to any of these alternative definitions. We now develop a computational method to simulate a vector \(X\) that follows a distribution \(\mathcal{F}_{\theta}\) and satisfies the conditions \(Q(X,p_{j})=q_{j}\) for \(j=1,\ldots,J\), where \(Q\) is the quantile estimator. In this scenario, we have complete knowledge of the apportionment of the vector \(X\) across \(M+1\) intervals. The theoretical apportionment is presented in Figure 1. However, as we consider the empirical quantiles here, these intervals and proportions may slightly vary. This knowledge allows us to resample the vector \(X\) while preserving the verified conditions \((Q(X,p_{1}),\ldots,Q(X,p_{M}))=(q_{1},\ldots,q_{M})\). First, we simulate the coordinates of \(X\) that determine the observed quantiles \(Q(X,p_{1}),\ldots,Q(X,p_{M})\). Second, we simulate the remaining coordinates of \(X\) using truncated distributions, ensuring that the correct number of coordinates falls within each zone defined by the previously simulated coordinates. We detail the first step of this process; the second is straightforward. To simulate these coordinates according to the correct distribution, we must first identify the indexes of the order statistics. From the above definition, we have for \(j=1,\ldots,M\): * If \(g_{j}=0\), we have \(Q(X,p_{j})=X_{(i_{j})}\), which we refer to as "deterministic", and we denote \(i_{j}\) as its index. * If \(g_{j}\neq 0\), we have \(Q(X,p_{j})=(1-g_{j})X_{(i_{j})}+g_{j}X_{(i_{j}+1)}\), which we say is a linear combination of the order statistics with indexes \(i_{j}\) and \(i_{j}+1\). In this case, we sample \(X_{(i_{j})}\), and then obtain \(X_{(i_{j}+1)}\) as a deterministic transformation of \(X_{(i_{j})}\) and \(q_{j}\). We denote \(J_{D}=\{j\in\{1,\ldots,M\}\mid g_{j}=0\}\) and \(J_{S}=\{j\in\{1,\ldots,M\}\mid g_{j}>0\}\). We have \(\{1,\ldots,M\}=J_{D}\cup J_{S}\). Thus, the quantiles of interest \(Q(X,p_{1}),\ldots,Q(X,p_{M})\) are totally determined by the order statistics of indexes in \(I=\{i_{j}\mid j=1,\ldots,M\}\cup\{i_{j}+1\mid j\in J_{S}\}\). **Remark 1**.: _For simplicity, we make the assumption (in our presentation and in our code) that \(\forall j,p_{j+1}-p_{j}\geq\frac{2}{N+1}\). Under this assumption, each order statistic appears at most once in the set of empirical constraints of the form of Equation 1; in other words, we assume that \(\forall j\in J_{S},i_{j}+1<i_{j+1}\). Our method could easily be generalized to lift this assumption._ In the remainder of this section, we describe our MCMC algorithm for this setting. We begin by proposing a method for initializing a vector \(X^{0}\) which satisfies the observed quantiles. 
This initialization step ensures that our resampled data adheres to the desired quantile values. Subsequently, we introduce a method for global resampling of our augmented data using order statistics results and Markov chain simulations. We used the Metropolis-Hastings algorithm with a kernel to facilitate the generation of new samples based on the order statistics. ### Initialization with observed quantiles Our algorithm requires an initial value of the vector \(X^{0}\) which verifies that \(\forall j,Q(X^{0},p_{j})=q_{j}\). First, we initialize the parameter vector \(\theta^{0}\) arbitrarily to enable the simulation of our vector. In order to meet the observed quantiles, we set the order statistics that determine the values in the quantiles. For deterministic quantiles (\(g_{j}=0\)), we directly assign an observation equal to \(q_{j}\). For quantiles requiring simulation, we introduce a Figure 1: Apportionment of the vector \(X\) with observed \(p_{j}\) quantiles \(q_{j}\). The values above the axis represent the theoretical proportions of observations contained in these intervals. positive distance parameter \(\epsilon_{j}\). Specifically, for all \(j\) in \(J_{S}\), we set \(X_{(i_{j})}=q_{j}-\epsilon_{j}g_{j}\) and \(X_{(i_{j}+1)}=q_{j}+\epsilon_{j}(1-g_{j})\), ensuring that \(g_{j}X_{(i_{j}+1)}+(1-g_{j})X_{(i_{j})}=q_{j}\). To enhance the efficiency of our initialization, we can normalize \(\epsilon_{j}\) to be equal to the variance of \(X_{(i_{j})}\) under the assumption that \(X\) follows a distribution denoted as \(\mathcal{F}_{\theta^{0}}\). Once these observations have been initialized, we can complete the initial vector \(X^{0}\) by simulating the remaining observations in the appropriate intervals using a truncated distribution \(\mathcal{F}_{\theta^{0}}\). ### Full resampling with observed quantiles We present a method for conducting complete resampling of our vector \(X\) according to the distribution \(\mathcal{F}_{\theta}\), while simultaneously preserving the observed quantile values. As mentioned previously, these quantiles are determined by the order statistics of \(I\) (where \(I\) represents the previously introduced set of coordinates corresponding to the order statistics that determine the quantile values, i.e., \(I=\{i_{j}\mid j=1,\ldots,M\}\cup\{i_{j}+1\mid j\in J_{S}\}\)). Therefore, we need to simulate the order statistics \((X_{(i_{j})})_{j\in J_{S}}\); recall that the \((X_{(i_{j})})_{j\in I\setminus J_{S}}\) are deterministic conditional on the \((X_{(i_{j})})_{j\in J_{S}}\). Our objective is to simulate from the conditional distribution \((X_{(i_{j})})_{j\in J_{S}}\mid Q(X,p_{j})=q_{j}\) for \(j=1,\ldots,M\). To achieve this, we compute the density of this distribution up to a constant, enabling us to launch a Markov chain that targets this distribution using a Metropolis-Hastings kernel. We begin by considering the joint density of the order statistics vector \(I\). 
The joint probability density function of \(M\) statistics of order \((i_{1},\ldots,i_{M})\) from a vector \(X\) of size \(N\) following a distribution \(\mathcal{F}_{\theta}\) with density \(f_{\theta}\) and cumulative distribution function (cdf) \(F_{\theta}\), is known and can be expressed as shown in Equation 2, which is derived from David and Nagaraja (2004): \[f(x_{1},\ldots,x_{M})=N!\prod_{j=1}^{M}f_{\theta}(x_{j})\prod_{j=0}^{M}\frac{ (F_{\theta}(x_{j+1})-F_{\theta}(x_{j}))^{i_{j+1}-i_{j}-1}}{(i_{j+1}-i_{j}-1)!} \tag{2}\] where \(x_{0}=-\infty\), \(x_{M+1}=+\infty\), \(i_{0}=0\), and \(i_{M+1}=N+1\). To simulate from the joint distribution of \((X_{(i_{j})})_{j\in J_{S}}\) and \((Q(X,p_{j}))_{j=1,\ldots,M}\), we perform a change of variables denoted as \(\phi\). This transformation is injective and continuously differentiable, ensuring that the determinant of its Jacobian is nonzero. The transformation is described below by the system on the right: \[\begin{cases}q_{j}=X_{(i_{j})}&\forall j\in J_{D}\\ q_{j}=(1-g_{j})X_{(i_{j})}+g_{j}X_{(i_{j}+1)}&\forall j\in J_{S}\end{cases} \iff\begin{cases}\quad X_{(i_{j})}=q_{j}&\forall j\in J_{D}\\ X_{(i_{j}+1)}=\frac{q_{j}-X_{(i_{j})}(1-g_{j})}{g_{j}}&\forall j\in J_{S}\end{cases}\] Finally, as the observed values \(q_{j}\) are fixed, we know that the densities of the joint and conditional distributions are proportional. Hence, we have: \[f_{(X_{(i_{j})})_{j\in J_{S}}|(Q(X,p_{1}),\ldots,Q(X,p_{J}))=(q_{1}, \ldots,q_{j})}(x_{1},\ldots,x_{|J_{S}|}) \tag{3}\] \[\propto f_{(X_{(i_{j})})_{k\in J_{S}},(Q(X,p_{1}),\ldots,Q(X,p_{j}) )=(q_{1},\ldots,q_{j})}(x_{1},\ldots,x_{|J_{S}|},q_{1},\ldots,q_{j})\] \[\propto f_{(i)_{i\in I}}(\phi^{-1}(x_{1},\ldots,x_{|J_{S}|},q_{1}, \ldots,q_{j}))\] We have now obtained the conditional density, up to a constant, of the order statistics of interest given the observed quantiles. This enables us to simulate data based on our specified conditions. Therefore, we can construct a Markov chain that targets the desired distribution by employing a Metropolis-Hastings acceptance kernel. In our case, we utilize a random walk kernel with a variance that can be empirically adjusted. While it is possible to resample all the order statistics simultaneously using a kernel of size \(\mathbb{R}^{|J_{S}|}\), for the purpose of achieving higher acceptance rates, we resample them one by one or in parallel. To maximize acceptance, we recommend normalizing the variance of the kernel of \(X_{(i_{j})}\) by a constant \(\tilde{c_{j}}=\mathrm{Var}(X_{(i_{j})})/(1-g_{j})\), assuming that \(X\sim\mathcal{F}_{\theta}\). Here, we approximate the variance of the order statistics using the formula presented in Baglivo (2005, p. 120): \(\mathrm{Var}(X_{(i)})\approx\frac{p_{i}(1-p_{i})}{(N+2)f_{\theta}(Q_{\theta}( p_{i}))^{2}}\), where \(N\) is the sample size, \(p_{i}=\frac{i}{N-1}\), and \(f_{\theta}\) and \(Q_{\theta}\) are the density and quantile functions of our distribution. This approximation allows us to handle some cases of order statistics with infinite variance as for the Cauchy distribution. Implementation results for this case are shown in Section 5.3. ## 3 Median and IQR case We now present a computational method to simulate a vector \(X\) which follows a distribution \(\mathcal{F}_{\theta}\) and verifies the conditions \(\mathrm{med}(X)=m\) and \(\mathrm{IQR}(X)=i\) where \(m\in\mathbb{R}\) and \(i>0\). Here, \(\mathrm{med}(X)\) is the median of \(X\), and \(\mathrm{IQR}(X)\) is the interquartile range of \(X\). 
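Both the median and the quartiles in this section are again computed with the empirical quantile estimator of Equation (1); as a concrete reference, a small numpy sketch (ours, not the authors' code) of \(Q(X,p)\) and of the index/weight pair \((i,g)\) it relies on:

```python
# Hyndman & Fan definition 7 (the default of numpy.quantile and R's quantile()),
# written exactly as in Equation (1) with 1-based order statistics.
import numpy as np

def empirical_quantile(x, p):
    xs = np.sort(np.asarray(x, dtype=float))
    h = (xs.size - 1) * p + 1
    i = int(np.floor(h))          # index of the lower order statistic
    g = h - i                     # interpolation weight; g == 0 => "deterministic"
    return xs[i - 1] if g == 0 else (1 - g) * xs[i - 1] + g * xs[i]

x = np.random.default_rng(1).normal(size=13)
print(np.isclose(empirical_quantile(x, 0.25), np.quantile(x, 0.25)))  # True
```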
The interquartile range is the difference between the \(0.75\) quantile, which is the third quartile denoted \(Q_{3}\), and the \(0.25\) quantile, which is the first quartile denoted \(Q_{1}\), i.e., \(\mathrm{IQR}(X)=Q(X,0.75)-Q(X,0.25)=Q_{3}-Q_{1}\). This scale estimator has a long history in robust statistics, dating back to the early development of robust estimation techniques. Its resistance to outliers, as quantified by its breakdown point of \(25\%\), has made it a fundamental tool in robust statistical analysis. Today, the IQR continues to hold a prominent position in robust statistics due to its properties and its ability to summarize the variability of a dataset in a resistant manner. The IQR, being equal to twice the MAD in the case of a symmetric distribution, not only measures the dispersion of the data but also offers a way to capture the asymmetry of the distribution. This section is thus linked to the previous section with the case \(p_{1},p_{2},p_{3}=0.25,0.5,0.75\), except we observe only \(q_{2}=m\) (the median) and the difference \(q_{3}-q_{1}=i\) with \(i>0\). In this scenario, we can isolate four different cases and we focus here on a specific scenario where \(N=4n+1\) (see Appendix C for other cases), which simplifies the problem as there is not linear interpolation required to computed the empirical quartiles. In this case, we have the first quartile \(Q_{1}=X_{(n+1)}\) the median \(Q_{2}=m=X_{(2n+1)}\), and the third quartile \(Q_{3}=X_{(3n+1)}\).The vector \(X\) respects the apportionment described in Figure 3. As in the previous sections, we first present a method to initialize the vector \(X^{0}\) and then a method to resample it keeping its median and its IQR unchanged. ### Initialization of \(X^{0}\) with observed median and IQR We must initialize our MCMC with values \((X^{0},\theta^{0})\) that verify the constraints \(\operatorname{median}(X^{0})=m,\operatorname{IQR}(X^{0})=i\), and \(\prod_{j}f_{\theta^{0}}(X^{0}_{j})>0\). If the family \((\mathcal{F}_{\theta})_{\theta\in\Theta}\) has support the whole real line, we simulate a vector \(Z\) of size \(N\) from an arbitrary distribution (such as \(\mathcal{N}(0,1)\) or \(\mathcal{F}_{\theta^{0}}\)) and then apply a linear transformation to it so that it verifies the constraints: \(\operatorname{median}(X^{0})=m\) and \(\operatorname{IQR}(X^{0})=i\). So we have \[X^{0}=(Z-\operatorname{median}(Z))\frac{i}{\operatorname{IQR}(Z)}+m\] In the case where the distribution is defined on a strict subset of \(\mathbb{R}\), this technique is inappropriate, as it may lead to initial values which lie outside the support of the distribution. In this situation, we use a deterministic initialization instead. The initialization vector is then given by: \[X_{1}=X_{2}=\ldots=X_{n} = m-\frac{3i}{4}\] \[X_{n+1} = q_{1}\] \[X_{n+2}=\ldots=X_{2n} = m-\frac{i}{4}\] \[X_{2n+1} = m\] \[X_{2n+2}=\ldots=X_{3n} = m+\frac{i}{4}\] \[X_{3n+1} = q_{3}\] \[X_{3n+2}=\ldots=X_{4n+1} = m+\frac{3i}{4}\] assuming that all these values have positive density under \(f_{\theta^{0}}\). This initial vector verifies the constraints. In practice, with this initialization, we find that a burn-in time of about \(5N\) iterations is sufficient to reach the stationary distribution. ### Full resampling with median and IQR observed To perform full resampling while maintaining the observed median and IQR, we simulate the coordinates of \(X\) that determine the quartiles \(q_{1},q_{2},q_{3}\). 
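Before detailing that resampling step, the default initialization of Section 3.1 can be written out directly. The standard-normal draw for \(Z\) and NumPy's default quantile convention for \(\operatorname{IQR}(Z)\) are assumptions of this sketch.

```python
import numpy as np

def init_median_iqr(N, m, i, rng):
    """Initial vector X^0 with median(X^0) = m and IQR(X^0) = i, for families
    supported on the whole real line: rescale an arbitrary draw Z."""
    Z = rng.standard_normal(N)                     # any distribution on R works here
    q1, q3 = np.quantile(Z, [0.25, 0.75])          # IQR(Z) = q3 - q1
    return (Z - np.median(Z)) * i / (q3 - q1) + m
```

For distributions supported on a strict subset of \(\mathbb{R}\), the deterministic initialization written out above is used instead.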
In the case \(N=4n+1\), the value \(X_{(2n+1)}=q_{2}\) is deterministic. We simulate \(X_{(n+1)}\), and then deterministically update \(X_{(3n+1)}=X_{(n+1)}+i\) Thus, we aim to simulate \(X_{(n+1)}\) according to the conditional distribution \(X_{(n+1)}\mid\operatorname{med}(X)=m,\operatorname{IQR}(X)=i\). Using the general framework for quantiles described in section 2, we start with the joint distribution of the order statistics \(X_{(n+1)},X_{(2n+1)},X_{(3n+1)}\) given by Equation (2), and apply a change of variables to obtain the joint distribution of the first quartile, the median and the IQR. As in the previous section, we then use this density in a Metropolis-within-Gibbs step. The cases where \(N\neq 4n+1\) involve more order statistics since the empirical quartiles comprise a linear interpolation, but the same strategy applies. We give details in Appendix C ## 4 Median and MAD case We now focus on the most intriguing scenario explored in this paper, where we are provided with the median (a robust statistic for location) and the MAD (a robust statistic for scale). Recall that the median is the 0.5 quantile of our sample \(X\) and is defined as follows: \[\operatorname{median}(X)=\left\{\begin{array}{ll}X_{(n)}\text{; if }N=2n+1\\ \frac{X_{(n)}+X_{(n+1)}}{2}\text{, if }N=2n\end{array}\right.\text{ where }X_{(i)}\text{ denotes the }i\text{th order statistic of }X\text{.}\] The Median Absolute Deviation (MAD) is a measure of statistical dispersion that is commonly used as a robust alternative to the standard deviation. This statistic was first promoted by Hampel (1974), who attributed it to Gauss (1816). For a sample \(X=(X_{1},\ldots,X_{N})\) of i.i.d. random variables, the MAD is defined as: \[\operatorname{MAD}(X)=\operatorname{median}(|X-\operatorname{median}(X)|).\] Let \(\sigma\) be the true standard deviation of the data generating distribution. For certain families of distribution, a family-specific constant \(c\) is known such that \(\operatorname{MAD}(X_{1},\ldots,X_{N})\xrightarrow[n\to\infty]{P}c\sigma\). Some papers thus refer instead to the normalized MAD MAD\((X)/c\), which provides a consistent estimator of the standard deviation. Despite their poor statistical efficiencies (respectively 63.6% and 36.7%), a key similarity between the median and the MAD that makes them popular is their breakdown point. Both the median and the MAD have a breakdown point of 50%, meaning that half of the observations in the dataset can be contaminated without significantly impacting their estimates. This high breakdown point ensures the robustness of these estimators in the presence of outliers and underscores their usefulness in robust statistical analysis. Since the median and MAD are based on order statistics, the cases where \(X\) has an even or odd size exhibit distinct characteristics. Here, we focus on the simpler case where \(N\) is odd i.e \(N=2n+1\) with \(n\in\mathbb{N}^{*}\); we relegate the even case to Appendix A. We denote \(\operatorname{median}(X)=m\) and \(\operatorname{MAD}(X)=s\) respectively, where \(m\in\mathbb{R}\) and \(s>0\). In this scenario, since \(\mathcal{F}_{\theta}\) is continuous, the median is necessarily one of the coordinates of the vector \(X\), denoted as \(X_{i}=m\) for some \(i\in\{1,\ldots,N\}\). Additionally, there exists another coordinate, denoted as \(X_{\operatorname{MAD}}\), that determines the MAD, such that \(|X_{j}-m|=s\) for some \(j\in\{1,\ldots,N\}\) (if \(N>1\)). 
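For concreteness, the two observed statistics and the coordinate \(X_{\operatorname{MAD}}\) introduced here can be computed with a small helper (odd sample size \(N=2n+1\), so both statistics are attained exactly by sample points):

```python
import numpy as np

def median_mad_summaries(X):
    """Empirical median m, MAD s, and the observation X_MAD with |X_MAD - m| = s."""
    m = np.median(X)
    s = np.median(np.abs(X - m))                   # MAD(X) = median(|X - median(X)|)
    j = int(np.argmin(np.abs(np.abs(X - m) - s)))  # coordinate that realizes the MAD
    return m, s, X[j]
```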
Note that \(X_{\operatorname{MAD}}\) can only take two values: \(X_{\operatorname{MAD}}\in\{m-s,m+s\}\). We introduce the indicator variable \(\delta=\mathbb{1}_{X_{\operatorname{MAD}}=m+s}\) to capture the location of this second coordinate. We can partition the data into four intervals:

* \(Z_{1}=(-\infty,m-s)\)
* \(Z_{2}=(m-s,m)\)
* \(Z_{3}=(m,m+s)\)
* \(Z_{4}=(m+s,+\infty)\)

These intervals are represented in Figure 3.

Figure 3: Apportionment of the vector \(X\) when \(N=2n+1\) with \(n\in\mathbb{N}^{*}\), \(\operatorname{median}(X)=m\) and \(\operatorname{MAD}(X)=s\).

The constraint on the median implies that \(n\) observations lie on each side of \(m\), that is, \(|Z_{1}\cup Z_{2}|+(1-\delta)=|Z_{3}\cup Z_{4}|+\delta=n\). Moreover, since the MAD requires half of the data to fall within the interval \((m-s,m+s)\) and the remaining half outside of it, we have \(|Z_{2}\cup Z_{3}|=|Z_{1}\cup Z_{4}|=n-1\) (the \(n\)th points being respectively \(m\) itself and \(X_{\operatorname{MAD}}\)). Let \(k=\sum_{i}\mathbb{1}_{X_{i}\geq m+s}\in\{1,\ldots,n\}\). Given the values of \(\delta\) and \(k\), the apportionment of the observations between the four zones is fixed, as shown in Figure 3: \(|Z_{1}|=n-k+\delta\), \(|Z_{2}|=k-1\), \(|Z_{3}|=n-k\) and \(|Z_{4}|=k-\delta\). In the remainder of this section, we describe our Gibbs sampler when the median and MAD are observed. We first give an initialization which follows the constraints in Subsection 4.1, and then describe in Subsection 4.2 how to update the vector \(X^{t}\) at step \(t\) conditionally on the value of \(\theta^{t}\). Let \(X_{-i}\), respectively \(X_{-ij}\), be the vector of all coordinates of \(X\) except coordinate \(i\), respectively except coordinates \(i\) and \(j\). A standard Gibbs strategy would be to cycle through the indexes, updating in turn each \(X_{i}\) conditionally on \(\theta\), \(X_{-i}\) and the constraints \(m\) and \(s\). This strategy does not adequately explore the full posterior. Indeed, the distribution of \(X_{i}|\theta,X_{-i},m,s\) takes values only in the zone that \(X_{i}\) belongs to. With such a strategy, the values of \(k\) and \(\delta\) would never change. Instead, we must update two coordinates at a time: we draw randomly two indexes \(i\) and \(j\) and sample from the joint conditional of \(X_{i},X_{j}|\theta,X_{-ij},m,s\). These joint conditionals are tractable, and we show in Appendix B that this produces an ergodic Markov chain, so that the MCMC explores the full posterior.

### Initialization with observed median and MAD

As with the case where we observe the median and the IQR, presented in Section 3.1, we introduce two techniques for initializing the vector \(X^{0}\). The first, and default, technique consists in simulating a vector \(Z\) of size \(N\) from an arbitrary distribution (such as \(\mathcal{N}(0,1)\) or \(\mathcal{F}_{\theta^{0}}\)) and then applying a linear transformation to it so that it verifies the constraints \(\operatorname{median}(X^{0})=m\) and \(\operatorname{MAD}(X^{0})=s\). So we have \[X^{0}=(Z-\operatorname{median}(Z))\frac{s}{\operatorname{MAD}(Z)}+m\] In our numerical experiments following this method, we observe that the burn-in period is extremely short. As in the IQR scenario, if the support of the distribution is a strict subset of \(\mathbb{R}\), we resort to the deterministic initialization, which corresponds to an apportionment with \(k=\lceil\frac{n}{2}\rceil=\lceil\frac{N-1}{4}\rceil\) and \(\delta=1\).
Therefore, we define: \[X_{1}=X_{2}=\ldots=X_{n-k+1} = m-\frac{3s}{2}\] \[X_{n-k+2}=\ldots=X_{n} = m-\frac{s}{2}\] \[X_{n+1} = m\] \[X_{n+2}=\ldots=X_{2n-k+1} = m+\frac{s}{2}\] \[X_{2n-k+2} = m+s\] \[X_{2n-k+3}=\ldots=X_{2n+1} = m+\frac{3s}{2}\] assuming that these values all lie within the support of the distribution. This initial vector verifies the constraints. In practice, with this initialization, we find that a burn-in time of about \(5N\) iterations is sufficient to reach stationarity. ### Partial resampling with observed median and MAD We now propose a Gibbs sampling step that allows us to sample from the conditional distribution \(X_{i},X_{j}|\theta,X_{-ij},m,s\) Here again, we focus on the case where \(X\) has an odd size \(N\), which is relatively simpler (see Section A for the even case). Note that the apportionment of observations among the four zones, which we defined previously, is not fixed and can vary between iterations. Specifically, the value of \(k\) (which controls the apportionment of the observations between \(Z_{1}\cup Z_{3}\) and \(Z_{2}\cup Z_{4}\)) can range from \(1\) to \(n\) (where \(n\in\mathbb{N}^{*}\) such that \(N=2n+1\)) and the value of \(\delta\) can take on values \(\{0,1\}\). The Gibbs sampling step in this algorithm involves selecting two indexes \(i\) and \(j\) from the vector \(X\) and resampling their values while maintaining the conditions \(\operatorname{median}(X)=m\) and \(\operatorname{MAD}(X)=s\). The algorithm to generate the new values \(\tilde{X}_{i},\tilde{X}_{j}\) is as follows: 1. If \((X_{i},X_{j})=(m,X_{\operatorname{MAD}})\), we must keep their values unchanged: \((\tilde{X}_{i},\tilde{X}_{j})=(X_{i},X_{j})\). 2. If \(X_{i}=m\), we perform the following steps: * \(\tilde{X}_{j}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) truncated to the zone to which \(X_{j}\) belongs. * \(\tilde{X}_{i}\) remains unchanged: \(\tilde{X}_{i}=X_{i}=m\). 3. If \(X_{j}=X_{\mathrm{MAD}}\), we further consider two cases: 1. If \((X_{i}-m)(X_{j}-m)>0\), indicating that both \(X_{i}\) and \(X_{j}\) are on the same side of the median, we perform the following steps: * \(\tilde{X}_{i}\) is resampled from the distribution \(\mathcal{F}_{\theta}\) truncated to the zone to which \(X_{i}\) belongs. * \(\tilde{X}_{j}\) remains unchanged: \(\tilde{X}_{j}=X_{j}\). 2. If \((X_{i}-m)(X_{j}-m)\leq 0\), indicating that \(X_{i}\) and \(X_{j}\) are on different sides of the median, we perform the following steps: * \(\tilde{X}_{i}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) in the union of the zones which \(X_{i}\) belongs and its "symmetric":\(\left\{\begin{array}{ll}Z_{1}\cup Z_{4}&\mbox{if }X_{i}\in Z_{1}\cup Z_{4}\\ Z_{2}\cup Z_{3}&\mbox{if }X_{i}\in Z_{2}\cup Z_{3}\end{array}\right.\) * \(\tilde{X}_{j}\) is determined based on the value of \(\tilde{X}_{i}\) to maintain the same number of observations on either side of the median \(m\): * If \(\tilde{X}_{i}>m\), then \(\tilde{X}_{j}=m-s\). * Otherwise, \(\tilde{X}_{j}=m+s\). Note that the value of \(\delta\) may change. 4. If none of the above conditions are met, indicating that \(X_{i}\) and \(X_{j}\) neither of \(X_{i}\) and \(X_{j}\) is \(m\) or \(X_{\mathrm{MAD}}\), we further consider two sub-cases. Assume without loss of generality that \(X_{i}<X_{j}\). 1. If either (\(X_{i}\in Z_{1}\) and \(X_{j}\in Z_{3}\)) or (\(X_{i}\in Z_{2}\) and \(X_{j}\in Z_{4}\)), then they can switch together in the other couple of zones (and then the apportionment indicator \(k\) changes). 
We perform the following steps: * \(\tilde{X}_{i}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) on all the support of it. * \(\tilde{X}_{j}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) truncated to the "complementary" zone: \(\left\{\begin{array}{ll}Z_{3}&\mbox{if }\tilde{X}_{i}\in Z_{1}\\ Z_{4}&\mbox{if }\tilde{X}_{i}\in Z_{2}\\ Z_{1}&\mbox{if }\tilde{X}_{i}\in Z_{3}\\ Z_{2}&\mbox{if }\tilde{X}_{i}\in Z_{4}\end{array}\right.\) 2. Otherwise, when the zones of \(X_{i}\) and \(X_{j}\) are not "complementary", they are each sampled from the distribution \(\mathcal{F}_{\theta}\) truncated to their respective zones * \(\tilde{X}_{i}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) truncated to the zone to which \(X_{i}\) belongs. * \(\tilde{X}_{j}\) is sampled from the distribution \(\mathcal{F}_{\theta}\) truncated to the zone to which \(X_{j}\) belongs. Finally, we set \((X_{i},X_{j})=(\tilde{X}_{i},\tilde{X}_{j})\) to update the values of the selected coordinates. It can be checked that in each case, the median and MAD conditions are preserved throughout the resampling process. This method allows us to have an ergodic Markov chain on the space of latent variables \(X\) that satisfy these conditions (proof in Appendix section B). The case where \(N\) is even follows the same ideas but requires slightly different updates, which are described in Appendix A. ## 5 Numerical results and discussion In this section, we discuss the numerical results obtained from the different methods we presented above. Here, we refer to the Gibbs sampler introduced in this paper as Robust Gibbs. ### Gaussian case We initially focus on the Gaussian distribution with conjugate priors distributions for the mean and variance parameters: the Normal-Inverse Gamma distribution (abbreviated into NIG below). This provides a convenient and analytically tractable framework for straightforwardly sampling from the posterior of the parameters in the second step of the Gibbs Sampler. In the Gaussian case, the asymptotic efficiencies of the empirical median, empirical MAD, and empirical IQR estimators have been well-studied in the frequentist framework (Rousseeuw and Croux, 1993). The empirical median has an asymptotic efficiency of approximately \(\mathrm{eff}_{\mathrm{med}}=\frac{2}{\pi}\approx 0.637\); the empirical MAD has an asymptotic efficiency \(\mathrm{eff}_{\mathrm{MAD}}\approx 0.3675\)(Akinshin, 2022). These efficiency values measure the relative accuracy of these estimators compared to the conventional estimators (empirical mean and standard deviation in this instance) as the sample size tends to infinity. Under the prior \(\mu,\sigma^{2}\sim\mathrm{NIG}(\mu_{0},\tau,\alpha,\beta)\) where \(\mu\in\mathbb{R},\tau,\alpha,\beta>0\) are the hyper-parameters, the posterior distribution given \(X\) is known in closed-form: \(\mu,\sigma^{2}\mid X\sim\mathrm{NIG}(M,C,A,B)\) where \[\begin{split} M=\frac{\nu\mu_{0}+N\bar{X}}{\nu+N}\quad\text{and} \quad C=\nu+N\\ A=\alpha+\frac{N}{2}\quad\text{and}\quad B=\beta+\frac{1}{2}(NS^{ 2}+\frac{N\nu}{\nu+N}(\bar{X}-\mu_{0})^{2}).\end{split} \tag{4}\] with \(\bar{X}\) the empirical mean and \(S^{2}\) the empirical variance. Our Robust Gibbs algorithm allows us to obtain a sample from the posterior distribution \(\pi(\mu,\sigma^{2}\mid\mathrm{median}(X),\mathrm{MAD}(X))\), which is displayed in Figure 4. Our numerical results allow us to observe a high-quality approximation to this posterior when \(N\) is large. 
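For reference, the \(\theta\)-step of the Gibbs sampler in this conjugate Gaussian setting, namely one draw from the NIG posterior of Equation 4 given the current augmented vector \(X\), can be sketched in NumPy as follows (a minimal illustration; the Inverse-Gamma draw uses the reciprocal of a Gamma variate):

```python
import numpy as np

def sample_nig_posterior(X, mu0, nu, alpha, beta, rng):
    """One draw of (mu, sigma^2) from the NIG(M, C, A, B) posterior of Equation 4."""
    N = len(X)
    xbar, s2 = np.mean(X), np.var(X)                   # empirical mean and variance
    M = (nu * mu0 + N * xbar) / (nu + N)
    C = nu + N
    A = alpha + N / 2
    B = beta + 0.5 * (N * s2 + (N * nu / (nu + N)) * (xbar - mu0) ** 2)
    sigma2 = 1.0 / rng.gamma(shape=A, scale=1.0 / B)   # sigma^2 ~ Inverse-Gamma(A, B)
    mu = rng.normal(M, np.sqrt(sigma2 / C))            # mu | sigma^2 ~ N(M, sigma^2 / C)
    return mu, sigma2
```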
Returning to Equation 4, we can replace \(\bar{X}\) and \(S^{2}\) by their estimators based on the median and MAD: \(\bar{X}\approx m\), \(S\approx c\cdot s\) with \(c=1/\Phi^{-1}(.75)\approx 1.4826\). We also rescale \(N\) by the asymptotic efficiencies of these estimators, setting \(N_{\mathrm{med}}=\mathrm{eff}_{\mathrm{med}}\cdot N\) and \(N_{\mathrm{MAD}}=\mathrm{eff}_{\mathrm{MAD}}\cdot N\). Figure 4 shows that our posterior of interest \(\pi(\mu,\sigma^{2}\mid\mathrm{median}(X),\mathrm{MAD}(X))\) is well approximated by the distribution \(\mathrm{NIG}(\tilde{M},\tilde{C},\tilde{A},\tilde{B})\) where:

\[\begin{split}\tilde{M}=\frac{\nu\mu_{0}+N_{\mathrm{med}}\cdot m}{\nu+N_{\mathrm{med}}},\quad\tilde{C}=\nu+N_{\mathrm{med}},\\ \tilde{A}=\alpha+\frac{N_{\mathrm{MAD}}}{2},\quad\tilde{B}=\beta+\frac{1}{2}\left(N_{\mathrm{MAD}}\cdot(c\cdot s)^{2}+\frac{N_{\mathrm{MAD}}\nu}{\nu+N_{\mathrm{MAD}}}(m-\mu_{0})^{2}\right).\end{split} \tag{5}\]

To our knowledge, this high-quality approximation, which can be easily sampled from, was not previously known. Our numerical results apply only to the Gaussian case with observed median and MAD; we leave to future work the question of whether similar results hold for other distributions and robust statistics with known asymptotic efficiency.

Figure 4: Posterior distribution of \(\mu\) and \(\sigma^{2}\) for a sample of size \(N=1000\), with \(m=-2\), \(s=3\) such that \(\mathrm{med}(X)=m,\mathrm{MAD}(X)=s\), obtained with Robust Gibbs (filled curve) and the approximation \(\bar{\pi}\) (dashed blue line). The estimands \(m\) and \((c\cdot s)^{2}\) are shown as black dashed lines.

### Cauchy distribution

The sample median and MAD are routinely used as estimators for the location and scale parameters of the Cauchy distribution, both because the Cauchy mean and variance are undefined, and because the location parameter \(x_{0}\in\mathbb{R}\) is equal to the theoretical median while the scale parameter \(\gamma>0\) is equal to the theoretical MAD (and to half of the theoretical IQR). The Cauchy distribution serves as a valuable tool for exploring robust statistical methods and understanding their performance under challenging conditions, especially in the presence of heavy-tailed data or outliers. By leveraging the robust estimators of location and scale, we can overcome the limitations posed by traditional measures such as the mean and variance, and obtain more reliable estimates of the parameters of interest. We ran a Metropolis-Hastings within Gibbs random walk with Cauchy and Gamma priors on the two parameters, and performed \(T\) simulations from the posterior distribution of the Cauchy parameters \((x_{0},\gamma)\) while observing only the median and the MAD. Previous studies have resorted to Approximate Bayesian Computation (ABC) to approximate the posterior given the median and MAD (Green et al, 2015; Marin et al, 2014; Turner and Van Zandt, 2012). To compare our results, we also carried out simulations using standard ABC methods, using the same observed median and MAD as summary statistics, along with the same non-informative priors. We ran both algorithms for an equal amount of computing time. In ABC, we obtained more than 10 times more simulations by parallelizing the computations, and we fixed the acceptance threshold by retaining the \(T\) best simulations. We thus obtain a fair comparison, with two algorithms run for the same time leading to identical sample sizes. The results of these simulations are presented in Figure 5.
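For completeness, the \(\theta\)-update used inside Robust Gibbs for this Cauchy experiment can be sketched as a random-walk Metropolis step on \((x_{0},\gamma)\) given the current augmented vector \(X\). The priors below (a wide Cauchy prior on \(x_{0}\) and a Gamma prior on \(\gamma\)) are illustrative placeholders rather than the exact hyperparameters used in the experiments.

```python
import numpy as np
from scipy import stats

def log_post_cauchy(x0, gamma, X):
    """Unnormalized log posterior of (x0, gamma) given a complete sample X;
    the Cauchy(0, 10) and Gamma(2, scale=2) priors are illustrative assumptions."""
    if gamma <= 0:
        return -np.inf
    loglik = stats.cauchy.logpdf(X, loc=x0, scale=gamma).sum()
    logprior = stats.cauchy.logpdf(x0, loc=0, scale=10) + stats.gamma.logpdf(gamma, a=2, scale=2)
    return loglik + logprior

def update_theta(x0, gamma, X, rng, step=0.1):
    """One random-walk Metropolis update of the Cauchy parameters."""
    prop_x0 = x0 + step * rng.standard_normal()
    prop_gamma = gamma + step * rng.standard_normal()
    log_accept = log_post_cauchy(prop_x0, prop_gamma, X) - log_post_cauchy(x0, gamma, X)
    if np.log(rng.uniform()) < log_accept:
        return prop_x0, prop_gamma
    return x0, gamma
```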
Our Robust Gibbs method yields a posterior that is much more peaked around the theoretical parameter values compared to the ABC approach. This is to be expected, since Robust Gibbs samples from the exact posterior, whereas ABC samples from an approximation, which typically inflates the variance.

Figure 5: Posterior distributions with true Cauchy parameters \(x_{0}=-2\) and \(\gamma=3\), and sample size \(N=1000\), obtained using the Robust Gibbs method (in orange) and the ABC method (in blue), along with the theoretical values (black dashed lines).

### Weibull Distribution

In this section, we focus on the three-parameter Weibull distribution, also referred to as the translated Weibull distribution. In addition to the classical scale parameter \(\gamma\) and shape parameter \(\beta\), it includes a location parameter \(x_{0}\). The density of this family of distributions is given by: \[f(x)=\frac{\beta}{\gamma}\left(\frac{x-x_{0}}{\gamma}\right)^{\beta-1}e^{-(\frac{x-x_{0}}{\gamma})^{\beta}}\mathbb{1}_{x\geq x_{0}}.\] Bandourian et al (2002) recommend this distribution for modeling the life expectancy or income of individuals. When we observe a two-dimensional summary statistic \(T(X)\), such as the median together with the MAD or the IQR, the information contained in \(T(X)\) does not allow us to identify all three parameters. The median provides information about the location parameter, while the MAD or IQR gives information about the scale parameter; however, there is no direct information about the shape parameter. Therefore, we have an insufficient number of statistics to estimate all three parameters accurately, and as a result the three parameter chains can drift within a submanifold of \(\mathbb{R}^{3}\). However, in cases where the location parameter is fixed (e.g., \(x_{0}=0\) for the classical Weibull distribution), we can uniquely identify the scale and shape parameters. When we consider quantiles as observations, we investigate the impact of the number of quantiles, denoted as \(M\), on the posterior distribution. Specifically, we choose the quantile values \((p_{j})_{j=1,\ldots,M}\) such that \(p_{j}=\frac{j}{M+1}\) for \(j=1,\ldots,M\). For example, when \(M=3\), we have \((p_{1},p_{2},p_{3})=(.25,.5,.75)\), and for \(M=9\), we obtain the nine deciles. As observed in Figure 6, using only two quantiles is insufficient to capture all three parameters accurately. However, with a minimum of three quantiles, we can successfully identify all the parameters of the Weibull distribution. Additionally, increasing the number of quantiles leads to a posterior distribution that aligns more closely with the theoretical parameters, indicating improved estimation precision.

## 6 Conclusion

This paper has presented a novel method for simulating from the posterior distribution when only robust statistics are observed. Our approach, based on Gibbs sampling and the simulation of augmented data as latent variables, offers a versatile tool for a wide range of applied problems. The Python code implementing this method is available as a Python package ([https://github.com/AntoineLuciano/Insufficient-Gibbs-Sampling](https://github.com/AntoineLuciano/Insufficient-Gibbs-Sampling)), enabling its application in various domains. Among the three examples of robust statistics addressed in this paper, two exhibited similarities, in that the observed quantiles or the median and interquartile range yielded comparable results to existing methods.
The unique case of median absolute deviation (MAD) introduced a novel challenge, for which we proposed a partial data augmentation technique ensuring ergodicity of the Markov chain. While our focus in this study was on continuous univariate distributions, future research avenues could explore the extension of our method to discrete distributions or multivariate data. These directions promise to further enhance the applicability and generality of our approach. Figure 6: Posterior of the three parameters of the Weibull distribution for a sample of size \(N=1000\) given \(T(X)=((q_{j})_{j=1,\ldots,M},(p_{j})_{j=1,\ldots,M})\) where the \((q_{j})\)s are the theoretical quantiles of the 3-parameter Weibull distribution with \(x_{0}=10\), \(\gamma=2\) and \(\beta=3\) (black dashed lines) for \(M=2\) (blue dotted line), \(M=3\) (orange dashed line), \(M=4\) (green dashed-dotted line) and \(M=9\) (red full line). ## Acknowledgements We are grateful to Edward I. George for a helpful discussion and in particular for suggesting the title to this paper. Antoine Luciano is supported by a PR[AI]RIE PhD grant. Christian P. Robert is funded by the European Union under the GA 101071601, through the 2023-2029 ERC Synergy grant OCEAN and by a PR[AI]RIE chair from the Agence Nationale de la Recherche (ANR-19-P3IA-0001).
2305.02968
Masked Trajectory Models for Prediction, Representation, and Control
We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities, by simply choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network -- i.e. same weights -- can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms. Finally, in offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components. Code is available at https://github.com/facebookresearch/mtm
Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, Aravind Rajeswaran
2023-05-04T16:12:19Z
http://arxiv.org/abs/2305.02968v1
# Masked Trajectory Models for Prediction, Representation, and Control ###### Abstract We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities, by simply choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network - i.e. same weights - can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms. Finally, in offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components. Machine Learning, ICML ## 1 Introduction Sequential decision making is a field with a long and illustrious history, spanning various disciplines such as reinforcement learning (Sutton and Barto, 1998), control theory (Bertsekas, 1995; Astrom and Murray, 2008), and operations research (Powell, 2007). Throughout this history, several paradigms have emerged for training agents that can achieve long-term success in unknown environments. However, many of these paradigms necessitate the learning and integration of multiple component pieces to obtain decision-making policies. For example, model-based RL methods require the learning of world models and actor-critic methods require the learning of critics. This leads to complex and unstable multi-loop training procedures and often requires various ad-hoc stabilization techniques. In parallel, the emergence of self-supervised learning (Devlin et al., 2018; Jing and Tian, 2019) has led to the development of simple training objectives such as masked prediction and contrastive prediction, which can train generic backbone models for various tasks in computer vision and natural language processing (NLP). Motivated by this advancement, we explore if self-supervised learning can lead to the cre Figure 1: **Masked Trajectory Modeling (MTM) Framework.** (Left) The training process involves reconstructing trajectory segments from a randomly masked view of the same. (Right) After training, MTM can enable several downstream use-cases by simply changing the masking pattern at inference time. See Section 3 for discussion on training and inference masking patterns. ation of generic and versatile models for sequential decision making with capabilities including future prediction, imitation learning, and representation learning. Towards this end, we propose the use of Masked Trajectory Models (MTM) as a generic abstraction and framework for prediction, representation, and control. Our approach draws inspiration from two recent trends in Artificial Intelligence. The first is the success of masked prediction, also known as masked autoencoding, as a simple yet effective self-supervised learning objective in NLP (Devlin et al., 2018; Liu et al., 2019; Brown et al., 2020) and computer vision (Bao et al., 2021; He et al., 2021). 
This task of masked prediction not only forces the model to learn good representations but also develops its conditional generative modeling capabilities. The second trend that inspires our work is the recent success of transformer sequence models, such as decision transformers, for reinforcement (Chen et al., 2021; Janner et al., 2021) and imitation learning (Reed et al., 2022; Shafiullah et al., 2022). Motivated by these breakthroughs, we investigate if the combination of masked prediction and transformer sequence models can serve as a generic self-supervised learning paradigm for decision-making. Conceptually, MTM is trained to take a trajectory sequence of the form: \(\mathbf{\tau}:=(\mathbf{s}_{k},\mathbf{a}_{k},\mathbf{s}_{k+1},\mathbf{a}_{k+1}, \ldots\mathbf{s}_{t},\mathbf{a}_{t})\) and reconstruct it given a masked view of the same, i.e. \[\hat{\mathbf{\tau}}=\mathbf{h}_{\theta}\left(\texttt{Masked}(\mathbf{\tau})\right)\] (MTM) where \(\mathbf{h}_{\theta}(\cdot)\) is a bi-directional transformer and \(\texttt{Masked}(\mathbf{\tau})\) is a masked view of \(\mathbf{\tau}\) generated by masking or dropping some elements in the sequence. For example, one masked view of the above sequence could be: \((\mathbf{s}_{k},\underline{\mathbf{\tau}},\underline{\mathbf{\tau}},\mathbf{a}_{k+1}, \underline{\mathbf{\tau}},\ldots,\mathbf{s}_{t},\underline{\mathbf{\tau}})\) where \(\underline{\mathbf{\tau}}\) denotes a masked element. In this case, MTM must infill intermediate states and actions in the trajectory as well as predict the next action in the sequence. A visual illustration of our paradigm is shown in Figure 1. Once trained, MTM can take on multiple roles or capabilities at inference time by appropriate choice of masking patterns. For instance, by unmasking actions and masking states in the sequence, MTM can function as a forward dynamics model. Our ContributionsOur main contribution is the proposal of MTM as a versatile modeling paradigm and pre-training method. We empirically investigate the capabilities of MTM on several continuous control tasks including planar locomotion (Fu et al., 2020) and dexterous hand manipulation (Rajeswaran et al., 2018). We highlight key findings and unique capabilities of MTM below. 1. **One Model, Many Capabilities:** The same model trained with MTM (i.e. the same set of weights) can be used zero-shot for multiple purposes including inverse dynamics, forward dynamics, imitation learning, offline RL, and representation learning. 2. **Heteromodality:** MTM is uniquely capable of consuming heteromodal data and performing missing data imputation, since it was trained to reconstruct full trajectories conditioned on randomly masked views. This capability is particularly useful when different trajectories in the dataset contain different modalities, such as a dataset containing both state-only trajectories as well as state-action trajectories (Baker et al., 2022). Following the human heteromodal cortex (Donnelly, 2011), we refer to this capability as heteromodality. 3. **Data Efficiency:** Training with random masks enables different training objectives or combinations, thus allowing more learning signal to be extracted from any given trajectory. As a result, we find MTM to be more data efficient compared to other methods. 4. **Representation Learning:** We find that state representations learned by MTM transfer remarkably well to traditional RL algorithms like TD3 (Fujimoto et al., 2018), allowing them to quickly reach optimal performance. 
This suggests that MTM can serve as a powerful self-supervised pre-training paradigm, even for practitioners who prefer to use conventional RL algorithms. Overall, these results highlight the potential for MTM as a versatile paradigm for RL, and its ability to be used as a tool for improving the performance of traditional RL methods. ## 2 Related Work Autoencoders and Masked Prediction.Autoencoders have found several applications in machine learning. The classical PCA (Jolliffe and Cadima, 2016) can be viewed as a linear autoencoder. Denoising autoencoders (Vincent et al., 2008) learn to reconstruct inputs from noise corrupted versions of the same. Masked autoencoding has found recent success in domains like NLP (Devlin et al., 2018; Brown et al., 2020) and computer vision (He et al., 2021; Bao et al., 2021). Our work explores the use of masked prediction as a self-supervised learning paradigm for RL. Offline Learning for ControlOur work primarily studies the offline setting for decision making, where policies are learned from static datasets. This broadly falls under the paradigm of offline RL (Lange et al., 2012). A large class of offline RL algorithms modify their online counterparts by incorporating regularization to guard against distribution shift that stems from the mismatch between offline training and online evaluation (Kumar et al., 2020; Kidambi et al., 2020; Fujimoto et al., 2018; Yu et al., 2021; Liu et al., 2020). In contrast, our work proposes a generic self-supervised pre-training paradigm for decision making, where the resulting model can be directly repurposed for offline RL. Zheng et al. (2022) introduces a self supervised approach for the heteromodal offline RL settings where only a small subset of the trajectories have action labels. We leverage this setting in the investigation of Heteromodal MTM, which can be trained without any change to the algorithm. Self-Supervised Learning for ControlThe broad idea of self-supervision has been incorporated into RL in two ways. The first is self-supervised **data collection**, such as task-agnostic and reward-free exploration (Pathak et al., 2017; Laskin et al., 2021; Burda et al., 2018). The second is concerned with self-supervised **learning** for control, which is closer to our work. Prior works typically employ self-supervised learning to obtain state representations (Yang and Nachum, 2021; Parisi et al., 2022; Nair et al., 2022; Xiao et al., 2022) or world models (Hafner et al., 2020; Hansen et al., 2022;b; Seo et al., 2022), for subsequent use in standard RL pipelines. In contrast, MTM uses self-supervised learning to train a single versatile model that can exhibit multiple capabilities. Transformers and Attention in RLOur work is inspired by the recent advances in AI enabled by transformers (Vaswani et al., 2017), especially in offline RL (Chen et al., 2021; Janner et al., 2021; Jiang et al., 2022) and imitation learning (Reed et al., 2022; Shafiullah et al., 2022; Brohan et al., 2022; Jiang et al., 2022; Zhou et al., 2022). Of particular relevance are works that utilize transformers in innovative ways beyond the standard RL paradigm. Decision Transformers and related methods (Schmidhuber, 2019; Srivastava et al., 2019; Chen et al., 2021) use return-conditioned imitation learning, which we also adopt in this work. However, in contrast to Chen et al. (2021) and Janner et al. (2021) who use next token prediction as the self-supervised task, we use a bi-directional masked prediction objective. 
This masking pattern enables the learning of versatile models that can take on different roles based on inference-time masking pattern. Recently, Liu et al. (2022) and Carroll et al. (2022) explore the use of bi-directional transformers for RL and we build off their work. In contrast to Liu et al. (2022) which studies downstream tasks like goal reaching and skill prompting, we study a different subset of tasks such as forward and inverse dynamics. Liu et al. (2022) also studies offline RL by applying TD3 and modifying the transformer attention mask to be causal, while we study the return conditioned behavior cloning setting. In contrast to Carroll et al. (2022), we study the broader capabilities of our model on several high-dimensional control tasks. VPT (Baker et al., 2022) also tackles sequential decision making using transformers, focusing primarily on extracting action labels with a separate inverse dynamics model. Furthermore, unlike prior work, we also demonstrate that our model has unique and favorable properties like data efficiency, heteromodality, and the capability to learn good state representations. ## 3 Masked Trajectory Modeling We now describe the details of our masked trajectory modeling paradigm, such as the problem formulation, training objective, masking patterns, and overall architecture used. ### Trajectory Datasets MTM is designed to operate on trajectory datasets that we encounter in decision making domains. Taking the example of robotics, a trajectory comprises of proprioceptive states, camera observations, control actions, task/goal commands, and so on. We can denote such a trajectory comprising of \(M\) different modalities as \[\boldsymbol{\tau}=\left\{\left(\mathbf{x}_{1}^{1},\mathbf{x}_{1}^{2},\ldots \mathbf{x}_{1}^{M}\right),\ \ldots\left(\mathbf{x}_{T}^{1},\mathbf{x}_{T}^{2},\ldots \mathbf{x}_{T}^{M}\right)\right\}, \tag{1}\] where \(\mathbf{x}_{n}^{m}\) refers to the \(m^{\rm th}\) modality in the \(t^{\rm th}\) timestep. In our empirical investigations, following prior work (Chen et al., 2021; Janner et al., 2021), we use state, action, and return-to-go (RTG) sequences as the different data modalities. Note that in-principle, our mathematical formulation is generic and can handle any modality. ### Architecture and Masked Modeling To perform masked trajectory modeling, we first "tokenize" the different elements in the raw trajectory sequence, by lifting them to a common representation space using modality-specific encoders. Formally, we compute \[\mathbf{z}_{t}^{m}=E_{\theta}^{m}(\mathbf{x}_{t}^{m})\quad\forall t\in[1,T],\ m\in[1,M],\] where \(E_{\theta}^{m}\) is the encoder corresponding to modality \(m\). We subsequently arrange the embeddings in a 1-D sequence of length \(N=M\times T\) as: \[\boldsymbol{\tau}=\left(\mathbf{z}_{1}^{1},\mathbf{z}_{1}^{2},\ldots\mathbf{z }_{1}^{M},\ldots\mathbf{z}_{t}^{m},\ldots\mathbf{z}_{T}^{M}\right).\] The self-supervised learning task in MTM is to reconstruct the above sequence conditioned on a masked view of the same. We denote the latter with \(\mathtt{Masked}(\boldsymbol{\tau})\), where we randomly drop or "mask" a subset of elements in the sequence. The final self-supervised objective is given by: \[\max_{\theta}\ \mathbb{E}_{\boldsymbol{\tau}}\sum_{t=1}^{T}\sum_{m=1}^{M}\log P_{ \theta}\left(\mathbf{z}_{t}^{m}|\mathtt{Masked}(\boldsymbol{\tau})\right), \tag{2}\] where \(P_{\theta}\) is the prediction of the model. 
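Concretely, the loss computation for one training step under this objective can be sketched as below. The tensor shapes, the single `model` callable, and the learned `[MASK]` embedding are assumptions made for the sketch; they stand in for the encoder-decoder described in the next paragraph rather than reproducing the released code.

```python
import torch
import torch.nn.functional as F

def mtm_training_loss(model, mask_token, tokens, visible):
    """Masked-trajectory-modeling reconstruction loss.

    tokens:  (B, N, d) trajectory embeddings after the modality-specific encoders,
             flattened to a length-N sequence (N = T x number of modalities).
    visible: (B, N) boolean mask; True marks tokens shown to the model.
    model:   assumed to map the masked sequence back to (B, N, d) predictions.
    """
    # Hidden positions are replaced by a learned [MASK] embedding of shape (d,).
    masked_input = torch.where(visible.unsqueeze(-1), tokens, mask_token)
    pred = model(masked_input)
    # With a Gaussian probabilistic model, the objective reduces to an MSE
    # reconstruction loss, applied here to every position in the sequence.
    return F.mse_loss(pred, tokens)
```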
This encourages the learning of a model that can reconstruct trajectories from parts of it, forcing it to learn about the environment as well as the data generating policy, in addition to good representations of the various modalities present in the trajectory. Architecture and EmbeddingsWe adopt an encoder-decoder architecture similar to He et al. (2021) and Liu et al. (2022), where both the encoder and decoder are bi-directional transformers. We use a modality-specific encoder to lift the raw trajectory inputs to a common representation space for tokens. Further, to allow the transformer to disambiguate between different elements in the sequence, a fixed sinusoidal timestep encoding and a learnable model-specific encoding are added, as illustrated in Figure 2. The resulting sequence is then flattened and fed into the transformer encoder where only unmasked tokens are processed. The decoder processes the full trajectory sequence, and uses values from the encoder when available, or a mode-specific mask token when not. The decoder is trained to predict the original sequence, including the unmasked tokens, using an MSE loss (He et al., 2021), which corresponds to a Gaussian probabilistic model. We also note that the length of episodes/trajectories in RL can be arbitrarily long. In our practical implementation, we model shorter "trajectory segments" that are randomly sub-selected contiguous segments of fixed length from the full trajectory. Masking PatternIntuitively, we can randomly mask elements in the sequence with a sufficiently high mask ratio to make the self-supervised task difficult. This has found success in computer vision (He et al., 2021). We propose to use a variation of this - a random autoregressive masking pattern. This pattern requires at least one token in the masked sequence to be autoregressive, meaning it must be predicted based only on previous tokens, and all future tokens are masked. This means the last element in each sampled trajectory segment is necessarily masked. See Figure 3 for an illustration. We note that the autoregressive mask in our context is **not** using a causal mask in attention weights, but instead corresponds to masking at the input and output token level, similar to MAE. In the case of computer vision and NLP, the entire image or sentence is often available at inference time. However, in the case of RL, the sequence data is generated as the agent interacts with the environment. As a result, at inference time, the model is forced to be causal (i.e. use only the past tokens). By using our random autoregressive masking pattern, the model both learns the underlying temporal dependencies in the data, as well as the ability to perform inference on past events. We find that this simple modification is helpful in most tasks we study. ### Mtm as a generic abstraction for RL The primary benefit of MTM is its versatility. Once trained, the MTM network can take on different roles, by simply using different masking patterns at inference time. We outline a few examples below. See Figure 3 for a visual illustration. 1. Firstly, MTM can be used as a stand-alone algorithm for offline RL, by utilizing a return-conditioned behavior cloning (RCBC) mask at inference time, analogous to DT (Chen et al., 2021) and RvS (Emmons et al., 2021). However, in contrast to DT and RvS, we use a different self-supervised pre-training task and model architecture. We find in Section 4.3 that using MTM in "RCBC-mode" outperforms DT and RvS. 2. 
Alternatively, MTM can be used to recover various components that routinely feature in traditional RL pipelines, as illustrated in Figure 3. Conceptually, by appropriate choice of masking patterns, MTM can: (a) provide state representations that accelerate the learning of traditional RL algorithms; (b) perform policy initialization through behavior cloning; (c) act as a world model for model-based RL algorithms; (d) act as an inverse dynamics model to recover action sequences that track desired reference state trajectories.

## 4 Experiments

Through detailed empirical evaluations, we aim to study the following questions. 1. Is MTM an effective algorithm for offline RL? 2. Is MTM a versatile learner? Can the same network trained with MTM be used for different capabilities without additional training? 3. Is MTM an effective heteromodal learner? Can it consume heteromodal datasets, like state-only and state-action trajectories, and effectively use such a dataset to improve performance? 4. Can MTM learn good representations that accelerate downstream learning with standard RL algorithms? See Appendix for additional details about model architecture and hyperparameters.

Figure 2: **Tokenization of the trajectory sequence** comprises three components. A modality-specific encoder lifts from the raw modality space to a common representation space, where we additionally add timestep embeddings and modality type embeddings. Collectively, these allow the transformer to distinguish between different elements in the sequence.

### Benchmark Datasets

To help answer the aforementioned questions, we draw upon a variety of continuous control tasks and datasets that leverage the MuJoCo simulator (Todorov et al., 2012). Additional environment details can be found in Appendix B. **D4RL** (Fu et al., 2020) is a popular offline RL benchmark consisting of several environments and datasets. Following a number of prior works, we focus on the locomotion subset: Walker2D, Hopper, and HalfCheetah. For each environment, we consider 4 different dataset settings: Expert, Medium-Expert, Medium, and Medium-Replay. The Expert dataset is useful for benchmarking imitation learning with BC, while the other datasets enable studying offline RL and other capabilities of MTM such as future prediction and inverse dynamics. **Adroit** (Rajeswaran et al., 2018) is a collection of dexterous manipulation tasks with a simulated five-fingered hand. We experiment with the Pen and Door tasks, which test an agent's ability to carefully coordinate a large action space to accomplish complex robot manipulation tasks. We collect Medium-Replay and Expert trajectories for each task using a protocol similar to D4RL. The **ExORL** dataset (Yarats et al., 2022) consists of trajectories collected using various unsupervised exploration algorithms.
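Before turning to the results, the random autoregressive training mask described in Section 3 can be sketched in a few lines of NumPy. This is an illustrative reconstruction rather than the released implementation; in particular, the masking ratio schedule is not restated here.

```python
import numpy as np

def random_autoregressive_mask(seq_len, mask_ratio, rng):
    """Sample a training mask over the flattened token sequence (True = visible).

    Tokens are first dropped uniformly at random; then one position is chosen to
    be "autoregressive": it and every later token are hidden, so at least one
    masked token must be predicted from past tokens only (and the last token of
    the segment is always masked)."""
    visible = rng.random(seq_len) > mask_ratio
    cut = rng.integers(seq_len)          # index of the autoregressive token
    visible[cut:] = False                # hide it together with all future tokens
    return visible
```

At inference time the same network is instead driven by deterministic masks, such as the RCBC mask used for the offline RL results below.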
\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline Environment & Dataset & BC & CQL & IQL & TT & MOPO & RsV & DT & **MTM (Ours)** \\ \hline HalfCheetah & Medium-Replay & 36.6 & 45.5 & 44.2 & 41.9 & 42.3 & 38.0 & 36.6 & 43.0 \\ Hopper & Medium-Replay & 18.1 & 95.0 & 94.7 & 91.5 & 28.0 & 73.5 & 82.7 & 92.9 \\ Walker2d & Medium-Replay & 26.0 & 77.2 & 73.9 & 82.6 & 17.8 & 60.6 & 66.6 & 77.3 \\ \hline HalfCheetah & Medium & 42.6 & 44.0 & 47.4 & 46.9 & 53.1 & 41.6 & 42.0 & 43.6 \\ Hopper & Medium & 52.9 & 58.5 & 66.3 & 61.1 & 67.5 & 60.2 & 67.6 & 64.1 \\ Walker2d & Medium & 75.3 & 72.5 & 78.3 & 79.0 & 39.0 & 71.7 & 74.0 & 70.4 \\ \hline HalfCheetah & Medium-Expert & 55.2 & 91.6 & 86.7 & 95.0 & 63.7 & 92.2 & 86.8 & 94.7 \\ Hopper & Medium-Expert & 52.5 & 105.4 & 91.5 & 110.0 & 23.7 & 101.7 & 107.6 & 112.4 \\ Walker2d & Medium-Expert & 107.5 & 108.8 & 109.6 & 101.9 & 44.6 & 106.0 & 108.1 & 110.2 \\ \hline \hline Average & & 51.9 & 77.6 & 77.0 & 78.9 & 42.2 & 71.7 & 74.7 & 78.7 \\ \hline \hline \end{tabular} \end{table} Table 1: **Results on D4RL.** Offline RL results on the V2 locomotion suite of D4RL are reported here, specified by the normalized score as described in Fu et al. (2020). We find that MTM outperforms RvS and DT, which also use RCBC for offline RL. Figure 3: **Masking Pattern for Training and Inference.** (Training: box in orange) MTM is trained to reconstruct trajectory segments conditioned on a masked view of the same. We use a random autoregressive masking pattern, where elements in the input sequence are randomly masked, with the added constraint that at least one masked token must have no future unmasked tokens. This means the last element in the sequence must necessarily be masked. We note that the input sequence can start and end on arbitrary modalities. In this illustrated example, \(R_{3}\) is the masked token that satisfies the autoregressive constraint. That is the prediction of \(R_{3}\) is conditioned on no future tokens in the sequence. (Inference: boxes in gray) By changing the masking pattern at inference time, MTM can either be used directly for offline RL using RCBC Chen et al. (2021), or be used as a component in traditional RL pipelines as a state representation, dynamics model, policy initialization, and more. These different capabilities are shown in gray. Modes not shown at the input are masked out and modes not shown at the output are not directly relevant for the task of interest. Yarats et al. (2022) showed that TD3 (Fujimoto et al., 2018) can be effectively used to learn in this benchmark. We use data collected by a ProtoRL agent (Yarats et al., 2021) in the Walker2D environment to learn three different tasks: Stand, Walk, and Run. ### Offline RL results We first test the capability of MTM to learn policies in the standard offline RL setting. To do so, we train MTM with the random autoregressive masking pattern as described in Section 3. Subsequently, we use the Return Conditioned Behavior Cloning (RCBC) mask at inference time for evaluation. This is inspired by DT (Chen et al., 2021) which uses a similar RCBC approach, but with a GPT model. Our empirical results are presented in Table 1. We find that MTM outperforms the closest algorithms of DT and RvS, suggesting that masked prediction is an effective pre-training task for offline RL when using RCBC inference mask. 
More surprisingly, MTM is competitive with highly specialized and state-of-the-art offline RL algorithms like CQL (Kumar et al., 2020) and IQL (Kostrikov et al., 2021) despite training with a purely self-supervised learning objective without any explicit RL components. ### Mtm Capabilities We next study if MTM is a versatile learner by evaluating it across four different capabilities on Adroit and D4RL datasets. We emphasize that we test these capabilities for a single MTM-model (i.e. same weights) by simply altering the masking pattern during inference time. See Figure 3 for a visual illustration of the inference-time masking patterns. 1. **Behavior Cloning (BC)**: Predict next action given state-action history. This is a standard approach to imitation learning as well as a popular initialization method for subsequent RL (Rajeswaran et al., 2018). 2. **Return Conditioned Behavior Cloning (RCBC)** is similar to BC, but additionally conditions on the desired Return-to-Go. Recent works (Chen et al., 2021; Emmons et al., 2021) have shown that RCBC can lead to successful policies in the offline RL setting. 3. **Inverse Dynamics (ID)**, where we predict the action using the current and future desired state. This can be viewed as a 1-step goal-reaching policy. It has also found application in observation-only imitation learning (Radosavovic et al., 2021; Baker et al., 2022). 4. **Forward Dynamics (FD)**, where we predict the next state given history and current action. Forward dynamics models are an integral component of several model-based RL algorithms (Janner et al., 2019; Rajeswaran et al., 2020; Hafner et al., 2020). We consider two variations of MTM. The first variant, S-MTM, trains a specialized model for each capability using the corresponding masking pattern at _train time_. The second variant, denoted simply as MTM, trains a single model using the random autoregressive mask specified in Section \begin{table} \begin{tabular}{l l l l l|c c} \hline \hline Domain & Dataset & Task & MLP & S-MTM (Ours) & MTM (Ours) & (MTM)\(\gtrsim\)(S-MTM)? 
\\ \hline \multirow{4}{*}{D4RL Hopper} & Expert & (\(\uparrow\)) BC & 111.14 \(\pm\) 0.33 & 111.81 \(\pm\) 0.18 & 107.35 \(\pm\) 7.77 & ✓ \\ & Expert & (\(\uparrow\)) RCBC & 111.17 \(\pm\) 0.56 & 112.64 \(\pm\) 0.47 & 112.49 \(\pm\) 0.37 & ✓ \\ & Expert & (\(\downarrow\)) ID & 0.009 \(\pm\) 0.000 & 0.013 \(\pm\) 0.000 & 0.050 \(\pm\) 0.026 & ✗ \\ & Expert & (\(\downarrow\)) FD & 0.072 \(\pm\) 0.000 & 0.517 \(\pm\) 0.025 & 0.088 \(\pm\) 0.049 & ✓ \\ \hline \multirow{4}{*}{D4RL Hopper} & Medium Replay & (\(\uparrow\)) BC & 35.63 \(\pm\) 6.27 & 36.17 \(\pm\) 4.09 & 29.46 \(\pm\) 6.74 & ✗ \\ & Medium Replay & (\(\uparrow\)) RCBC & 88.61 \(\pm\) 1.68 & 93.30 \(\pm\) 0.33 & 92.95 \(\pm\) 1.51 & ✓ \\ & Medium Replay & (\(\downarrow\)) ID & 0.240 \(\pm\) 0.028 & 0.219 \(\pm\) 0.008 & 0.534 \(\pm\) 0.009 & ✗ \\ & Medium Replay & (\(\downarrow\)) FD & 2.179 \(\pm\) 0.052 & 3.310 \(\pm\) 0.425 & 0.493 \(\pm\) 0.030 & ✓ \\ \hline \multirow{4}{*}{Adroit Pen} & Expert & (\(\uparrow\)) BC & 62.75 \(\pm\) 1.43 & 66.28 \(\pm\) 3.28 & 61.25 \(\pm\) 5.06 & ✓ \\ & Expert & (\(\uparrow\)) RCBC & 68.41 \(\pm\) 2.27 & 66.29 \(\pm\) 1.39 & 64.81 \(\pm\) 1.70 & ✓ \\ & Expert & (\(\downarrow\)) ID & 0.128 \(\pm\) 0.001 & 0.155 \(\pm\) 0.001 & 0.331 \(\pm\) 0.049 & ✗ \\ & Expert & (\(\downarrow\)) FD & 0.048 \(\pm\) 0.002 & 0.360 \(\pm\) 0.020 & 0.321 \(\pm\) 0.048 & ✓ \\ \hline \multirow{4}{*}{Adroit Pen} & Medium Replay & (\(\uparrow\)) BC & 33.73 \(\pm\) 1.00 & 54.84 \(\pm\) 5.08 & 47.10 \(\pm\) 7.13 & ✗ \\ & Medium Replay & (\(\uparrow\)) RCBC & 41.26 \(\pm\) 4.99 & 57.50 \(\pm\) 3.76 & 58.76 \(\pm\) 5.63 & ✓ \\ & Medium Replay & (\(\downarrow\)) ID & 0.308 \(\pm\) 0.004 & 0.238 \(\pm\) 0.004 & 0.410 \(\pm\) 0.064 & ✓ \\ & Medium Replay & (\(\downarrow\)) FD & 0.657 \(\pm\) 0.023 & 0.915 \(\pm\) 0.007 & 0.925 \(\pm\) 0.026 & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: **Evaluation of various MTM capabilities.** MTM refers to the model trained with the random autoregressive mask, and evaluated using the appropriate mask at inference time. S-MTM (“Specialized”) refers to the model that uses the appropriate mask both during training and inference time. We also compare with a specialized MLP baseline trained separately for each capability. Note that higher is better for BC and RCBC, while lower is better for FD and ID. We find that MTM is often comparable or better than training on specialized masking patterns, or training specialized MLPs. We use a box outline to indicate that a single model was used for all the evaluations within it. The right most column indicates if MTM is comparable or better than S-MTM, and we find this to be true in most cases. 3. Subsequently, the same model (i.e. same set of weights) is evaluated for all the four capabilities. We also compare our results with specialized MLP models for each capability. We evaluate the best checkpoint across all models and report mean and standard deviation across \(4\) seeds, taking the average of \(20\) trajectory executions per seed. For all experiments we train on 95% of the dataset and reserve \(5\%\) of the data for evaluation. For BC and RCBC results, we report the normalized score obtained during evaluation rollouts. For ID and FD, we report normalized loss values on the aforementioned \(5\%\) held-out data. A snapshot of our results are presented in Table 2 for a subset of environments. Please see Appendix A for detailed results on all the environments. 
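For concreteness, the four inference-time masks corresponding to these capabilities can be written schematically as follows. The per-modality layout (return-to-go, state, action over \(T\) timesteps) follows Figure 3, but the helper below is an illustration of the idea rather than the released masking code.

```python
import numpy as np

def capability_mask(capability, T):
    """Schematic inference masks (True = token provided to the model)."""
    mask = {m: np.zeros(T, dtype=bool) for m in ("rtg", "state", "action")}
    if capability == "BC":            # next action from state-action history
        mask["state"][:] = True
        mask["action"][:-1] = True
    elif capability == "RCBC":        # BC additionally conditioned on return-to-go
        mask["rtg"][:] = True
        mask["state"][:] = True
        mask["action"][:-1] = True
    elif capability == "ID":          # inverse dynamics: action from current and next state
        mask["state"][:] = True
    elif capability == "FD":          # forward dynamics: next state from history and action
        mask["state"][:-1] = True
        mask["action"][:] = True
    return mask
```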
The last column of the table indicates the performance difference between the versatile MTM and the specialized S-MTM. We find that MTM is comparable to or even better than specialized masks, and also matches the performance of specialized MLP models. We suspect that specialized masks may require additional tuning of parameters to prevent overfitting or underfitting, whereas random autoregressive masking is more robust across tasks and hyperparameters.

### Impact of Masking Patterns

We study if the masking pattern influences the capabilities of the learned model. Figure 4 shows that random autoregressive masking matches or outperforms purely random masking on RCBC for a spread of environments for offline RL. We note that pure random masking, as done in MAE and BERT, which focuses only on learning good representations, can lead to diminished performance for downstream capabilities. Random autoregressive masking mitigates these issues by allowing the learning of a single versatile model while still matching or even exceeding the performance of specialized masks, as seen in Table 2.

Figure 4: **Impact of Masking Patterns.** This plot shows MTM RCBC performance trained with three different masking patterns: random, random autoregressive, and a specialized RCBC mask. We find that random autoregressive often outperforms random, and in most cases is even competitive with the specialized (or oracle) RCBC mask. \(Y\)-axis normalized using the RCBC mask.

Figure 5: **MTM can effectively learn from heteromodal datasets.** Real world data may not always contain action labels. We simulate this setting by training MTM models on Expert datasets across domains where only a small fraction of the data have action labels. Our Heteromodal MTM model is able to effectively improve task performance with the additional data, over the baseline MTM and MLP models that train on only the subset of data with actions. \(Y\)-axis normalized with respect to the performance of Heteromodal MTM.

Figure 6: **Dataset efficiency.** We train MTM in the D4RL Hopper and Adroit Door environments across a range of dataset sizes, measured by the percent of the original dataset (\(\approx 1\) million transitions). We see that MTM is able to consistently outperform specialized MLP models in the low data regime. Furthermore, we see that Heteromodal MTM (i.e. MTM trained on heteromodal data containing both state-only and state-action trajectories) is able to provide further performance improvement in low data regimes.

### Heteromodal Datasets

MTM is uniquely capable of learning from heteromodal datasets. This is enabled by the training procedure, where any missing data can be treated as if it were masked. During training we apply the loss only to modes that exist in the dataset. For these experiments we take the Expert subset of our trajectory data and remove action labels from the majority of the dataset. The training data consists of \(1\%\) of the data with all modes (states, actions, return-to-go) and \(95\%\) of the data with no action labels. As is done in all experiments, the remainder is reserved for testing. From our initial experiments, we found that naively adding in the state-only data during training and evaluating with the RCBC mask did not always result in improved performance. This was despite improvement in forward dynamics prediction as a result of adding state-only trajectories. Based on this observation, we propose a two-stage action inference procedure. First, we predict future states given the current state and desired returns.
This can be thought of as a forward dynamics pass where the desired returns are used instead of actions, which are masked out (or more precisely, missing). Next, we predict actions from the current state and the predicted future states using the inverse dynamics mask. We refer to this model trained on heteromodal data, along with the two-stage inference procedure, as Heteromodal MTM. We present the results in Figure 5, where we find that Heteromodal MTM consistently improves performance over the baseline MLP and MTM that are trained only on the subset of data with action labels.

### Data Efficiency

Figure 5 not only showed the effectiveness of MTM on heteromodal data, but also that MTM is able to achieve higher performance than baseline (specialized) MLPs in the low-data regime. To explicitly test the data efficiency of MTM, we study the performance as a function of the training dataset size, and present results in Figure 6. We observe that MTM is more sample-efficient and achieves higher performance for any given dataset size. Heteromodal MTM also outperforms MTM throughout, with the performance gap being quite substantial in the low-data regime. We hypothesize that the data efficiency of MTM is due to better usage of the data. Specifically, since the model encounters various masks during training, it must learn general relationships between different elements. As a result, MTM may be able to squeeze out more learning signal from any given trajectory.

### Representations of MTM

Finally, we study whether the representations learned by MTM are useful for downstream learning with traditional RL algorithms. If this is the case, MTM can also be interpreted as an offline pre-training exercise to help downstream RL. To instantiate this in practice, we consider the setting of offline RL using TD3 on the ExORL dataset. The baseline method is to simply run TD3 on this dataset using the raw state as input to the TD3 algorithm. We compare this to our proposed approach of using MTM state representations for TD3. To do this, we first pretrain an MTM model on state-action sequences in the ExORL dataset. Subsequently, to use state representations from MTM, we simply use the MTM encoder to tokenize and encode each state individually. This latent representation of the state can be used in the place of raw states for the TD3 algorithm. The critic of TD3 is conditioned on states and actions. We additionally test state-action representations of MTM by using the latent representation of the state and action encoded jointly with MTM. We allow end-to-end finetuning of the representations during training. We compare training TD3 on raw states to training TD3 with (a) state representations from the MTM model, and (b) state-action representations from the MTM model with the offline RL loss (i.e. the TD3 objective).
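The sketch below spells out this setup. The encoder interface (a `tokenize` method, the latent size) and the helper itself are assumptions made for illustration only, not the actual API used in the experiments.

```python
import torch

def mtm_state_features(mtm_encoder, states, finetune: bool = True):
    """Hypothetical helper: use a pretrained MTM encoder as a state featurizer.

    Each raw state is tokenized and encoded individually; the resulting latent
    replaces the raw observation as input to the TD3 actor and critic. When
    `finetune` is True the encoder stays in the computation graph so that its
    weights can be updated end to end by the TD3 losses.
    """
    ctx = torch.enable_grad() if finetune else torch.no_grad()
    with ctx:
        tokens = mtm_encoder.tokenize(states)   # (batch, state_dim) -> token sequence
        latent = mtm_encoder(tokens)            # (batch, d_model) latent features
    return latent

# For the state-action variant, the critic would instead receive a latent that
# jointly encodes the (state, action) pair with the same pretrained encoder.
```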
Figure 7: **MTM Representations enable faster learning.** The plot visualizes a walker agent's performance as it is trained using TD3 on different representations across 3 tasks (Stand, Walk, Run). The agent is trained completely offline using data from the ExORL dataset. For MTM state representations, we encode the raw state with MTM. MTM state-action representations additionally jointly encode the state and action for the critic of TD3. The learning curves show that finetuned MTM representations enable the agent to more quickly learn the task at hand, reaching or exceeding the asymptotic performance of TD3 on raw states. Both MTM state representations and MTM state-action representations are comparable in terms of learning speed and performance. In addition, we see that in some cases, like the Run task, state-action representations from MTM help achieve better performance than alternatives. We also show the asymptotic performance reached by TD3 on raw states and actions after training for 100000 iterations and plot the average of 5 seeds.

Figure 7 depicts the learning curves for the aforementioned experiment. In all cases we see significant improvement in training efficiency by using MTM representations - both with state and state-action representations. In the Walk task, we note it actually _improves_ over the asymptotic performance of the base TD3 (Fujimoto et al., 2018) algorithm within 10% of the training budget. Additionally, we find that the state-action representation from MTM can provide significant benefits, as in the case of the Walk task. Here, finetuning the state-action representation from MTM leads to better asymptotic performance compared to the state-only representation or learning from scratch. We provide additional plots of MTM frozen representations in Appendix E.3.

## 5 Summary

In this paper, we introduced MTM as a versatile and effective approach for sequential decision making. We empirically evaluated the performance of MTM on a variety of continuous control tasks and found that a single pretrained model (i.e. same weights) can be used for different downstream purposes like inverse dynamics, forward dynamics, imitation learning, offline RL, and representation learning. This is accomplished by simply changing the masks used at inference time. In addition, we showcase how MTM enables training on heterogeneous datasets without any change to the algorithm. Future work includes incorporating MTM training into online learning algorithms for more sample-efficient learning, scaling MTM to longer trajectory sequences, and extending it to more complex modalities like videos.

## Acknowledgements

The authors thank researchers and students in Meta AI and Berkeley Robot Learning Lab for valuable discussions. Philipp Wu was supported in part by the NSF Graduate Research Fellowship Program. Arjun Majumdar was supported in part by ONR YIP and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, any sponsor, or employer.
2303.01137
Algebraic Monograph Transformations
Monographs are graph-like structures with directed edges of unlimited length that are freely adjacent to each other. The standard nodes are represented as edges of length zero. They can be drawn in a way consistent with standard graphs and many others, like E-graphs or $\infty$-graphs. The category of monographs shares many properties with the categories of graph structures (algebras of monadic many-sorted signatures), except that there is no terminal monograph. It is universal in the sense that its slice categories (or categories of typed monographs) are equivalent to the categories of graph structures. Type monographs thus emerge as a natural way of specifying graph structures. A detailed analysis of single and double pushout transformations of monographs is provided, and a notion of attributed typed monographs generalizing typed attributed E-graphs is analyzed w.r.t. attribute-preserving transformations.
Thierry Boy de la Tour
2023-03-02T10:33:14Z
http://arxiv.org/abs/2303.01137v1
# Algebraic Monograph Transformations

###### Abstract

Monographs are graph-like structures with directed edges of unlimited length that are freely adjacent to each other. The standard nodes are represented as edges of length zero. They can be drawn in a way consistent with standard graphs and many others, like E-graphs or \(\infty\)-graphs. The category of monographs shares many properties with the categories of graph structures (algebras of monadic many-sorted signatures), except that there is no terminal monograph. It is universal in the sense that its slice categories (or categories of typed monographs) are equivalent to the categories of graph structures. Type monographs thus emerge as a natural way of specifying graph structures. A detailed analysis of single and double pushout transformations of monographs is provided, and a notion of attributed typed monographs generalizing typed attributed E-graphs is analyzed w.r.t. attribute-preserving transformations.

**Keywords:** Algebraic Graph Transformation, Graph Structures, Typed Graphs

## 1 Introduction

Many different notions of graphs are used in mathematics and computer science: simple graphs, directed graphs, multigraphs, hypergraphs, etc. One favourite notion in the context of logic and rewriting is that also known as _quivers_, i.e., structures of the form \((N,E,s,t)\) where \(N,E\) are sets and \(s,t\) are functions from \(E\) (edges) to \(N\) (nodes), identifying the source and target tips of every edge (or arrow). One reason for this is that the category of quivers is isomorphic to the category of algebras of the many-sorted signature with two sorts nodes and edges and two operator names src and tgt of type edges\(\rightarrow\)nodes. In conformity with this tradition, by _graph_ we mean quiver throughout this paper. In order to conveniently represent elaborate data structures it is often necessary to enrich the structure of graphs with attributes: nodes or edges may be labelled with elements from a fixed set, or with values taken in some algebra, or with sets of values as in [1], etc. An interesting example can be found in [2] with the notion of E-graphs, since the attributes are also considered as nodes. More precisely, an E-graph is an algebra whose signature can be represented by the following graph: The names given to the sorts and operators help to understand the structure of E-graphs: the edges relate the nodes among themselves, the nv-edges relate the nodes to the values, and the ev-edges relate the edges to the values. Hence the sort values holds attributes that are also nodes. But then we see that in E-graphs the ev-edges are adjacent to edges. This is non-standard, but we may still accept such structures as some form of graph, if only because we understand how they can be drawn. Hence the way of generalizing the notion of graphs seems to involve a generalization of the signature of graphs considered as algebras. This path has been followed by Michael Löwe in [3], where a _graph structure_ is defined as a monadic many-sorted signature. Indeed in the examples above, and in many examples provided in [3], all operators have arity 1 and can therefore be considered as edges from their domain to their range sort. Is this the reason why they are called graph structures? But the example above shows that E-graphs are very different from the graph that represents their signature.
Besides, it is not convenient that our understanding of such structures should be based on syntax, i.e., on the particular names given to sorts and operators in the signature. Furthermore, it is difficult to see how the algebras of some very simple monadic signatures can be interpreted as graphs of any form. Take for instance the signature of graphs and reverse the target function to \(\mathtt{tgt}:\mathtt{nodes}\rightarrow\mathtt{edges}\). Then there is a symmetry between the sorts nodes and edges, which means that in an algebra of this signature nodes and edges would be objects of the same nature. Is this still a graph? Can we draw it? Worse still, if the two sorts are collapsed into one, does it mean that a node/edge can be adjacent to itself? We may address these problems by restricting graph structures to some class of monadic signatures whose algebras are guaranteed to behave in an orthodox way, say by exhibiting clearly separated edges and nodes. But this could be prone to arbitrariness, and it would still present another drawback: that the notion of graph structure does not easily give rise to a category. Indeed, it is difficult to define morphisms between algebras of different signatures, if only because they can have any number of carrier sets. The approach adopted here is rather to reject any _structural_ distinction between nodes and edges, hence to adopt a unified view of nodes as edges of length 0, and standard edges as edges of length 2 since they are adjacent to two nodes. This unified view logically allows edges to be adjacent to any edges and not just to nodes, thus generalizing the ev-edges of E-graphs, and even to edges that are adjacent to themselves. Finally, there is no reason to restrict the length of edges to \(0\) or \(2\), and we will find good reasons (in Section 6) for allowing edges of infinite, ordinal length. The necessary notions and notations are introduced in Section 2. The structure of _monograph_ (together with morphisms) is defined in Section 3, yielding a bestiary of categories of monographs according to some of their characteristics. The properties of these categories w.r.t. the existence of limits and co-limits are analyzed in Section 4. We then see in Section 5 how monographs can be accurately represented by drawings, provided of course that they have finitely many edges and that these have finite length. In particular, such drawings correspond to the standard way of drawing a graph for those monographs that can be identified with standard graphs, and similarly for E-graphs. Section 6 is devoted to the comparison between monographs and graph structures, and the corresponding algebras (that we may call _graph structured algebras_). We show a property of universality of monographs, in the sense that all graph structured algebras can be represented (though usually not in a canonical way) as _typed monographs_, i.e., as morphisms of monographs. The notion of graph structure has been introduced in [3] in order to obtain categories of partial homomorphisms in which techniques of algebraic graph rewriting could be carried out. The correspondence with monographs established in Section 6 calls for a similar development of partial morphisms of monographs in Section 7. The single and double pushout methods of rewriting monographs can then be defined, analyzed and compared in Section 8. The notion of E-graph has been introduced in [2] in order to obtain well-behaved categories (w.r.t. 
graph rewriting) of _attributed graphs_, and hence to propose suitable representations of real-life data structures. This is achieved by enriching E-graphs with a data type algebra, and by identifying nodes of sort value with the elements of this algebra. We pursue a similar approach in Section 9 with the notion of _attributed typed monograph_ by identifying elements of an algebra with edges, and obtain similarly well-behaved categories. Due to the universality of monographs we see that any \(\Sigma\)-algebra can be represented as an attributed typed monograph. We conclude in Section 10. Note that parts of Sections 4 to 6 have been published in [4].

## 2 Basic Definitions and Notations

### Sets

For any sets \(A\), \(B\), relation \(R\subseteq A\times B\) and subset \(X\subseteq A\), let \(R[X]\stackrel{{\mbox{\tiny def}}}{{=}}\{y\in B\mid x\in X \wedge(x,y)\in R\}\). For any \(x\in A\), by abuse of notation we write \(R[x]\) for \(R[\{x\}]\). If \(R\) is functional we write \(R(x)\) for the unique element of \(R[x]\), and if \(S\subseteq C\times D\) is also functional and \(R[A]\subseteq C\) let \(S\circ R\stackrel{{\mbox{\tiny def}}}{{=}}\{(x,S(R(x)))\mid x\in A\}\). A _function_ \(f:A\to B\) is a triple \((A,R,B)\) where \(R\subseteq A\times B\) is a functional relation. We write \(f[X]\) and \(f(x)\) for \(R[X]\) and \(R(x)\) respectively. For any \(Y\supseteq f[X]\), let \(f|_{X}^{Y}\stackrel{{\mbox{\tiny def}}}{{=}}(X,R\cap(X\times Y),Y)\) and \(f|_{X}\stackrel{{\mbox{\tiny def}}}{{=}}f|_{X}^{B}\). A function \(g=(C,S,D)\) may be composed on the left with \(f\) if \(B=C\), and then \(g\circ f\stackrel{{\mbox{\tiny def}}}{{=}}(A,S\circ R,D)\). If \(R[A]\subseteq C\) we may write \(g\circ R\) or \(S\circ f\) for \(S\circ R\). Sets and functions form the category \(\mathbf{Sets}\) with identities \(\operatorname{Id}_{A}\stackrel{{\mbox{\tiny def}}}{{=}}(A,\{(x,x )\mid x\in A\},A)\). In \(\mathbf{Sets}\) we use the standard product \((A\times B,\pi_{1},\pi_{2})\) and coproduct \((A+B,\mu_{1},\mu_{2})\) of pairs of sets \((A,B)\). The elements \(p\in A\times B\) are pairs of elements of \(A\) and \(B\), i.e., \(p=(\pi_{1}(p),\pi_{2}(p))\). For functions \(f:C\to A\) and \(g:C\to B\) we write \(\langle f,g\rangle:C\to A\times B\) for the unique function such that \(\pi_{1}\circ\langle f,g\rangle=f\) and \(\pi_{2}\circ\langle f,g\rangle=g\), i.e., \(\langle f,g\rangle(z)\stackrel{{\mbox{\tiny def}}}{{=}}(f(z),g(z))\) for all \(z\in C\). The elements of \(A+B\) are pairs \(\mu_{1}(x)\stackrel{{\mbox{\tiny def}}}{{=}}(x,0)\) or \(\mu_{2}(y)\stackrel{{\mbox{\tiny def}}}{{=}}(y,1)\) for all \(x\in A\) and \(y\in B\), so that \(A^{\prime}\subseteq A\) and \(B^{\prime}\subseteq B\) entail \(A^{\prime}+B^{\prime}=\mu_{1}[A^{\prime}]\cup\mu_{2}[B^{\prime}]\). An _ordinal_ is a set \(\alpha\) such that every element of \(\alpha\) is a subset of \(\alpha\), and such that the restriction of the membership relation \(\in\) to \(\alpha\) is a strict well-ordering of \(\alpha\) (a total order where every non empty subset of \(\alpha\) has a minimal element). Every member of an ordinal is an ordinal, and we write \(\lambda<\alpha\) for \(\lambda\in\alpha\). For any two ordinals \(\alpha\), \(\beta\) we have either \(\alpha<\beta\), \(\alpha=\beta\) or \(\alpha>\beta\) (see e.g. [5]). Every ordinal \(\alpha\) has a successor \(\alpha\cup\{\alpha\}\), denoted \(\alpha+1\).
Natural numbers \(n\) are identified with finite ordinals, so that \(n=\{0,1,\dots,n-1\}\) and \(\omega\stackrel{{\mbox{\tiny def}}}{{=}}\{0,1,\dots\}\) is the smallest infinite ordinal. ### Sequences For any set \(E\) and ordinal \(\lambda\), an _\(E\)-sequence \(s\) of length \(\lambda\)_ is an element of \(E^{\lambda}\), i.e., a function \(s:\lambda\to E\). Let \(\varepsilon\) be the only element of \(E^{0}\) (thus leaving \(E\) implicit), and for any \(e\in E\) let \(e\mathord{\uparrow}\lambda\) be the only element of \(\{e\}^{\lambda}\). For any \(s\in E^{\lambda}\) and \(\iota<\lambda\), the image of \(\iota\) by \(s\) is written \(s_{\iota}\). If \(\lambda\) is finite and non zero then \(s\) can be described as \(s=s_{0}\cdots s_{\lambda-1}\). For any \(x\in E\) we write \(x\mid s\) and say that \(x\)_occurs in \(s\)_ if there exists \(\iota<\lambda\) such that \(s_{\iota}=x\). For any ordinal \(\alpha\), let \(E^{<\alpha}\stackrel{{\mbox{\tiny def}}}{{=}}\bigcup_{\lambda< \alpha}E^{\lambda}\); this is a disjoint union. For any \(s\in E^{<\alpha}\) let \(|s|\) be the length of \(s\), i.e., the unique \(\lambda<\alpha\) such that \(s\in E^{\lambda}\). For any set \(F\) and function \(f:E\to F\), let \(f^{<\alpha}:E^{<\alpha}\to F^{<\alpha}\) be the function defined by \(f^{<\alpha}(s)\stackrel{{\mbox{\tiny def}}}{{=}}f\circ s\) for all \(s\in E^{<\alpha}\). We have \(\operatorname{Id}_{E}^{<\alpha}=\operatorname{Id}_{E^{<\alpha}}\) and \((g\circ f)^{<\alpha}=g^{<\alpha}\circ f^{<\alpha}\) for all \(g:F\to G\). Since \(s\in E^{\lambda}\) entails \(f\circ s\in F^{\lambda}\), then \(|f^{<\alpha}(s)|=|s|\). If \(s\) and \(s^{\prime}\) are respectively \(E\)- and \(F\)-sequences of length \(\lambda\), then they are both functions with domain \(\lambda\) hence there is a function \(\langle s,s^{\prime}\rangle\) of domain \(\lambda\). Thus \(\langle s,s^{\prime}\rangle\) is an \((E\times F)\)-sequence of length \(\lambda\), and then \(\pi_{1}^{<\alpha}(\langle s,s^{\prime}\rangle)=\pi_{1}\circ\langle s,s^{ \prime}\rangle=s\) and similarly \(\pi_{2}^{<\alpha}(\langle s,s^{\prime}\rangle)=s^{\prime}\) for all \(\alpha>\lambda\). If \(f:E\to F\) and \(g:E\to G\) then \(\langle f,g\rangle:E\to F\times G\), hence for all \(s\in E^{<\alpha}\) of length \(\lambda<\alpha\) we have \(\langle f,g\rangle^{<\alpha}(s)=\langle f,g\rangle\circ s=\langle f\circ s,g \circ s\rangle=\langle f^{<\alpha}(s),g^{<\alpha}(s)\rangle\) is an \((F\times G)\)-sequence of length \(\lambda\). For \(s\in E^{<\omega}\) and \((A_{e})_{e\in E}\) an \(E\)-indexed family of sets, let \(A_{s}\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{\iota<|s|}A_{s_{\iota}}\). In particular we take \(A_{\varepsilon}\stackrel{{\mbox{\tiny def}}}{{=}}1\) as a terminal object in \(\mathbf{Sets}\). For \((B_{e})_{e\in E}\) an \(E\)-indexed family of sets and \((f_{e}:A_{e}\to B_{e})_{e\in E}\) an \(E\)-indexed family of functions, let \(f_{s}\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{\iota<|s|}f_{s_{\iota}} :A_{s}\to B_{s}\). ### Signatures and Algebras A _signature_ is a function1\(\Sigma:\Omega\to S^{<\omega}\), such that \(\Sigma(o)\neq\varepsilon\) for all \(o\in\Omega\). The elements of \(\Omega\) are called _operator names_ and those of \(S\)_sorts_. 
The _arity_ of an operator name \(o\in\Omega\) is the finite ordinal \(n\stackrel{{\mbox{\tiny def}}}{{=}}|\Sigma(o)|-1\), its _range_ is \(\mbox{Rng}(o)\stackrel{{\mbox{\tiny def}}}{{=}}\Sigma(o)_{n}\) (the last element of the \(S\)-sequence \(\Sigma(o)\)) and its _domain_ is \(\mbox{Dom}(o)\stackrel{{\mbox{\tiny def}}}{{=}}\Sigma(o)|_{n}\) (the rest of the sequence). \(o\) is _monadic_ if \(n=1\). The signature \(\Sigma\) is _finite_ if \(\Omega\) and \(S\) are finite, it is a _graph structure_ if all its operator names are monadic. Footnote 1: For the sake of simplicity we do not allow the overloading of operator names as in [6]. These names will turn out to be irrelevant anyway. A _\(\Sigma\)-algebra_\(\mathcal{A}\) is a pair \(((\mathcal{A}_{s})_{s\in S},(o^{\mathcal{A}})_{o\in\Omega})\) where \((\mathcal{A}_{s})_{s\in S}\) is an \(S\)-indexed family of sets and \((o^{\mathcal{A}}:\mathcal{A}_{\mbox{\scriptsize Dom}(o)}\to\mathcal{A}_{ \mbox{\scriptsize Rng}(o)})_{o\in\Omega}\) is an \(\Omega\)-indexed family of functions. A _\(\Sigma\)-homomorphism_\(h\) from \(\mathcal{A}\) to a \(\Sigma\)-algebra \(\mathcal{B}\) is an \(S\)-indexed family of functions \((h_{s}:\mathcal{A}_{s}\to\mathcal{B}_{s})_{s\in S}\) such that \[o^{\mathcal{B}}\circ h_{\mbox{\scriptsize Dom}(o)}=h_{\mbox{\scriptsize Rng}( o)}\circ o^{\mathcal{A}}\] for all \(o\in\Omega\). Let \(1_{\mathcal{A}}\stackrel{{\mbox{\tiny def}}}{{=}}(\mbox{Id}_{ \mathcal{A}_{s}})_{s\in S}\) and for any \(\Sigma\)-homomorphism \(k:\mathcal{B}\to\mathcal{C}\), the \(\Sigma\)-homomorphism \(k\circ h:\mathcal{A}\to\mathcal{C}\) is defined by \((k\circ h)_{s}\stackrel{{\mbox{\tiny def}}}{{=}}k_{s}\circ h_{s}\) for all \(s\in S\). Let \(\Sigma\)-**Alg** be the category of \(\Sigma\)-algebras and \(\Sigma\)-homomorphisms. ### Categories We assume familiarity with the notions of functors, limits, colimits and their preservation and reflection by functors, see [7]. Isomorphism between objects in a category is denoted by \(\simeq\) and equivalence between categories by \(\approx\). For any object \(T\) of \(\boldsymbol{A}\), the _slice category_\(\boldsymbol{A}\backslash T\) has as objects the morphisms of codomain \(T\) of \(\boldsymbol{A}\), as morphisms from object \(a:A\to T\) to object \(b:B\to T\) the morphisms \(f:A\to B\) of \(\boldsymbol{A}\) such that \(b\circ f=a\), and the composition of morphisms in \(\boldsymbol{A}\backslash T\) is defined as the composition of the underlying morphisms in \(\boldsymbol{A}\) (see [2] or [7, Definition 4.19]). ## 3 Monographs and their Morphisms **Definition 3.1** (monographs, edges, ordinal for \(A\)).: _A set \(A\) is a monograph if there exists a set \(E\) (whose elements are called edges of \(A\)) and an ordinal \(\alpha\) (said to be an ordinal for \(A\)) such that \((E,A,E^{<\alpha})\) is a function._ A monograph is therefore a functional relation, which means that its set of edges is uniquely determined. On the contrary, there are always infinitely many ordinals for a monograph. As running example we consider the monograph \(A=\{(x,x\,y\,x),(y,y\,x\,y)\}\) then its set of edges is \(E=\{x,y\}\). Since \(A(x)\) and \(A(y)\) are elements of \(E^{3}\subseteq E^{<4}\), then \((E,A,E^{<4})\) is a function. Hence 4 is an ordinal for \(A\), and so are all the ordinals greater than 4. It is easy to see that for any set of monographs there exists a common ordinal for all its members. 
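For concreteness, finite \(\omega\)-monographs can be manipulated directly as finite maps. The snippet below encodes the running example as a Python dictionary mapping each edge to the tuple of its adjacent edges; this encoding is only an informal illustration and plays no role in the formal development.

```python
# The running example: a monograph with two edges x and y and no nodes.
A = {"x": ("x", "y", "x"), "y": ("y", "x", "y")}

def edges(M):
    """Set of edges of a (finite) monograph M."""
    return set(M)

def trace(M):
    """tr(M): the set of lengths of the edges of M."""
    return {len(seq) for seq in M.values()}

def is_monograph(M):
    """Every adjacency sequence must be a sequence over the edge set itself."""
    E = edges(M)
    return all(e in E for seq in M.values() for e in seq)

assert is_monograph(A)
assert trace(A) == {3}                            # both edges have length 3, so 4 is an ordinal for A
assert not any(len(seq) == 0 for seq in A.values())  # A has no nodes, hence it is not standard
```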
**Definition 3.2** (length \(|x|\), edge \(x_{\iota}\), trace \(\operatorname{tr}(A)\), \(O\)-monographs).: _For any monograph \(A\) with set of edges \(E\), the length of an edge \(x\in E\) is the length \(|A(x)|\), also written \(|x|\) if there is no ambiguity. Similarly, for any \(\iota<|x|\) we may write \(x_{\iota}\) for \(A(x)_{\iota}\). The trace of \(A\) is the set \(\operatorname{tr}(A)\stackrel{{\text{\tiny def}}}{{=}}\{|x|\mid x \in E\}\). For any set \(O\) of ordinals, \(A\) is an \(O\)-monograph if \(\operatorname{tr}(A)\subseteq O\)._ Since any ordinal is a set of ordinals, we see that an ordinal \(\alpha\) is for a monograph iff this is an \(\alpha\)-monograph. Hence all edges of a monograph have finite length iff it is an \(\omega\)-monograph. **Definition 3.3** (adjacency, nodes \(\operatorname{N}_{A}\), standard monographs).: _For any monograph \(A\) and edges \(x,y\) of \(A\), \(x\) is adjacent to \(y\) if \(y\mid A(x)\). A node is an edge of length 0, and the set of nodes of \(A\) is written \(\operatorname{N}_{A}\). \(A\) is standard if \(y\mid A(x)\) entails \(y\in\operatorname{N}_{A}\), i.e., all edges are sequences of nodes._ The running example \(A\) has no nodes and is therefore not standard. Since \(A(x)=x\,y\,x\) then \(x\) is adjacent to \(y\) and to itself. Similarly, \(A(y)=y\,x\,y\) yields that \(y\) is adjacent to \(x\) and to itself. In this case the adjacency relation is symmetric, but this is not generally the case, e.g., a node is never adjacent to any edge, while edges may be adjacent to nodes. **Definition 3.4** (morphisms of monographs).: _A morphism \(f\) from monograph \(A\) to monograph \(B\) with respective sets of edges \(E\) and \(F\), denoted \(f:A\to B\), is a function \(f:E\to F\) such that \(f^{<\alpha}\circ A=B\circ f\), where \(\alpha\) is any ordinal for \(A\)._ Building on the running example, we consider the permutation \(f=(x\,y)\) of \(E\) (in cycle notation), we see that \(f^{<4}\circ A(x)=f^{<4}(x\,y\,x)=y\,x\,y=A(y)=A\circ f(x)\) and similarly that \(f^{<4}\circ A(y)=f^{<4}(y\,x\,y)=x\,y\,x=A(x)=A\circ f(y)\), hence \(f^{<4}\circ A=A\circ f\) and \(f\) is therefore a morphism from \(A\) to \(A\). Since \(f\circ f=\operatorname{Id}_{E}\) is obviously the identity morphism \(1_{A}\) then \(f\) is an isomorphism. Note that the terms of the equation \(f^{<\alpha}\circ A=B\circ f\) are functional relations and not functions. One essential feature is that this equation holds for all ordinals \(\alpha\) for \(A\) iff it holds for one. Thus if we are given a morphism then we know that the equation holds _for all_ big enough \(\alpha\)'s, and if we want to prove that a function is a morphism then we need only prove that _there exists_ a big enough \(\alpha\) such that the equation holds. This equation is of course equivalent to \(f^{<\alpha}\circ A(x)=B\circ f(x)\) for all \(x\in E\). The terms of this last equation are \(F\)-sequences that should therefore have the same length: \[|x|=|A(x)|=|f^{<\alpha}\circ A(x)|=|B\circ f(x)|=|f(x)|,\] i.e., the length of edges are preserved by morphisms. Hence \(\operatorname{tr}(A)\subseteq\operatorname{tr}(B)\), and the equality holds if \(f\) is surjective. This means that if \(B\) is an \(O\)-monograph then so is \(A\), and that every ordinal for \(B\) is an ordinal for \(A\). 
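Continuing the dictionary encoding sketched above, the defining equation \(f^{<\alpha}\circ A=B\circ f\) can be checked pointwise on finite \(\omega\)-monographs: map \(f\) over each adjacency sequence of \(A\) and compare with the adjacency sequence of the image edge in \(B\). The helper below and the verification of the permutation \(f=(x\,y)\) are again purely illustrative.

```python
A = {"x": ("x", "y", "x"), "y": ("y", "x", "y")}
f = {"x": "y", "y": "x"}          # the permutation (x y) on the edges of A

def is_morphism(f, A, B):
    """Check the defining equation pointwise: f mapped over A(x) must equal B(f(x))."""
    return all(tuple(f[e] for e in A[x]) == B[f[x]] for x in A)

assert is_morphism(f, A, A)                        # f is a morphism (indeed an isomorphism)
assert all(len(A[f[x]]) == len(A[x]) for x in A)   # and it preserves the length of edges
```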
This also means that the images of nodes can only be nodes: \[f^{-1}[\mathrm{N}_{B}]=\{x\in E\mid|f(x)|=0\}=\{x\in E\mid|x|=0\}=\mathrm{N}_{A}.\] The sequences \(f^{<\alpha}\circ A(x)\) and \(B\circ f(x)\) should also have the same elements \[(f^{<\alpha}\circ A(x))_{\iota}=(f\circ(A(x)))_{\iota}=f(A(x)_{ \iota})=f(x_{\iota})\] \[\text{and }(B\circ f(x))_{\iota}=B(f(x))_{\iota}=f(x)_{\iota}\] for all \(\iota<|x|\). Thus \(f:E\to F\) is a morphism iff \[|f(x)|=|x|\text{ and }f(x_{\iota})=f(x)_{\iota}\text{ for all }x\in E\text{ and all }\iota<|x|.\] Assuming that \(f:A\to B\) is a morphism and that \(B\) is standard, we have \(f(x_{\iota})=f(x)_{\iota}\in\mathrm{N}_{B}\) thus \(x_{\iota}\in f^{-1}[\mathrm{N}_{B}]=\mathrm{N}_{A}\) for all \(x\in E\) and \(\iota<|x|\), hence \(A\) is also standard. Given morphisms \(f:A\to B\) and \(g:B\to C\), we see that \(g\circ f\) is a morphism from \(A\) to \(C\) by letting \(\alpha\) be an ordinal for \(B\), so that \[(g\circ f)^{<\alpha}\circ A=g^{<\alpha}\circ f^{<\alpha}\circ A=g^{<\alpha} \circ B\circ f=C\circ g\circ f.\] **Definition 3.5** (categories of monographs, functor \(\mathsf{E}\)).: _Let_ **Monogr** _be the category of monographs and their morphisms. Let_ **SMonogr** _be its full subcategory of standard monographs. For any set \(O\) of ordinals, let \(O\)-_**Monogr** _(resp. \(O\)-_**SMonogr**_) be the full subcategory of \(O\)-monographs (resp. standard \(O\)-monographs). Let_ **FMonogr** _be the full subcategory of finite \(\omega\)-monographs._ _Let \(\mathsf{E}\) be the forgetful functor from_ **Monogr** _to_ **Sets**_, i.e., for every monograph \(A\) let \(\mathsf{E}A\) be the set of edges of \(A\), and for every morphism \(f:A\to B\) let \(\mathsf{E}f:\mathsf{E}A\to\mathsf{E}B\) be the underlying function, usually denoted \(f\)._ There is an obvious similitude between standard \(\{0,2\}\)-monographs and graphs. It is actually easy to define a functor \(\mathsf{M}:\textbf{Graphs}\to\{0,2\}\)-**SMonogr** by mapping any graph \(G=(N,E,s,t)\) to the monograph \(\mathsf{M}G\) whose set of edges is the coproduct \(N+E\), and that maps every edge \(e\in E\) to the sequence of nodes \(s(e)\,t(e)\) (and of course every node \(x\in N\) to \(\varepsilon\)). Similarly graph morphisms are transformed into morphisms of monographs through a coproduct of functions. It is easy to see that \(\mathsf{M}\) is an equivalence of categories. It is customary in Algebraic Graph Transformation to call _typed graphs_ the objects of **Graphs\(\backslash G\)**, where \(G\) is a graph called _type graph_, see e.g. [2]. We will extend this terminology to monographs and refer to the objects of **Monogr\(\backslash T\)** as the _monographs typed by \(T\)_ and \(T\) as a _type monograph_. ## 4 Limits and Colimits The colimits of monographs follow the standard constructions of colimits in **Sets** and **Graphs**. **Lemma 4.1**.: _Every pair \((A,B)\) of monographs has a coproduct \((A+B,\mu_{1},\mu_{2})\) such that \(\operatorname{tr}(A+B)=\operatorname{tr}(A)\cup\operatorname{tr}(B)\) and if \(A\) and \(B\) are finite (resp. standard) then so is \(A+B\)._ Proof.: Let \(\alpha\) be an ordinal for \(A\) and \(B\), and \((\mathsf{E}A+\mathsf{E}B,\mu_{1},\mu_{2})\) be the coproduct of \((\mathsf{E}A,\mathsf{E}B)\) in \(\mathbf{Sets}\). 
Since every element of \(\mathsf{E}A+\mathsf{E}B\) is either a \(\mu_{1}(x)\) or a \(\mu_{2}(y)\) for some \(x\in\mathsf{E}A\), \(y\in\mathsf{E}B\), we can define a monograph \(C\) by taking \(\mathsf{E}C\stackrel{{\mbox{\tiny def}}}{{=}}\mathsf{E}A+ \mathsf{E}B\) with \(C(\mu_{1}(x))\stackrel{{\mbox{\tiny def}}}{{=}}\mu_{1}^{<\alpha} \circ A(x)\) and \(C(\mu_{2}(y))\stackrel{{\mbox{\tiny def}}}{{=}}\mu_{2}^{<\alpha} \circ B(y)\) for all \(x\in\mathsf{E}A\), \(y\in\mathsf{E}B\), so that \(\mu_{1}:A\to C\) and \(\mu_{2}:B\to C\) are morphisms. It is obvious that \(\operatorname{tr}(C)=\operatorname{tr}(A)\cup\operatorname{tr}(B)\) and if \(A\) and \(B\) are finite (resp. standard) then so is \(C\). Let \(f:A\to D\) and \(g:B\to D\), there exists a unique function \(h\) from \(\mathsf{E}A+\mathsf{E}B=\mathsf{E}C\) to \(\mathsf{E}D\) such that \(f=h\circ\mu_{1}\) and \(g=h\circ\mu_{2}\), hence \[h^{<\alpha}\circ C(\mu_{1}(x))=(h\circ\mu_{1})^{<\alpha}\circ A(x)=f^{<\alpha} \circ A(x)=D\circ f(x)=D\circ h(\mu_{1}(x))\] for all \(x\in\mathsf{E}A\), and similarly \(h^{<\alpha}\circ C(\mu_{2}(y))=D\circ h(\mu_{2}(y))\) for all \(y\in\mathsf{E}B\), hence \(h^{<\alpha}\circ C=D\circ h\), i.e., \(h:C\to D\) is a morphism. **Lemma 4.2**.: _Every pair of parallel morphisms \(f,g:A\to B\) has a coequalizer \((Q,c)\) such that \(\operatorname{tr}(Q)=\operatorname{tr}(B)\) and if \(B\) is finite (resp. standard) then so is \(Q\)._ Proof.: Let \(\alpha\) be an ordinal for \(B\) and \(\sim\) be the smallest equivalence relation on \(\mathsf{E}B\) that contains \(R=\{(f(x),g(x))\mid x\in\mathsf{E}A\}\) and \(c:\mathsf{E}B\to\mathsf{E}B/\!\sim\) be the canonical surjection, so that \(c\circ f=c\circ g\). We thus have for all \(x\in\mathsf{E}A\) that \[c^{<\alpha}\circ B\circ f(x)=(c\circ f)^{<\alpha}\circ A(x)=(c\circ g)^{< \alpha}\circ A(x)=c^{<\alpha}\circ B\circ g(x).\] For all \(y,y^{\prime}\in\mathsf{E}B\) such that \(c(y)=c(y^{\prime})\), i.e., \(y\sim y^{\prime}\), there is a finite sequence \(y_{0},\dots,y_{n}\) of elements of \(\mathsf{E}B\) such that \(y_{0}=y\), \(y_{n}=y^{\prime}\) and \(y_{i}\)\(R\)\(y_{i+1}\) or \(y_{i+1}\)\(R\)\(y_{i}\) for all \(0\leqslant i<n\), hence \(c^{<\alpha}\circ B(y_{i})=c^{<\alpha}\circ B(y_{i+1})\), and therefore \(c^{<\alpha}\circ B(y)=c^{<\alpha}\circ B(y^{\prime})\). We can now define a monograph \(Q\) by taking \(\mathsf{E}Q=\mathsf{E}B/\!\sim\) with \(Q(c(y))\stackrel{{\mbox{\tiny def}}}{{=}}c^{<\alpha}\circ B(y)\), so that \(c:B\to Q\) is a morphism. Since \(c\) is surjective then \(\operatorname{tr}(Q)=\operatorname{tr}(B)\) and if \(B\) is finite (resp. standard) then so is \(Q\). Let \(d:B\to D\) such that \(d\circ f=d\circ g\), there exists a unique function \(h\) from \(\mathsf{E}Q\) to \(\mathsf{E}D\) such that \(d=h\circ c\), and \(h:Q\to D\) is a morphism since for all \(y\in\mathsf{E}B\), \[D\circ h(c(y))=D\circ d(y)=d^{<\alpha}\circ B(y)=h^{<\alpha}\circ c^{<\alpha} \circ B(y)=h^{<\alpha}\circ Q(c(y)).\] **Corollary 4.3**.: _The epimorphisms in \(\mathbf{Monogr}\) are the surjective morphisms._ Proof.: Assume \(f:A\to B\) is an epimorphism. Let \((B+B,\mu_{1},\mu_{2})\) be a coproduct of \((B,B)\) and \((Q,c)\) be the coequalizer of \(\mu_{1}\circ f,\mu_{2}\circ f:A\to B+B\) constructed in the proof of Lemma 4.2, then \(c\circ\mu_{1}\circ f=c\circ\mu_{2}\circ f\), hence \(c\circ\mu_{1}=c\circ\mu_{2}\). 
For all \(y\in\mathsf{E}B\) we thus have \(\mu_{1}(y)\sim\mu_{2}(y)\), and since \(\mu_{1}(y)\neq\mu_{2}(y)\) then \(\mu_{1}(y)\) must be related by \(R\) to some element of \(\mathsf{E}(B+B)\), hence there is an \(x\in\mathsf{E}A\) such that \(\mu_{1}(y)=\mu_{1}\circ f(x)\), thus \(y=f(x)\) since \(\mu_{1}\) is injective; this proves that \(f\) is surjective. The converse is obvious. A well-known consequence of Lemmas 4.1, 4.2 and that \(\varnothing\) is the initial monograph is that all finite diagrams have colimits. **Theorem 4.4**.: _The categories of Definition 3.5 are finitely co-complete._ We next investigate the limits in categories of monographs. Products of monographs are more difficult to build than products of graphs. This is due to the fact that edges of identical length may be adjacent to edges of different lengths. **Lemma 4.5**.: _Every pair \((A,B)\) of monographs has a product \((A\times B,\pi_{1}^{\prime},\pi_{2}^{\prime})\) such that \(A\times B\) is finite whenever \(A\) and \(B\) are finite._ Proof.: Let \(\alpha\) be an ordinal for \(A\) and \(B\), let \((\mathsf{E}A\times\mathsf{E}B,\pi_{1},\pi_{2})\) be the product of \((\mathsf{E}A,\mathsf{E}B)\) in \(\mathbf{Sets}\), we consider the set of subsets \(H\) of \(\{(x,y)\in\mathsf{E}A\times\mathsf{E}B\mid|x|=|y|\}\) such that \((x,y)\in H\) entails \((x_{\iota},y_{\iota})\in H\) for all \(\iota<|x|\). This set contains \(\varnothing\) and is closed under union, hence it has a greatest element \(\mathsf{E}P\), and we let \(P(x,y)\stackrel{{\mbox{\tiny def}}}{{=}}\langle A(x),B(y)\rangle\) for all \((x,y)\in\mathsf{E}P\); this is obviously an \(\mathsf{E}P\)-sequence, hence \(P\) is a monograph. Let \(\pi_{1}^{\prime}\stackrel{{\mbox{\tiny def}}}{{=}}\pi_{1}|_{ \mathsf{E}P}\) and \(\pi_{2}^{\prime}\stackrel{{\mbox{\tiny def}}}{{=}}\pi_{2}|_{ \mathsf{E}P}\), we have \[\pi_{1}^{\prime<\alpha}\circ P(x,y)=A(x)=A\circ\pi_{1}^{\prime}(x,y)\] for all \((x,y)\in\mathsf{E}P\), hence \(\pi_{1}^{\prime}:P\to A\) and similarly \(\pi_{2}^{\prime}:P\to B\) are morphisms. Let \(f:C\to A\) and \(g:C\to B\), then \(\langle f,g\rangle:\mathsf{E}C\to\mathsf{E}A\times\mathsf{E}B\) and for all \(z\in\mathsf{E}C\) we have \(|f(z)|=|z|=|g(z)|\) hence \(\langle f,g\rangle[\mathsf{E}C]\subseteq\{(x,y)\in\mathsf{E}A\times\mathsf{E }B\mid|x|=|y|\}\). Assume that \((x,y)\in\langle f,g\rangle[\mathsf{E}C]\), then there exists a \(z\in\mathsf{E}C\) such that \(x=f(z)\) and \(y=g(z)\), hence \(|x|=|y|\), \(f(z_{\iota})=f(z)_{\iota}=x_{i}\) and \(g(z_{\iota})=g(z)_{\iota}=y_{\iota}\) for all \(\iota<|x|\), hence \((x_{\iota},y_{\iota})\in\langle f,g\rangle[\mathsf{E}C]\). Thus \(\langle f,g\rangle[\mathsf{E}C]\subseteq\mathsf{E}P\) and we let \(h\stackrel{{\text{\tiny def}}}{{=}}\langle f,g\rangle[\mathsf{E }P]\), then \(h\) is the unique function such that \(\pi^{\prime}_{1}\circ h=f\) and \(\pi^{\prime}_{2}\circ h=g\), and \(h:C\to P\) is a morphism since for all \(z\in\mathsf{E}C\), \[P\circ h(z) = P(f(z),g(z))\] \[= \langle A\circ f(z),B\circ g(z)\rangle\] \[= \langle f^{<\alpha}\circ C(z),g^{<\alpha}\circ C(z)\rangle\] \[= h^{<\alpha}\circ C(z).\] We therefore see that \(\mathsf{E}(A\times B)\) is only a subset of \(\mathsf{E}A\times\mathsf{E}B\). 
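The construction of \(\mathsf{E}(A\times B)\) in the proof of Lemma 4.5 is effective for finite \(\omega\)-monographs: starting from all length-compatible pairs, one removes pairs whose adjacencies are not themselves pairs until a (greatest) fixed point is reached. The following sketch, reusing the dictionary encoding of the earlier snippets, is only meant to make that fixed-point computation explicit.

```python
def product(A, B):
    """Greatest set of pairs (x, y) with |x| = |y| closed under adjacency,
    together with the induced adjacencies, i.e. the product monograph A x B."""
    P = {(x, y) for x in A for y in B if len(A[x]) == len(B[y])}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(P):
            if any((a, b) not in P for a, b in zip(A[x], B[y])):
                P.discard((x, y))
                changed = True
    return {(x, y): tuple(zip(A[x], B[y])) for (x, y) in P}

# Example: the product of the running example A with itself.
A = {"x": ("x", "y", "x"), "y": ("y", "x", "y")}
P = product(A, A)
assert set(P) == {("x", "x"), ("x", "y"), ("y", "x"), ("y", "y")}
```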
**Lemma 4.6**.: _Every pair of parallel morphisms \(f,g:A\to B\) has an equalizer \((E,e)\) such that \(E\) is finite whenever \(A\) is finite._ Proof.: Let \(\alpha\) be an ordinal for \(A\), \(\mathsf{E}E\stackrel{{\text{\tiny def}}}{{=}}\{x\in\mathsf{E}A \mid f(x)=g(x)\}\), \(e:\mathsf{E}E\hookrightarrow\mathsf{E}A\) be the canonical injection and \(E(x)\stackrel{{\text{\tiny def}}}{{=}}A(x)\) for all \(x\in\mathsf{E}E\). Since \[f^{<\alpha}\circ A(x)=B\circ f(x)=B\circ g(x)=g^{<\alpha}\circ A(x)\] then \(E(x)\) is an \(\mathsf{E}E\)-sequence, hence \(E\) is a monograph. Besides \(e^{<\alpha}\circ E(x)=A(x)=A\circ e(x)\), hence \(e:E\to A\) is a morphism such that \(f\circ e=g\circ e\). For any \(d:D\to A\) such that \(f\circ d=g\circ d\), we have \(d(y)\in\mathsf{E}E\) for all \(y\in\mathsf{E}D\), hence \(h\stackrel{{\text{\tiny def}}}{{=}}d|_{\mathsf{E}D}^{\mathsf{E}E}\) is the unique function such that \(d=e\circ h\). We have \[e^{<\alpha}\circ h^{<\alpha}\circ D=d^{<\alpha}\circ D=A\circ d=A\circ e\circ h =e^{<\alpha}\circ E\circ h\] and \(e^{<\alpha}:(\mathsf{E}E)^{<\alpha}\hookrightarrow(\mathsf{E}A)^{<\alpha}\) is the canonical injection, hence \(h^{<\alpha}\circ D=E\circ h\) and \(h:D\to E\) is a morphism. **Corollary 4.7**.: _The monomorphisms in \(\mathbf{Monogr}\) are the injective morphisms._ Proof.: Assume \(f:A\to B\) is a monomorphism. Let \((A\times A,\pi_{1},\pi_{2})\) be a product of \((A,A)\) and \((E,e)\) be the equalizer of \(f\circ\pi_{1},f\circ\pi_{2}:A\times A\to B\) constructed in the proof of Lemma 4.6, then \(f\circ\pi_{1}\circ e=f\circ\pi_{2}\circ e\), hence \(\pi_{1}\circ e=\pi_{2}\circ e\). For all \(x,y\in\mathsf{E}A\), if \(f(x)=f(y)\) then \(f\circ\pi_{1}(x,y)=f\circ\pi_{2}(x,y)\) hence \((x,y)\in\mathsf{E}E\) and therefore \(x=\pi_{1}\circ e(x,y)=\pi_{2}\circ e(x,y)=y\), hence \(f\) is injective. The converse is obvious. A well-known consequence of Lemmas 4.5 and 4.6 is that all non-empty finite diagrams in **Monogr** have limits. Since a limit of \(O\)-monographs (resp. standard monographs) is an \(O\)-monograph (resp. standard), this holds for all categories of Definition 3.5. In particular they all have pullbacks. We shall now investigate the limits of the empty diagram in these categories, i.e., their possible terminal objects. **Definition 4.8**.: _For any set of ordinals \(O\), let_ \[\mathrm{T}_{O}=\left\{\begin{array}{ll}\{(\lambda,0\mathord{\uparrow} \lambda)\mid\lambda\in O\}&\mbox{if $0\in O$}\\ \varnothing&\mbox{otherwise.}\end{array}\right.\] If \(0\in O\) then \(0\) is a node of \(\mathrm{T}_{O}\) and obviously \(\mathsf{ET}_{O}=\mathrm{tr}(\mathrm{T}_{O})=O\). Hence in all cases \(\mathrm{T}_{O}\) is a standard \(O\)-monograph. **Lemma 4.9**.: \(\mathrm{T}_{O}\) _is terminal in \(O\)-\(\mathbf{SMonogr}\)._ Proof.: If \(0\notin O\) then \(\varnothing=\mathrm{T}_{O}\) is the only standard \(O\)-monograph, hence it is terminal. Otherwise let \(A\) be any standard \(O\)-monograph, \(\alpha\) an ordinal for \(A\) and \(\ell:\mathsf{E}A\to O\) be the function that maps every edge \(x\in\mathsf{E}A\) to its length \(|x|\). Since \(A\) is standard then \((\ell^{<\alpha}\circ A(x))_{\iota}=|A(x)_{\iota}|=0\) for all \(\iota<|x|\), hence \(\ell^{<\alpha}\circ A(x)=0\mathord{\uparrow}|x|=\mathrm{T}_{O}\circ\ell(x)\), so that \(\ell:A\to\mathrm{T}_{O}\) is a morphism. Since morphisms preserve the length of edges and there is exactly one edge of each length in \(\mathrm{T}_{O}\), then \(\ell\) is unique. 
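For finite sets of finite ordinals, the terminal object \(\mathrm{T}_{O}\) and the unique morphism \(\ell\) of Lemma 4.9 can be written down explicitly; the snippet below (in the same illustrative dictionary encoding) does so for \(O=\{0,2\}\).

```python
def T(O):
    """The standard O-monograph T_O: one edge per length, each adjacent only to the node 0."""
    return {lam: (0,) * lam for lam in O} if 0 in O else {}

def length_morphism(A):
    """The unique morphism from a standard O-monograph A to T_O: map every edge to its length."""
    return {x: len(A[x]) for x in A}

T_02 = T({0, 2})                              # {0: (), 2: (0, 0)}, the terminal standard {0,2}-monograph
G = {"n1": (), "n2": (), "e": ("n1", "n2")}   # a standard {0,2}-monograph, i.e. a graph
ell = length_morphism(G)
assert all(tuple(ell[a] for a in G[x]) == T_02[ell[x]] for x in G)   # ell is indeed a morphism
```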
We now use the fact that every ordinal is a set of ordinals. **Lemma 4.10**.: _For any monograph \(T\) and morphism \(f:\mathrm{T}_{\alpha}\to T\), any ordinal for \(T\) is equal to or greater than \(\alpha\)._ Proof.: Let \(\beta\) be an ordinal for \(T\), then by the existence of \(f\) we have \(\alpha=\mathrm{tr}(\mathrm{T}_{\alpha})\subseteq\mathrm{tr}(T)\subseteq\beta\), hence \(\alpha\leq\beta\). **Lemma 4.11**.: **Monogr**_, \(\mathbf{SMonogr}\) and \(\mathbf{FMonogr}\) have no terminal object._ Proof.: Suppose that \(T\) is a terminal monograph, then there is an ordinal \(\beta\) for \(T\) and there is a morphism from \(\mathrm{T}_{\beta+1}\) to \(T\); by Lemma 4.10 this implies that \(\beta+1\leq\beta\), a contradiction. This still holds if \(T\) is standard since \(\mathrm{T}_{\beta+1}\) is standard. And it also holds if \(T\) is a finite \(\omega\)-monograph, since then \(\beta\) can be chosen finite, and then \(\mathrm{T}_{\beta+1}\) is also a finite \(\omega\)-monograph. Since terminal objects are limits of empty diagrams obviously these categories are not finitely complete. **Theorem 4.12**.: \(O\)_-\(\mathbf{SMonogr}\) is finitely complete for every set of ordinals \(O\). The categories \(\mathbf{Monogr}\), \(\mathbf{SMonogr}\) and \(\mathbf{FMonogr}\) are not finitely complete._ Proof.: By Lemmas 4.5, 4.6, 4.9 and 4.11. The category \(\mathbf{Graphs}\) is also known to be adhesive, a property of pushouts and pullbacks that has important consequences on algebraic transformations (see [8]) and that we shall therefore investigate. **Definition 4.13** (van Kampen squares, adhesive categories).: _A pushout square \((A,B,C,D)\) is a van Kampen square if for any commutative cube_ _where the back faces \((A^{\prime},A,B^{\prime},B)\) and \((A^{\prime},A,C^{\prime},C)\) are pullbacks, it is the case that the top face \((A^{\prime},B^{\prime},C^{\prime},D^{\prime})\) is a pushout iff the front faces \((B^{\prime},B,D^{\prime},D)\) and \((C^{\prime},C,D^{\prime},D)\) are both pullbacks._ _A category has pushouts along monomorphisms if all sources \((A,f,g)\) have pushouts whenever \(f\) or \(g\) is a monomorphism._ _A category is adhesive if it has pullbacks, pushouts along monomorphisms and all such pushouts are van Kampen squares._ As in the proof that **Graphs** is adhesive, we will use the fact that the category **Sets** is adhesive. **Lemma 4.14**.: \(\mathsf{E}\) _reflects isomorphisms._ Proof.: Let \(f:A\to B\) such that \(f\) is bijective, then it has an inverse \(f^{-1}:\mathsf{E}B\to\mathsf{E}A\). For all \(y\in\mathsf{E}B\) and all \(\iota<|y|\), let \(x=f^{-1}(y)\), we have \[f^{-1}(y_{\iota})=f^{-1}(f(x)_{\iota})=f^{-1}(f(x_{\iota}))=x_{\iota}=f^{-1}( y)_{\iota}\] hence \(f^{-1}:B\to A\) is a morphism, and \(f\) is therefore an isomorphism. A side consequence is that **Monogr** is balanced, i.e., if \(f\) is both a monomorphism and an epimorphism, then by Corollaries 4.3 and 4.7\(f\) is bijective, hence is an isomorphism. More important is that we can use [7, Theorem 24.7], i.e., that a faithful and isomorphism reflecting functor from a category that has some limits or colimits and preserves them, also reflects them. **Lemma 4.15**.: \(\mathsf{E}\) _preserves and reflects finite colimits._ Proof.: It is easy to see from the proofs of Lemmas 4.1 and 4.2 that \(\mathsf{E}\) preserves both coproducts and coequalizers, so that \(\mathsf{E}\) preserves all finite co-limits and hence also reflects them. This is particularly true for pushouts. 
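Since colimits of monographs are computed exactly as in \(\mathbf{Sets}\) (Lemmas 4.1 and 4.2), a pushout of a span \(B\xleftarrow{f}A\xrightarrow{g}C\) of finite \(\omega\)-monographs can be obtained as a coproduct followed by a coequalizer. The sketch below does this concretely, with the usual caveat that the dictionary encoding and the naming of the glued edges are illustrative choices only.

```python
def pushout(A, B, C, f, g):
    """Pushout of B <-f- A -g-> C: disjoint union of B and C, then identify f(a) ~ g(a)."""
    # Disjoint union of the edge sets, tagged by origin.
    D = {("B", x): tuple(("B", e) for e in B[x]) for x in B}
    D.update({("C", y): tuple(("C", e) for e in C[y]) for y in C})

    # Union-find over the equivalence generated by f(a) ~ g(a).
    parent = {e: e for e in D}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    for a in A:
        ra, rb = find(("B", f[a])), find(("C", g[a]))
        if ra != rb:
            parent[ra] = rb

    # Quotient monograph: equivalence classes as edges, adjacencies taken modulo the quotient.
    return {find(e): tuple(find(x) for x in adj) for e, adj in D.items()}

# Gluing two edges along a shared node.
A = {"n": ()}
B = {"n": (), "e1": ("n", "n")}
C = {"n": (), "e2": ("n", "n")}
D = pushout(A, B, C, {"n": "n"}, {"n": "n"})
assert len(D) == 3        # one shared node and the two edges e1, e2
```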
The situation for pullbacks is more complicated since \(\mathsf{E}\) does not preserve products. **Lemma 4.16**.: \(\mathsf{E}\) _preserves and reflects pullbacks._ Proof.: We first prove that \(\mathsf{E}\) preserves pullbacks. Let \(f:A\to C\), \(g:B\to C\) and \(\alpha\) be an ordinal for \(A\) and \(B\), we assume w.l.o.g. a canonical pullback \((E,h,k)\) of \((f,g,C)\), i.e., let \((A\times B,\pi_{1}^{\prime},\pi_{2}^{\prime})\) be the product of \((A,B)\) and \((E,e)\) be the equalizer of \((f\circ\pi_{1}^{\prime},g\circ\pi_{2}^{\prime})\) with \(h=\pi_{1}^{\prime}\circ e\) and \(k=\pi_{2}^{\prime}\circ e\). Let \((\mathsf{E}A\times\mathsf{E}B,\pi_{1},\pi_{2})\) be the product of \((\mathsf{E}A,\mathsf{E}B)\) in **Sets**, we have by the proof of Lemma 4.5 that \(\mathsf{E}(A\times B)\subseteq\mathsf{E}A\times\mathsf{E}B\), \(\pi_{1}^{\prime}=\pi_{1}|_{\mathsf{E}(A\times B)}\) and \(\pi_{2}^{\prime}=\pi_{2}|_{\mathsf{E}(A\times B)}\). Let \(H\stackrel{{\text{\tiny def}}}{{=}}\{(x,y)\in\mathsf{E}A\times \mathsf{E}B\mid f(x)=g(y)\}\) and \(j:H\hookrightarrow\mathsf{E}A\times\mathsf{E}B\) be the canonical injection. By canonical construction \((H,\pi_{1}\circ j,\pi_{2}\circ j)\) is a pullback of \((f,g,\mathsf{E}C)\) in **Sets**; we next prove that it is the image by \(\mathsf{E}\) of the pullback \((E,h,k)\) of \((f,g,C)\) in **Monogr**. By the construction of \(E\) in Lemma 4.6 we have \(\mathsf{E}E=\{(x,y)\in\mathsf{E}(A\times B)\mid f(x)=g(y)\}\subseteq H\) and \(e:\mathsf{E}E\hookrightarrow\mathsf{E}(A\times B)\) is the canonical injection. For all \((x,y)\in H\) we have \(|x|=|f(x)|=|g(y)|=|y|\), and for all \(\iota<|x|\) we have \(f(x_{\iota})=f(x)_{\iota}=g(y)_{\iota}=g(y_{\iota})\) so that \((x_{\iota},y_{\iota})\in H\) and therefore \(H\subseteq\mathsf{E}(A\times B)\) by the construction of \(A\times B\) in Lemma 4.5. We thus have \(H=\mathsf{E}E\) hence \(\pi_{1}\circ j=\pi_{1}^{\prime}\circ e=h\) and \(\pi_{2}\circ j=\pi_{2}^{\prime}\circ e=k\), so that \(\mathsf{E}\) preserves pullbacks and hence as above \(\mathsf{E}\) also reflects them. **Theorem 4.17**.: _The categories of Definition 3.5 are adhesive._ Proof.: The existence of pullbacks and pushouts is already established. In any of these categories a commutative cube built on a pushout along a monomorphism as bottom face and with pullbacks as back faces, has an underlying cube in **Sets** that has the same properties by Corollary 4.7, Lemmas 4.15 and 4.16. Since **Sets** is an adhesive category (see [8]) the underlying bottom face is a van Kampen square, hence such is the bottom face of the initial cube by Lemmas 4.15 and 4.16. ## 5 Drawing Monographs Obviously we may endeavour to draw a monograph \(A\) only if \(\mathsf{E}A\) is finite and if its edges have finite lengths, i.e., if \(A\) is a finite \(\omega\)-monograph. If we require that any monograph \(\mathsf{M}G\) should be drawn as the graph \(G\), then a node should be represented by a bullet \(\bullet\) and an edge of length \(2\) by an arrow joining its two adjacent nodes. But generally the adjacent edges may not be nodes and there might be more than \(2\) of them, hence we adopt the following convention: an edge \(e\) of length at least \(2\) is represented as a sequence of connected arrows with an increasing number of tips (where \(A(e)=x_{0}x_{1}x_{2}x_{3}\cdots\)) and such that any arrow should enter \(x_{i}\) at the same angle as the next arrow leaves \(x_{i}\). 
For the sake of clarity we represent symmetric adjacencies by a pair of crossings rather than a single one, e.g., if \(A(e)=xe^{\prime}y\) and \(A(e^{\prime})=xey\), where \(x\) and \(y\) are nodes, the drawing may be but not It is sometimes necessary to name the edges in a drawing. We may then adopt the convention sometimes used for drawing diagrams in a category: the bullets are replaced by the names of the corresponding nodes, and arrows are interrupted to write their name at a place free from crossing, as in Note that no confusion is possible between the names of nodes and those of other edges, e.g., in it is clear that \(x\) and \(z\) are nodes since arrow tips point to them, and that \(y\) is the name of an edge of length \(3\). As is the case of graphs, monographs may not be planar and drawing them may require crossing edges that are not adjacent; in this case no arrow tip is present at the crossing and no confusion is possible with the adjacency crossings. However, it may seem preferable in such cases to erase one arrow in the proximity of the other, as in There remains to represent the edges of length \(1\). Since \(A(e)=x\) is standardly written \(A:e\mapsto x\), the edge \(e\) will be drawn as In order to avoid confusion there should be only one arrow out of the thick dash, e.g., if \(A(e)=e^{\prime}\) and \(A(e^{\prime})=ex\) where \(x\) is a node, the drawing may be since this last drawing may be interpreted as the monograph \(A(e^{\prime})=x\) and \(A(e)=e^{\prime}e^{\prime}\), that is not isomorphic to the intended monograph. Other conventions may be more appropriate depending on the context or on specific monographs. Consider for instance a monograph with one node \(x\) and two edges \(x\mathord{\uparrow}3\) and \(x\mathord{\uparrow}4\). The concentration of many arrow tips on a single bullet would make things confused unless it is sufficiently large. One possibility is to replace the bullet by a circle and treat it as a standard edge without tips. This monograph could then be drawn as These conventions are designed so that it is only possible to read a drawing of any finite \(\omega\)-monograph \(A\) as the monograph \(A\) itself if all edges are named in the drawing, or as some monograph isomorphic to \(A\) otherwise. This would not be possible if a monograph \(A\) was a function rather than a functional relation, since then its codomain \((\mathsf{E}A)^{<\alpha}\) would not be pictured. It would of course be possible to add the ordinal \(\alpha\) to the drawing, but then would it still qualify as a drawing? Note that the drawing of a graph or of a standard \(\{0,2\}\)-monograph can be read either as a graph \(G\) or as a monograph \(A\), and then \(\mathsf{M}G\simeq A\). One particularity of monographs is that edges can be adjacent to themselves, as in We may also draw typed monographs, then every edge \(e\in\mathsf{E}A\) has a type \(a(e)\) that can be written at the proximity of \(e\). For instance, a monograph typed by \(T=\{(u,\,v),\,(v,\,u)\}\) is drawn with labels \(u\) and \(v\) as in Of course, knowing that \(a\) is a morphism sometimes allows to deduce the type of an edge, possibly from the types of adjacent edges. In the present case, indicating a single type would have been enough to deduce all the others. In particular applications it may be convenient to adopt completely different ways of drawing (typed) monographs. 
**Example 5.1**.: _In [9] term graphs are defined from structures \((V,E,lab,att)\) where \(V\) is a set of nodes, \(E\) a set of hyperedges, \(att:E\to V^{<\omega}\) defines the adjacencies and \(lab:E\to\Omega\) such that \(|att(e)|\) is 1 plus the arity of \(lab(e)\) for all \(e\in E\) (for the sake of simplicity, we consider only ground terms of a signature \(\Sigma:\Omega\to S^{<\omega}\) such that \(\Omega\cap S=\varnothing\)). The first element of the sequence \(att(e)\) is considered as the result node of \(e\) and the others as its argument nodes, so that \(e\) determines paths from its result node to all its argument nodes._ Term graphs _are those structures such that paths do not cycle, every node is reachable from a root node and is the result node of a unique hyperedge. This definition is given for unsorted signatures but can easily be generalized, as we do now._

_We consider the type monograph \(\mathrm{T}_{\Sigma}\) defined by \(\mathsf{ET}_{\Sigma}\stackrel{{\mathrm{\tiny def}}}{{=}}S\cup\Omega\), and_ \[\mathrm{T}_{\Sigma}(s) \stackrel{{\mathrm{\tiny def}}}{{=}}\varepsilon\text{ for all }s\in S,\] \[\mathrm{T}_{\Sigma}(o) \stackrel{{\mathrm{\tiny def}}}{{=}}\Sigma(o)\text{ for all }o\in\Omega.\] _Note that \(\mathrm{T}_{\Sigma}\) is a standard \(\omega\)-monograph, and indeed that any standard \(\omega\)-monograph has this form for a suitable \(\Sigma\)._

_Any typed monograph \(a:A\to\mathrm{T}_{\Sigma}\) corresponds to a structure \((V,E,lab,att)\) where \(V=\mathrm{N}_{A}\), \(E=\mathsf{E}A\backslash\mathrm{N}_{A}\), \(lab(e)=a(e)\) and \(att(e)=A(e)\) for all \(e\in E\). The only difference (due to our definition of signatures) is that the result node of \(e\) is now the last node of the sequence \(A(e)\)._

_We now consider the signature \(\Sigma\) with two sorts \(\mathsf{s}\), \(\mathsf{s}^{\prime}\), a binary function symbol \(\mathsf{f}\) with \(\Sigma(\mathsf{f})=\mathsf{s}^{\prime}\,\mathsf{s}^{\prime}\,\mathsf{s}\) and a constant symbol \(\mathsf{c}\) with \(\Sigma(\mathsf{c})=\mathsf{s}^{\prime}\). We represent the term graph \(\mathsf{f}(\mathsf{c},\mathsf{c})\), where the two occurrences of \(\mathsf{c}\) are shared, as a typed monograph \(a:A\to\mathrm{T}_{\Sigma}\). We need two edges \(e\), \(e^{\prime}\) and their result nodes \(x\), \(x^{\prime}\), the first for \(\mathsf{f}\) and the second for \(\mathsf{c}\). Thus \(A\) is defined by_ \[\mathsf{E}A=\{x,\,x^{\prime},\,e,\,e^{\prime}\},\ A(x)=A(x^{\prime})= \varepsilon,\ A(e)=x^{\prime}\,x^{\prime}\,x\text{ and }A(e^{\prime})=x^{\prime}.\] _The typing morphism \(a:A\to\mathrm{T}_{\Sigma}\) is given by_ \[a(x)=\mathsf{s},\ a(x^{\prime})=\mathsf{s}^{\prime},\ a(e)=\mathsf{f}\text{ and }a(e^{\prime})=\mathsf{c}.\] _We give below the standard drawing of the monograph \(A\) typed by \(a\) and the (clearly preferable) standard depiction of the corresponding term graph._
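As a quick cross-check of this example, the small script below transcribes the monograph \(A\), its typing morphism \(a\), and the type monograph \(\mathrm{T}_{\Sigma}\) as plain dictionaries and verifies the morphism condition. The dictionary encoding is, as before, only an illustration of the data involved, not part of the formal development.

```python
# Monographs as dicts mapping each edge to the tuple of its adjacent edges.
T_sigma = {"s": (), "s'": (), "f": ("s'", "s'", "s"), "c": ("s'",)}
A = {"x": (), "x'": (), "e": ("x'", "x'", "x"), "e'": ("x'",)}

# The typing morphism a : A -> T_sigma.
a = {"x": "s", "x'": "s'", "e": "f", "e'": "c"}

def is_morphism(f, A, B):
    """Check |f(x)| = |x| and f(x_i) = f(x)_i for every edge x of A."""
    return all(
        len(B[f[x]]) == len(A[x])
        and all(f[A[x][i]] == B[f[x]][i] for i in range(len(A[x])))
        for x in A
    )

assert is_morphism(a, A, T_sigma)   # a is indeed a typing of A by T_sigma
```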
## 6 Graph Structures and Typed Monographs

The procedure of reading the drawing of a graph as a \(\Gamma_{\mathrm{g}}\)-algebra \(\mathcal{G}\), where \(\Gamma_{\mathrm{g}}\) is the signature of graphs given in Section 1, is rather simple: every bullet is interpreted as an element of \(\mathcal{G}_{\mathtt{nodes}}\), every arrow as an element of \(\mathcal{G}_{\mathtt{edges}}\) and the images of this element by the functions \(\mathtt{src}^{\mathcal{G}}\) and \(\mathtt{tgt}^{\mathcal{G}}\) are defined according to geometric proximity in the drawing. A procedure for reading E-graphs would be similar, except that bullets may be interpreted either as \(\mathtt{nodes}\) or \(\mathtt{values}\), and this typing information should therefore be indicated in the drawing. Since the drawing of a graph is nothing else than the drawing of a standard \(\{0,2\}\)-monograph, we may skip the drawing step and directly transform a standard \(\{0,2\}\)-monograph \(A\) into a \(\Gamma_{\mathrm{g}}\)-algebra \(\mathcal{G}\). Then \[\mathcal{G}_{\mathtt{nodes}}=\mathrm{N}_{A},\ \mathcal{G}_{\mathtt{edges}}= \{x\in\mathsf{E}A\mid|x|=2\},\ \mathtt{src}^{\mathcal{G}}(x)=x_{0}\ \text{and}\ \mathtt{tgt}^{\mathcal{G}}(x)=x_{1}\] for all \(x\in\mathcal{G}_{\mathtt{edges}}\). Thus every node of \(A\) is typed by \(\mathtt{nodes}\) and all other edges are typed by \(\mathtt{edges}\). This typing is obviously a morphism from \(A\) to the monograph \(\{(\mathtt{nodes},\ \varepsilon),\ (\mathtt{edges},\mathtt{nodes}\,\mathtt{nodes})\}\) that is isomorphic to the terminal object of \(\{0,2\}\)-**SMonogr** (see Lemma 4.9). More generally, for any given graph structure \(\Gamma\) we may ask which monographs, equipped with a suitable morphism to a type monograph \(T\), can be interpreted in this way as \(\Gamma\)-algebras. As above, the edges of \(T\) should be the sorts of \(\Gamma\). But this is not sufficient since there is no canonical way of linking adjacencies in \(T\) (such as \(\mathtt{edges}_{0}=\mathtt{nodes}\) and \(\mathtt{edges}_{1}=\mathtt{nodes}\)) with the operator names of \(\Gamma\) (such as \(\mathtt{src}\) and \(\mathtt{tgt}\)). We will therefore use a notion of morphism between signatures in order to rename operators, and we also rename sorts in order to account for functoriality in \(T\).

**Definition 6.1** (categories **Sig**, **GrStruct**, **Sig\({}_{\mathrm{srt}}\)**).: _A morphism \(r\) from \(\Sigma:\Omega\to S^{<\omega}\) to \(\Sigma^{\prime}:\Omega^{\prime}\to S^{\prime<\omega}\) is a pair \((r_{\mathrm{opn}},r_{\mathrm{srt}})\) of functions \(r_{\mathrm{opn}}:\Omega\to\Omega^{\prime}\) and \(r_{\mathrm{srt}}:S\to S^{\prime}\) such that_ \[r_{\mathrm{srt}}^{<\omega}\circ\Sigma=\Sigma^{\prime}\circ r_{\mathrm{opn}}.\] _For any morphism \(r^{\prime}:\Sigma^{\prime}\to\Sigma^{\prime\prime}\) let \(r^{\prime}\circ r\stackrel{{\mathrm{def}}}{{=}}(r^{\prime}_{ \mathrm{opn}}\circ r_{\mathrm{opn}},r^{\prime}_{\mathrm{srt}}\circ r_{\mathrm{ srt}}):\Sigma\to\Sigma^{\prime\prime}\), \(1_{\Sigma}\stackrel{{\mathrm{def}}}{{=}}(\mathrm{Id}_{\Omega}, \mathrm{Id}_{S})\), and **Sig** be the category of signatures and their morphisms. Let **GrStruct** be the full subcategory of graph structures._ _Let \(\mathbf{Sig}_{\mathrm{srt}}\) be the subcategory of \(\mathbf{Sig}\) restricted to morphisms of the form \((r_{\mathrm{opn}},j)\) where \(j\) is a canonical injection. We write \(\dot{\simeq}\) for the isomorphism relation between objects in \(\mathbf{Sig}_{\mathrm{srt}}\)._

The question is therefore to elucidate the link between \(T\) and \(\Gamma\). As explained above, the edges of \(T\) correspond to the sorts of \(\Gamma\). We also see that every adjacency in \(T\) corresponds to an operator name in \(\Gamma\), e.g., an edge \(e\) of length \(2\) adjacent to \(e_{0}\) and \(e_{1}\) (i.e. such that \(T(e)=e_{0}\,e_{1}\)) corresponds to two operator names, say \(\mathtt{src}_{e}\) and \(\mathtt{tgt}_{e}\), of domain sort \(e\) and range sort \(e_{0}\) and \(e_{1}\) respectively.
Since edges may have length greater than \(2\), we create canonical operator names of the form [\(e\cdot\iota\)] for the \(\iota^{\mathrm{th}}\) adjacency of the edge \(e\) for every \(\iota<|e|\) (hence we favor [\(e\cdot\)0] and [\(e\cdot\)1] over \(\mathtt{src}_{e}\) and \(\mathtt{tgt}_{e}\)).

**Definition 6.2** (functor \(\mathsf{S}:\mathbf{Monogr}\to\mathbf{GrStruct}\)).: _To every monograph \(T\) we associate the set of operator names \(\Omega_{T}\stackrel{\text{\tiny def}}{=}\{[e\cdot\iota]\mid e\in\mathsf{E}T,\ \iota<|e|\}\) and the graph structure \(\mathsf{S}T:\Omega_{T}\to(\mathsf{E}T)^{<\omega}\) defined by \(\mathsf{S}T([e\cdot\iota])\stackrel{\text{\tiny def}}{=}e\,T(e)_{\iota}\), i.e., the operator name \([e\cdot\iota]\) has domain sort \(e\) and range sort \(e_{\iota}\). To every morphism \(f:T\to T^{\prime}\) we associate the morphism \(\mathsf{S}f\stackrel{\text{\tiny def}}{=}(f_{\mathrm{opn}},f):\mathsf{S}T\to\mathsf{S}T^{\prime}\) where \(f_{\mathrm{opn}}([e\cdot\iota])\stackrel{\text{\tiny def}}{=}[f(e)\cdot\iota]\) for all \([e\cdot\iota]\in\Omega_{T}\)._

Conversely, for every graph structure \(\Gamma:\Omega\to S^{<\omega}\) and every sort \(s\in S\), we write \(\Omega_{s}\) for the set of operator names \(o\in\Omega\) such that \(\operatorname{Dom}(o)=s\), and \(\lambda_{s}\) for the cardinal of \(\Omega_{s}\).

**Lemma 6.3**.: _For every graph structure \(\Gamma:\Omega\to S^{<\omega}\) there is a monograph \(T\) with \(\mathsf{E}T=S\) such that \(\mathsf{S}T\doteq\Gamma\): choose for every \(s\in S\) a bijection \(\nu_{s}:\lambda_{s}\to\Omega_{s}\) and let \(T(s)\) be the \(S\)-sequence of length \(\lambda_{s}\) defined by \(T(s)_{\iota}\stackrel{\text{\tiny def}}{=}\operatorname{Rng}(\nu_{s}(\iota))\) for all \(\iota<\lambda_{s}\)._

The reason why monographs require edges of ordinal length now
becomes apparent: the length of an edge \(s\) is the cardinality of \(\Omega_{s}\), i.e., the number of operator names whose domain sort is \(s\), and no restriction on this cardinality is imposed on graph structures. The bijections \(\nu_{s}\) provide linear orderings of the sets \(\Omega_{s}\). Since \(T(s)\) depends on \(\nu_{s}\), the monograph \(T\) such that \(\mathsf{S}T\doteq\Gamma\) may not be unique, even though \(\mathsf{S}\) is injective on objects, as we now show.

**Theorem 6.4**.: \(\mathsf{S}\) _is an isomorphism-dense embedding of_ **Monogr** _into_ **GrStruct**_._

Proof.: It is trivial by Lemma 6.3 that \(\mathsf{S}\) is isomorphism-dense since \(\mathsf{S}T\doteq\Gamma\) entails \(\mathsf{S}T\simeq\Gamma\). Assume that \(\mathsf{S}T=\mathsf{S}T^{\prime}\), then \(\mathsf{E}T=\mathsf{E}T^{\prime}\) and \(\Omega_{T}=\Omega_{T^{\prime}}\), hence \(|T(e)|=|T^{\prime}(e)|\) for all \(e\in\mathsf{E}T\), and \(T(e)_{\iota}=(\mathsf{S}T([e\cdot\iota]))_{1}=(\mathsf{S}T^{\prime}([e\cdot\iota]))_{1}=T^{\prime}(e)_{\iota}\) for all \(\iota<|e|\), thus \(T=T^{\prime}\).

It is therefore clear that if \(\mathsf{S}\) were full it would be an equivalence of categories, but this is not the case as we now illustrate on graphs.

**Example 6.5**.: _We consider the graph structure \(\Gamma_{\mathsf{g}}\). We have \(\Omega_{\texttt{nodes}}=\varnothing\) and \(\Omega_{\texttt{edges}}=\{\texttt{src},\texttt{tgt}\}\), hence \(\lambda_{\texttt{edges}}=2\). Let \(\nu_{\texttt{edges}}:2\to\Omega_{\texttt{edges}}\) be the bijection defined by \(\nu_{\texttt{edges}}:0\mapsto\texttt{src},1\mapsto\texttt{tgt}\); the corresponding monograph is \(\mathrm{T}_{\mathrm{g}}\stackrel{\text{\tiny def}}{=}\{(\texttt{nodes},\varepsilon),(\texttt{edges},\texttt{nodes}\,\texttt{nodes})\}\), and we easily check that \(\mathsf{ST}_{\mathrm{g}}\doteq\Gamma_{\mathsf{g}}\). However, the only automorphism of \(\mathrm{T}_{\mathrm{g}}\) is \(1_{\mathrm{T}_{\mathrm{g}}}\), while \(\Gamma_{\mathsf{g}}\) has a non-trivial automorphism \(m=((\texttt{src}\ \texttt{tgt}),\mathrm{Id}_{\{\texttt{nodes},\texttt{edges}\}})\) (in cycle notation), hence \(\mathsf{S}\) is not surjective on morphisms._

This automorphism reflects the fact that a graph structure does not define an order between its operator names. Directing edges as arrows from src to tgt or the other way round is a matter of convention that is reflected in the choice of \(\nu_{\texttt{edges}}\) in Example 6.5. This contrasts with monographs where edges are inherently directed by ordinals, and also with the structure of graphs where the source function comes first. In the translation from **Monogr** to **GrStruct** the direction of edges is necessarily lost, hence these categories are not equivalent.

**Example 6.6**.: _The signature \(\Gamma_{\mathsf{e}}\) of E-graphs from [2] has five sorts edges, nv-edges, ev-edges, nodes, values and six operator names src\({}_{\mathsf{e}}\), tgt\({}_{\mathsf{e}}\), src\({}_{\texttt{nv}}\), tgt\({}_{\texttt{nv}}\), src\({}_{\texttt{ev}}\), tgt\({}_{\texttt{ev}}\) whose domain and range sorts are defined as in Section 1. We have \(\Omega_{\texttt{nodes}}=\Omega_{\texttt{values}}=\varnothing\), \(\Omega_{\texttt{edges}}=\{\texttt{src}_{\mathsf{e}},\texttt{tgt}_{\mathsf{e}}\}\), \(\Omega_{\texttt{nv-edges}}=\{\texttt{src}_{\texttt{nv}},\texttt{tgt}_{\texttt{nv}}\}\) and \(\Omega_{\texttt{ev-edges}}=\{\texttt{src}_{\texttt{ev}},\texttt{tgt}_{\texttt{ev}}\}\).
There are four possible monographs \(T\) such that \(\mathsf{S}T\doteq\Gamma_{\mathsf{e}}\) given by_

\[T(\texttt{nodes})=T(\texttt{values})=\varepsilon,\qquad T(\texttt{nv-edges})=\texttt{nodes}\,\texttt{values}\text{ or }\texttt{values}\,\texttt{nodes},\]
\[T(\texttt{edges})=\texttt{nodes}\,\texttt{nodes},\qquad T(\texttt{ev-edges})=\texttt{edges}\,\texttt{values}\text{ or }\texttt{values}\,\texttt{edges}.\]

_These four monographs are depicted below._

_(Figure: the four type monographs \(T_{1}\), \(T_{2}\), \(T_{3}\) and \(T_{4}\).)_

_The type indicated by the syntax (and consistent with the drawings of E-graphs in [2]) is of course \(T_{1}\)._

The restrictions of \(\mathsf{S}\) to the categories of Definition 3.5 are isomorphism-dense embeddings into full subcategories of \(\mathbf{GrStruct}\) that are easy to define. The \(O\)-monographs correspond to graph structures \(\Gamma:\Omega\to S^{<\omega}\) such that \(|\Omega_{s}|\in O\) for all \(s\in S\), and the standard monographs to \(\Omega_{\mathrm{Rng}(o)}=\varnothing\) for all \(o\in\Omega\). The finite monographs correspond to finite \(S\), hence \(\mathbf{FMonogr}\) corresponds to finite signatures.

We can now describe precisely how a monograph \(A\) typed by \(T\) through \(a:A\to T\) can be read as an \(\mathsf{S}T\)-algebra \(\mathcal{A}\). As mentioned above, every edge \(x\) of \(A\) is typed by \(a(x)\in\mathsf{E}T\) and should therefore be interpreted as an element of \(\mathcal{A}_{a(x)}\), hence \(\mathcal{A}_{a(x)}\) is the set of all edges \(x\in\mathsf{E}A\) that are typed by \(a(x)\). Then, for every \(\iota<|x|=|a(x)|\), the \(\iota^{\mathrm{th}}\) adjacent edge \(x_{\iota}\) of \(x\) is the image of \(x\) by the \(\iota^{\mathrm{th}}\) operator name for this type of edge, that is \([\![a(x)\cdot\iota]\!]\). Note that the sort of this image is \(a(x_{\iota})=a(x)_{\iota}\) that is precisely the range sort of the operator name \([\![a(x)\cdot\iota]\!]\) in \(\mathsf{S}T\) (see Definition 6.2), so that \(\mathcal{A}\) is indeed an \(\mathsf{S}T\)-algebra. This leads to the following definition.

**Definition 6.7** (functor \(\mathsf{A}_{T}:\mathbf{Monogr}\backslash T\to\mathsf{S}T\)-\(\mathbf{Alg}\)).: _Given a monograph \(T\), we define the function \(\mathsf{A}_{T}\) that maps every object \(a:A\to T\) of \(\mathbf{Monogr}\backslash T\) to the \(\mathsf{S}T\)-algebra \(\mathcal{A}=\mathsf{A}_{T}a\) defined by_
* \(\mathcal{A}_{e}\stackrel{\text{\tiny def}}{=}a^{-1}[e]\) _for all_ \(e\in\mathsf{E}T\)_, and_
* \([\![e\cdot\iota]\!]^{\mathcal{A}}(x)\stackrel{\text{\tiny def}}{=}x_{\iota}\) _for all_ \([\![e\cdot\iota]\!]\in\Omega_{T}\) _and_ \(x\in\mathcal{A}_{e}\)_._

_Besides, \(\mathsf{A}_{T}\) also maps every morphism \(f:a\to b\), where \(b:B\to T\), to the \(\mathsf{S}T\)-homomorphism \(\mathsf{A}_{T}f\) from \(\mathcal{A}\) to \(\mathcal{B}=\mathsf{A}_{T}b\) defined by_

\[(\mathsf{A}_{T}f)_{e}\stackrel{\text{\tiny def}}{=}f|_{\mathcal{A}_{e}}^{\mathcal{B}_{e}}\text{ for all }e\in\mathsf{E}T.\]

The \(\mathsf{S}T\)-algebra \(\mathcal{A}\) can be pictured as in Figure 1. The carrier sets \(\mathcal{A}_{e}\) form a partition of \(\mathsf{E}A\). Since \(f:a\to b\) (not pictured) is a function \(f:\mathsf{E}A\to\mathsf{E}B\) such that \(b\circ f=a\), then \(b\circ f[\mathcal{A}_{e}]=a[a^{-1}[e]]\subseteq\{e\}\) hence \(f[\mathcal{A}_{e}]\subseteq b^{-1}[e]=\mathcal{B}_{e}\), so that \(f|_{\mathcal{A}_{e}}^{\mathcal{B}_{e}}\) is well-defined.
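The object part of Definition 6.7 is easy to make concrete. The following sketch (again a hypothetical Python encoding, not part of the formal development) builds the carriers and operator interpretations of \(\mathsf{A}_{T}a\) for the typed monograph of Example 5.1: the carrier at sort \(e\) is the fibre \(a^{-1}[e]\), and the operator \([e\cdot\iota]\) sends an edge of that fibre to its \(\iota^{\mathrm{th}}\) adjacency.

```python
# Hypothetical encoding (as in the earlier sketch): monographs as dicts,
# the typing morphism a : A -> T as a dict on edges.
T = {"s": (), "s'": (), "f": ("s'", "s'", "s"), "c": ("s'",)}
A = {"x": (), "x'": (), "e": ("x'", "x'", "x"), "e'": ("x'",)}
a = {"x": "s", "x'": "s'", "e": "f", "e'": "c"}

# Carriers: the fibre a^{-1}[e] for every edge (sort) e of T.
carriers = {e: {x for x in A if a[x] == e} for e in T}

# Operators: [e.i] sends an edge of sort e to its i-th adjacency.
ops = {(e, i): (lambda x, i=i: A[x][i]) for e in T for i in range(len(T[e]))}

assert carriers["f"] == {"e"} and carriers["s'"] == {"x'"}
assert ops[("f", 0)]("e") == "x'" and ops[("f", 2)]("e") == "x"
```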
We also see that \(h=\mathsf{A}_{T}f\) is an \(\mathsf{S}T\)-homomorphism from \(\mathcal{A}\) to \(\mathcal{B}\) since for every operator name \(\llbracket e\cdot\iota\rrbracket\in\Omega_{T}\) we have \(\operatorname{Dom}(\llbracket e\cdot\iota\rrbracket)=e\), \(\operatorname{Rng}(\llbracket e\cdot\iota\rrbracket)=e_{\iota}\) and \[\llbracket e\cdot\iota\rrbracket^{\mathcal{B}}\circ h_{e}(x)=\llbracket e\cdot \iota\rrbracket^{\mathcal{B}}(f(x))=f(x)_{\iota}=f(x_{\iota})=f(\llbracket e \cdot\iota\rrbracket^{\mathcal{A}}(x))=h_{e_{\iota}}\circ\llbracket e\cdot \iota\rrbracket^{\mathcal{A}}(x)\] for all \(x\in\mathcal{A}_{e}\). It is obvious from Definition 6.7 that \(\mathsf{A}_{T}\) preserves identities and composition of morphisms, hence that it is indeed a functor. **Theorem 6.8**.: _For every monograph \(T\), \(\mathsf{A}_{T}\) is an equivalence._ Proof.: Let \(a:A\to T\) and \(b:B\to T\) be objects of \(\mathbf{Monogr}\backslash T\) and \(\mathcal{A}\stackrel{{\text{\tiny def}}}{{=}}\mathsf{A}_{T}a\), \(\mathcal{B}\stackrel{{\text{\tiny def}}}{{=}}\mathsf{A}_{T}b\). It is trivial that \(\mathsf{A}_{T}\) is faithful. \(\mathsf{A}_{T}\) _is full._ For any \(\mathsf{S}T\)-homomorphism \(h:\mathcal{A}\to\mathcal{B}\), let \(f:\mathsf{E}A\to\mathsf{E}B\) be the function defined by \(f(x)\stackrel{{\text{\tiny def}}}{{=}}h_{a(x)}(x)\) for all \(x\in\mathsf{E}A\). Let \(e=a(x)\) so that \(x\in\mathcal{A}_{e}\), since \(h_{e}(x)\in\mathcal{B}_{e}=b^{-1}\llbracket e\rrbracket\) then \(b\circ f(x)=b(h_{e}(x))=e\), hence \(b\circ f=a\) and \(|f(x)|=|b(f(x))|=|a(x)|=|x|\). For all \(\iota<|x|\) we have \(a(x_{\iota})=a(x)_{\iota}=e_{\iota}\) and since \(h\) is an \(\mathsf{S}T\)-homomorphism then \[f(x_{\iota})=h_{e_{\iota}}(\llbracket e\cdot\iota\rrbracket^{\mathcal{A}}(x))= \llbracket e\cdot\iota\rrbracket^{\mathcal{B}}(h_{e}(x))=f(x)_{\iota}\] hence \(f:a\to b\) is a morphism. Since \((\mathsf{A}_{T}f)_{e}(x)=f|_{\mathcal{A}_{e}}^{\mathcal{B}_{e}}(x)=h_{e}(x)\) for all \(e\in\mathsf{E}T\) and all \(x\in\mathcal{A}_{e}\), then \(\mathsf{A}_{T}f=h\). \(\mathsf{A}_{T}\) _is isomorphism-dense._ For any \(\mathsf{S}T\)-algebra \(\mathcal{C}\), let \[\mathsf{E}C\stackrel{{\text{\tiny def}}}{{=}}\bigcup_{e\in \mathsf{E}T}\mathcal{C}_{e}\times\{e\}\ \text{ and }\ (C(x,e))_{\iota}\stackrel{{\text{\tiny def}}}{{=}}(\llbracket e \cdot\iota\rrbracket^{\mathcal{C}}(x),e_{\iota})\] for all \((x,e)\in\mathsf{E}C\) and \(\iota<|e|\). Since \(\operatorname{Rng}(\llbracket e\cdot\iota\rrbracket)=e_{\iota}\) then \(\llbracket e\cdot\iota\rrbracket^{\mathcal{C}}(x)\in\mathcal{C}_{e_{\iota}}\) hence \((C(x,e))_{\iota}\in\mathsf{E}C\), so that \(C\) is a monograph such that \(|(x,e)|=|e|\). Let \(c:\mathsf{E}C\to\mathsf{E}T\) be defined by \(c(x,e)\stackrel{{\text{\tiny def}}}{{=}}e\), we have \[c((x,e)_{\iota})=c(\llbracket e\cdot\iota\rrbracket^{\mathcal{C}}(x),e_{ \iota})=e_{\iota}=(c(x,e))_{\iota},\] hence \(c:C\to T\) is a morphism. For all \(e\in\mathsf{E}T\) we have \((\mathsf{A}_{T}c)_{e}=c^{-1}\llbracket e\rrbracket=\mathcal{C}_{e}\times\{e\}\), and we let \(h_{e}:\mathcal{C}_{e}\to(\mathsf{A}_{T}c)_{e}\) be defined by \(h_{e}(x)\stackrel{{\text{\tiny def}}}{{=}}(x,e)\) for all \(x\in\mathcal{C}_{e}\). 
The functions \(h_{e}\) are bijective and \(h\stackrel{\text{\tiny def}}{=}(h_{e})_{e\in\mathsf{E}T}\) is an \(\mathsf{S}T\)-homomorphism since \[\llbracket e\cdot\iota\rrbracket^{\mathsf{A}_{T}c}\circ h_{e}(x)=\llbracket e\cdot\iota\rrbracket^{\mathsf{A}_{T}c}(x,e)=(x,e)_{\iota}=(\llbracket e\cdot\iota\rrbracket^{\mathcal{C}}(x),e_{\iota})=h_{e_{\iota}}\circ\llbracket e\cdot\iota\rrbracket^{\mathcal{C}}(x),\] for all \(\llbracket e\cdot\iota\rrbracket\in\Omega_{T}\) and \(x\in\mathcal{C}_{e}\), hence \(\mathcal{C}\simeq\mathsf{A}_{T}c\).

It is easy to see that for any two signatures \(\Sigma\) and \(\Sigma^{\prime}\), if \(\Sigma\simeq\Sigma^{\prime}\) then \(\Sigma\)-\(\mathbf{Alg}\simeq\Sigma^{\prime}\)-\(\mathbf{Alg}\). We conclude that all graph structured algebras can be represented as typed monographs.

**Corollary 6.9**.: _For every graph structure \(\Gamma\) there exists a monograph \(T\) such that \(\Gamma\)-\(\mathbf{Alg}\approx\mathbf{Monogr}\backslash T\)._

Proof.: By Lemma 6.3 there exists \(T\) such that \(\Gamma\simeq\mathsf{S}T\), hence \(\mathbf{Monogr}\backslash T\approx\mathsf{S}T\)-\(\mathbf{Alg}\simeq\Gamma\)-\(\mathbf{Alg}\).

**Example 6.10**.: _Following [10], an \(\infty\)-graph \(\mathcal{G}\) is given by a diagram of sets_

_such that, for every \(n\in\omega\), the following equations hold:_

\[s_{n}\circ s_{n+1}=s_{n}\circ t_{n+1},\qquad t_{n}\circ s_{n+1}=t_{n}\circ t_{n+1}.\]

_This means that every element \(x\) of \(\mathcal{G}_{n+2}\) is an edge whose source \(x_{0}\) and target \(x_{1}\) are edges of \(\mathcal{G}_{n+1}\) that are parallel, i.e., that have same source \((x_{0})_{0}=(x_{1})_{0}\) and same target \((x_{0})_{1}=(x_{1})_{1}\). Graphically:_

_This is known as the globular condition. We consider the type monograph \(\operatorname{T}_{\infty}\) defined by \(\operatorname{\mathsf{ET}}_{\infty}=\omega\), \(\operatorname{T}_{\infty}(0)=\varepsilon\) and \(\operatorname{T}_{\infty}(n+1)=n\,n\) for all \(n\in\omega\). This is an infinite non-standard \(\{0,2\}\)-monograph that can be pictured as_

_We express the globular condition on typed monographs \(g:G\to\operatorname{T}_{\infty}\) as:_

_for all \(x\in\mathsf{E}G\), if \(g(x)\geq 2\) then \(G(x_{0})=G(x_{1})\)._

_We quickly check that this is equivalent to the globular condition on the \(\operatorname{\mathsf{ST}}_{\infty}\)-algebra \(\mathcal{G}=\operatorname{\mathsf{A}}_{\operatorname{T}_{\infty}}g\). The set of sorts of \(\operatorname{\mathsf{ST}}_{\infty}\) is \(\omega\) and its operator names are \(\llbracket n+1\cdot 0\rrbracket\) and \(\llbracket n+1\cdot 1\rrbracket\) with domain sort \(n+1\) and range sort \(n\), for all \(n\in\omega\). We let \(s_{n}\stackrel{\text{\tiny def}}{=}\llbracket n+1\cdot 0\rrbracket^{\mathcal{G}}\) and \(t_{n}\stackrel{\text{\tiny def}}{=}\llbracket n+1\cdot 1\rrbracket^{\mathcal{G}}\), that are functions from \(\mathcal{G}_{n+1}\) to \(\mathcal{G}_{n}\) as in the diagram of \(\infty\)-graphs._

_By Definition 6.7 we have for all \(x\in\mathcal{G}_{n+2}=g^{-1}[n+2]\) and all \(i,j\in 2\) that_

\[\llbracket n+1\cdot j\rrbracket^{\mathcal{G}}\circ\llbracket n+2\cdot i\rrbracket^{\mathcal{G}}(x)=\llbracket n+1\cdot j\rrbracket^{\mathcal{G}}(x_{i})=(x_{i})_{j}\]
_hence_

\[G(x_{0})=G(x_{1})\ \text{ iff }\ (x_{0})_{0}=(x_{1})_{0}\text{ and }(x_{0})_{1}=(x_{1})_{1}\]
\[\text{ iff }\ \llbracket n+1\cdot 0\rrbracket^{\mathcal{G}}\circ\llbracket n+2\cdot 0\rrbracket^{\mathcal{G}}(x)=\llbracket n+1\cdot 0\rrbracket^{\mathcal{G}}\circ\llbracket n+2\cdot 1\rrbracket^{\mathcal{G}}(x)\]
\[\text{ and }\ \llbracket n+1\cdot 1\rrbracket^{\mathcal{G}}\circ\llbracket n+2\cdot 0\rrbracket^{\mathcal{G}}(x)=\llbracket n+1\cdot 1\rrbracket^{\mathcal{G}}\circ\llbracket n+2\cdot 1\rrbracket^{\mathcal{G}}(x)\]
\[\text{ iff }\ s_{n}\circ s_{n+1}(x)=s_{n}\circ t_{n+1}(x)\text{ and }t_{n}\circ s_{n+1}(x)=t_{n}\circ t_{n+1}(x).\]

**Example 6.11**.: _The signature \(\Gamma_{\mathrm{h}}\) of hypergraphs (see [3, Example 3.4]) is defined by the set of sorts \(\operatorname{S}_{\mathrm{h}}\stackrel{\text{\tiny def}}{=}\{\mathsf{V}\}\cup\{\mathsf{H}_{n,m}\mid n,m\in\omega\}\) and for all \(n,m\in\omega\) by \(n\) operator names \(\mathsf{src}^{n,m}_{i}\) and \(m\) operator names \(\mathsf{tgt}^{n,m}_{j}\) with domain sort \(\mathsf{H}_{n,m}\) and range sort \(\mathsf{V}\) for all \(1\leq i\leq n\) and \(1\leq j\leq m\). Hence there are \(n+m\) operator names of domain \(\mathsf{H}_{n,m}\), and \((n+m)!\) bijections from the ordinal \(n+m\) to this set of operator names. But since they all have the same range sort \(\mathsf{V}\), the type monograph \(\mathrm{T}_{\mathrm{h}}\) does not depend on these bijections (one for every pair \((n,m)\)). It is defined by \(\mathsf{ET}_{\mathrm{h}}\stackrel{\text{\tiny def}}{=}\mathrm{S}_{\mathrm{h}}\) and_

\[\mathrm{T}_{\mathrm{h}}(\mathsf{V})=\varepsilon,\qquad\mathrm{T}_{\mathrm{h}}(\mathsf{H}_{n,m})=\underbrace{\mathsf{V}\cdots\mathsf{V}}_{n+m}\text{ for all }n,m\in\omega.\]

_This is a standard \(\omega\)-monograph. It is easy to see that any standard \(\omega\)-monograph can be typed by \(\mathrm{T}_{\mathrm{h}}\), though not in a unique way. Every edge of length \(l>0\) can be typed by any sort \(\mathsf{H}_{n,m}\) such that \(n+m=l\), and every node can be typed by \(\mathsf{V}\) (or by \(\mathsf{H}_{0,0}\) if it is not adjacent to any edge). To any such typing corresponds an \(\mathsf{ST}_{\mathrm{h}}\)-algebra by the equivalence \(\mathsf{A}_{\mathrm{T}_{\mathrm{h}}}\), and hence a hypergraph (a \(\Gamma_{\mathrm{h}}\)-algebra), since \(\Gamma_{\mathrm{h}}\simeq\mathsf{ST}_{\mathrm{h}}\)._

_But to know which hypergraph \(\mathcal{H}\) corresponds exactly to a typed monograph we need to be more specific, since there are infinitely many isomorphisms between \(\Gamma_{\mathrm{h}}\) and \(\mathsf{ST}_{\mathrm{h}}\). The natural isomorphism stems from the obvious orderings \(\mathsf{src}^{n,m}_{1}<\cdots<\mathsf{src}^{n,m}_{n}<\mathsf{tgt}^{n,m}_{1}<\cdots<\mathsf{tgt}^{n,m}_{m}\) for all \(n,m\in\omega\). In this isomorphism the canonical operator name \(\llbracket\mathsf{H}_{n,m}\cdot i\rrbracket\) for all \(i<n+m\) corresponds to \(\mathsf{src}^{n,m}_{i+1}\) if \(i<n\), and to \(\mathsf{tgt}^{n,m}_{i+1-n}\) if \(i\geq n\). Thus an edge \(x\), say of length 3 typed by \(\mathsf{H}_{2,1}\), must be interpreted as a hyperedge \(x\in\mathcal{H}_{\mathsf{H}_{2,1}}\) with \((\mathsf{src}^{2,1}_{1})^{\mathcal{H}}(x)=x_{0}\), \((\mathsf{src}^{2,1}_{2})^{\mathcal{H}}(x)=x_{1}\), \((\mathsf{tgt}^{2,1}_{1})^{\mathcal{H}}(x)=x_{2}\) and \(x_{0},x_{1},x_{2}\in\mathcal{H}_{\mathsf{V}}\)._

The results of this section apply in particular to typed graphs.
It is easy to see that \(\mathsf{S}\circ\textsf{M}\) is an isomorphism-dense embedding of **Graphs** into the full subcategory of graph structures \(\Gamma:\Omega\to S^{<\omega}\) such that for every operator name \(o\in\Omega\) we have \(|\Omega_{\mathrm{Dom}(o)}|=2\) and \(\Omega_{\mathrm{Rng}(o)}=\varnothing\). Hence for every such \(\Gamma\) there exists a graph \(G\) such that \(\textbf{Graphs}\backslash G\approx\textbf{Monogr}\backslash\textsf{M}G\approx \Gamma\text{-}\textbf{Alg}\). The type graph \(G\) is determined only up to the orientation of its edges. ## 7 Submonographs and Partial Morphisms Graph structures have been characterized in [3] as the signatures that allow the transformation of the corresponding algebras by the single pushout method. This method is based on the construction of pushouts in categories of partial homomorphisms, defined as standard homomorphisms from subalgebras of their domain algebra, just as partial functions are standard functions from subsets of their domain (in the categorical theoretic sense of the word _domain_). The results of Section 6 suggest that a similar approach can be followed with monographs. We first need a notion of submonograph, their (inverse) image by morphisms and restrictions of morphisms to submonographs. **Definition 7.1** (submonographs and their images, restricted morphisms).: _A monograph \(A\) is a submonograph of a monograph \(M\) if \(A\subseteq M\). For any monograph \(N\) and morphism \(f:M\to N\), let \(f(A)\stackrel{{\mbox{\tiny def}}}{{=}}\{(f(x),N\circ f(x))\mid x\in \mathsf{E}A\}\). For any submonograph \(C\subseteq N\), let \(f^{-1}(C)\stackrel{{\mbox{\tiny def}}}{{=}}\{(x,M(x))\mid x\in f^{ -1}[\mathsf{E}C]\}\). If \(f(A)\subseteq C\), let \(f|_{A}^{C}:A\to C\) be the morphism whose underlying function is \(f|_{\mathsf{E}A}^{EC}\)._ In the sequel we will use the following obvious facts without explicit reference. \(f(A)\) and \(f^{-1}(C)\) are submonographs of \(N\) and \(M\) respectively. If \(A\) and \(B\) are submonographs of \(M\) then so are \(A\cup B\) and \(A\cap B\). We have \(f(A\cup B)=f(A)\cup f(B)\) thus \(A\subseteq B\) entails \(f(A)\subseteq f(B)\). If \(C\) and \(D\) are submonographs of \(N\) we have similarly \(f^{-1}(C\cup D)=f^{-1}(C)\cup f^{-1}(D)\) and \(C\subseteq D\) entails \(f^{-1}(C)\subseteq f^{-1}(D)\). We also have \(A\subseteq f^{-1}(f(A))\) and \(f(f^{-1}(C))=C\cap f(M)\). For any \(g:N\to P\) and submonograph \(E\) of \(P\), \((g\circ f)^{-1}(E)=f^{-1}(g^{-1}(E))\). If \((A+B,\mu_{1},\mu_{2})\) is the coproduct of \((A,B)\) and \(C\) is a submonograph of \(A+B\) then \(C=\mu_{1}^{-1}(C)+\mu_{2}^{-1}(C)\). We may now define the notion of partial morphisms of monographs, with a special notation in order to distinguish them from standard morphisms, and their composition. **Definition 7.2** (categories of partial morphisms of monographs).: _A partial morphism\(\lceil f\rceil:A\to B\) is a morphism \(f:A^{\prime}\to B\) where \(A^{\prime}\) is a submonograph of \(A\). \(f\) is called the underlying morphism of \(\lceil f\rceil\). If the domain of \(f\) is not otherwise specified, we write \(\lceil f\rceil:A\hookhook A^{\prime}\to B\). If the domain \(A^{\prime}\) of \(f\) is specified but not the domain of \(\lceil f\rceil\) then they are assumed to be identical, i.e., \(\lceil f\rceil:A^{\prime}\hookhook A^{\prime}\to B\). 
For any \(\lceil g\rceil:B\hook B^{\prime}\to C\) we define the composition of partial morphisms as_ \[\lceil g\rceil\circ\lceil f\rceil\ \stackrel{{\mbox{\tiny def}}}{{=}}\ \left\lceil g\circ f \right|_{f^{-1}(B^{\prime})}^{B^{\prime}}\Big{\rceil}:A\hook f^{-1}(B^{ \prime})\to C.\] _Let \(\mathbf{Monogr^{P}}\) be the category of monographs and partial morphisms. Let \(\mathbf{SMonogr^{P}}\) be its full subcategory of standard monographs. For any set \(O\) of ordinals, let \(O\)-\(\mathbf{Monogr^{P}}\) (resp. \(O\)-\(\mathbf{SMonogr^{P}}\)) be its full subcategory of \(O\)-monographs (resp. standard \(O\)-monographs). Let \(\mathbf{FMonogr^{P}}\) be its full subcategory of finite \(\omega\)-monographs._ Note that \((f^{-1}(B^{\prime}),\,f|_{f^{-1}(B^{\prime})}^{B^{\prime}}:f^{-1}(B^{\prime}) \to B^{\prime},\,j^{\prime}:f^{-1}(B^{\prime})\hookrightarrow A^{\prime})\) is a pullback of \((j:B^{\prime}\hookrightarrow B,\,f:A^{\prime}\to B,\,B)\) and is therefore an inverse image (i.e., a pullback along a monomorphism, see [7]), and it is therefore easy to see that composition of partial morphisms is associative, see [11]. (Note however that \(\mathbf{Monogr^{P}}\) is not a category of partial maps in the sense of [11], since partial maps are defined modulo isomorphic variations of \(A^{\prime}\).) We now see how these inverse images allow to formulate a sufficient condition ensuring that restrictions of coequalizers are again coequalizers. **Lemma 7.3** (coequalizer restriction).: _Let \(A^{\prime}\) and \(B^{\prime}\) be submonographs of \(A\) and \(B\) respectively and \(f,g:A\to B\) be parallel morphisms such that_ \[f^{-1}(B^{\prime})=A^{\prime}=g^{-1}(B^{\prime}),\] _if \((Q,c)\) is a coequalizer of \((f,g)\) then \((Q^{\prime},c^{\prime})\) is a coequalizer of \((f|_{A^{\prime}}^{B^{\prime}},g|_{A^{\prime}}^{B^{\prime}})\), where \(Q^{\prime}=c(B^{\prime})\), \(c^{\prime}=c|_{B^{\prime}}^{Q^{\prime}}\) and \(c^{-1}(Q^{\prime})=B^{\prime}\)._ Proof.: We assume w.l.o.g. that \((Q,c)\) is the coequalizer of \((f,g)\) constructed in Lemma 4.2 with \(\sim\) being the equivalence relation generated by \(R=\{(f(x),g(x))\mid x\in\mathsf{E}A\}\), and we let \((Q^{\prime},c^{\prime})\) be the coequalizer of \((f|_{A^{\prime}}^{B^{\prime}},g|_{A^{\prime}}^{B^{\prime}})\) constructed similarly with the equivalence relation \(\approx\) generated by \(R^{\prime}=\{(f|_{A^{\prime}}^{B^{\prime}}(x),g|_{A^{\prime}}^{B^{\prime}}(x ))\mid x\in\mathsf{E}A^{\prime}\}\). By the properties of \(f\) and \(g\) we have that \[f(x)\in\mathsf{E}B^{\prime}\text{ iff }x\in f^{-1}[\mathsf{E}B^{\prime}]\text{ iff }x\in\mathsf{E}A^{\prime}\text{ iff }x\in g^{-1}[\mathsf{E}B^{\prime}]\text{ iff }g(x)\in\mathsf{E}B^{\prime}\] for all \(x\in\mathsf{E}A\), hence for all \(y,y^{\prime}\in\mathsf{E}B\) we have that \(y\)\(R^{\prime}\)\(y^{\prime}\) iff \(y\)\(R\)\(y^{\prime}\) and at least one of \(y,y^{\prime}\) is in \(\mathsf{E}B^{\prime}\). By an easy induction we see that \(y\approx y^{\prime}\) iff \(y\sim y^{\prime}\) and \(y^{\prime}\in\mathsf{E}B^{\prime}\), hence the \(\approx\)-classes are the \(\sim\)-classes of the elements of \(\mathsf{E}B^{\prime}\), i.e., \(\mathsf{E}Q^{\prime}=c[\mathsf{E}B^{\prime}]\). It follows trivially that \(Q^{\prime}=c(B^{\prime})\), \(c^{\prime}=c|_{B^{\prime}}^{Q^{\prime}}\) and \(c^{-1}(Q^{\prime})=B^{\prime}\). It is then easy to obtain a similar result on pushouts. 
**Lemma 7.4** (pushout restriction).: _Let \(A^{\prime}\), \(B^{\prime}\), \(C^{\prime}\) be submonographs of \(A\), \(B\), \(C\) respectively and \(f:A\to B\), \(g:A\to C\) be morphisms such that_ \[f^{-1}(B^{\prime})=A^{\prime}=g^{-1}(C^{\prime}),\] _if \((h,k,Q)\) is a pushout of \((A,f,g)\), let \(Q^{\prime}=h(B^{\prime})\cup k(C^{\prime})\), \(h^{-1}(Q^{\prime})=B^{\prime}\) and \(k^{-1}(Q^{\prime})=C^{\prime}\), then \((h|_{B^{\prime}}^{Q^{\prime}},k|_{C^{\prime}}^{Q^{\prime}},Q^{\prime})\) is a pushout of \((A^{\prime},f|_{A^{\prime}}^{B^{\prime}},g|_{A^{\prime}}^{C^{\prime}})\). Proof.: We assume w.l.o.g. that \((h,k,Q)\) is obtained by the canonical construction of pushouts, i.e., that \(h=c\circ\mu_{1}\) and \(k=c\circ\mu_{2}\) where \((Q,c)\) is a coequalizer of \((\mu_{1}\circ f,\mu_{2}\circ g)\) and \((B+C,\mu_{1},\mu_{2})\) is the coproduct of \((B,C)\). Let \((B^{\prime}+C^{\prime},\mu_{1}^{\prime},\mu_{2}^{\prime})\) be the coproduct of \((B^{\prime},C^{\prime})\), then obviously \(B^{\prime}+C^{\prime}\subseteq B+C\), \(\mu_{1}^{\prime}=\mu_{1}|_{B^{\prime}}^{B^{\prime}+C^{\prime}}\) and \(\mu_{2}^{\prime}=\mu_{2}|_{C^{\prime}}^{B^{\prime}+C^{\prime}}\). Since \[(\mu_{1}\circ f)^{-1}(B^{\prime}+C^{\prime})=f^{-1}(B^{\prime})=A^{\prime}=g^ {-1}(C^{\prime})=(\mu_{2}\circ g)^{-1}(B^{\prime}+C^{\prime})\] then by Lemma 7.3\((Q^{\prime},c^{\prime})\) is a coequalizer of \[((\mu_{1}\circ f)|_{A^{\prime}}^{B^{\prime}+C^{\prime}},(\mu_{2}\circ g)|_{A^{ \prime}}^{B^{\prime}+C^{\prime}})=(\mu_{1}^{\prime}\circ f|_{A^{\prime}}^{B^{ \prime}},\mu_{2}^{\prime}\circ g|_{A^{\prime}}^{C^{\prime}})\] where \(Q^{\prime}=c(B^{\prime}+C^{\prime})\), \(c^{\prime}=c|_{B^{\prime}+C^{\prime}}^{Q^{\prime}}\) and \(c^{-1}(Q^{\prime})=B^{\prime}+C^{\prime}\). We thus have \(h^{-1}(Q^{\prime})=(c\circ\mu_{1})^{-1}(Q^{\prime})=\mu_{1}^{-1}(B^{\prime}+C ^{\prime})=B^{\prime}\) and similarly \(k^{-1}(Q^{\prime})=C^{\prime}\). We also have \(h|_{B^{\prime}}^{Q^{\prime}}=(c\circ\mu_{1})|_{B^{\prime}}^{Q^{\prime}}=c^{ \prime}\circ\mu_{1}^{\prime}\) and \(k|_{C^{\prime}}^{Q^{\prime}}=(c\circ\mu_{2})|_{C^{\prime}}^{Q^{\prime}}=c^{ \prime}\circ\mu_{2}^{\prime}\), hence \((h|_{B^{\prime}}^{Q^{\prime}},k|_{C^{\prime}}^{Q^{\prime}},Q^{\prime})\) is the canonical pushout of \((A^{\prime},f|_{A^{\prime}}^{B^{\prime}},g|_{A^{\prime}}^{B^{\prime}})\), and therefore \(Q^{\prime}=h|_{B^{\prime}}^{Q^{\prime}}(B^{\prime})\cup k|_{C^{\prime}}^{Q^{ \prime}}(C^{\prime})=h(B^{\prime})\cup k(C^{\prime})\). We can now show that categories of partial morphisms of monographs have pushouts. The following construction is inspired by [3, Construction 2.6, Theorem 2.7] though the proof uses pushout restriction. **Theorem 7.5**.: _The categories of Definition 7.2 have pushouts._ Proof.: Let \([f]:A\hook A_{1}\to B\) and \([g]:A\hook A_{2}\to C\). The set of submonographs \(J\subseteq A_{1}\cap A_{2}\) such that \(f^{-1}(f(J))=J\) and \(g^{-1}(g(J))=J\) contains \(\varnothing\) and is closed under union, hence has a greatest element denoted \(I\). There is also a greatest submonograph \(X\subseteq B\) such that \(f^{-1}(X)\subseteq I\), that must therefore be greater than \(f(I)\), i.e., we have \(f(I)\subseteq X\) hence \(f^{-1}(f(I))\subseteq f^{-1}(X)\) and this yields \(f^{-1}(X)=I\). Similarly, there is a greatest submonograph \(Y\subseteq C\) such that \(g^{-1}(Y)\subseteq I\), so that \(g(I)\subseteq Y\) and \(g^{-1}(Y)=I\). 
Let \(f^{\prime}=f|_{I}^{X}\), \(g^{\prime}=g|_{I}^{Y}\) and \((h,k,Q)\) be a pushout of \((I,f^{\prime},g^{\prime})\) in \(\mathbf{Monogr}\), we claim that \(([h]\,,[k]\,,Q)\) is a pushout of \((A,[f]\,,[g])\) in \(\mathbf{Monogr}^{\mathbf{P}}\), where obviously \([h]:B\hook X\to Q\) and \([k]:C\hook Y\to Q\). We first see that \[[h]\circ[f]=\left[h\circ f|_{f^{-1}(X)}^{X}\right]=\left[h\circ f^{\prime} \right]=\left[k\circ g^{\prime}\right]=\left[k\circ g|_{g^{-1}(Y)}^{Y}\right]= \left[k]\circ[g]\,.\] We now consider any pair of partial morphisms \([v]:B\hook B^{\prime}\to U\) and \([w]:C\hook C^{\prime}\to U\) such that \([v]\circ[f]=[w]\circ[g]\), hence \(v\circ f|_{J}^{B^{\prime}}=w\circ g|_{J}^{C^{\prime}}\) where \(J\stackrel{{\text{\tiny def}}}{{=}}f^{-1}(B^{\prime})=g^{-1}(C^{ \prime})\). Since \(f(J)=f(f^{-1}(B^{\prime}))\subseteq B^{\prime}\) then \(J\subseteq f^{-1}(f(J))\subseteq f^{-1}(B^{\prime})=J\), hence \(f^{-1}(f(J))=J\) and similarly \(g^{-1}(g(J))=J\), so that \(J\subseteq I\). This can be written \(f^{-1}(B^{\prime})\subseteq I\) and thus entails \(B^{\prime}\subseteq X\) and similarly \(C^{\prime}\subseteq Y\), hence \(f^{\prime-1}(B^{\prime})=J=g^{\prime-1}(C^{\prime})\). We can therefore apply Lemma 7.4 and get that \((h|_{B^{\prime}}^{Q^{\prime}},k|_{C^{\prime}}^{Q^{\prime}},Q^{\prime})\) is a pushout of \((J,f^{\prime}|_{J}^{B^{\prime}},g^{\prime}|_{J}^{C^{\prime}})\) where \(Q^{\prime}=h(B^{\prime})\cup k(C^{\prime})\), \(h^{-1}(Q^{\prime})=B^{\prime}\) and \(k^{-1}(Q^{\prime})=C^{\prime}\). Since \(v\circ f^{\prime}|_{J}^{B^{\prime}}=v\circ f|_{J}^{B^{\prime}}=w\circ g|_{J}^{C^ {\prime}}=w\circ g^{\prime}|_{J}^{C^{\prime}}\) there exists a unique \(u:Q^{\prime}\to U\) such that \(u\circ h|_{B^{\prime}}^{Q^{\prime}}=v\) and \(w=u\circ k|_{C^{\prime}}^{Q^{\prime}}\). We thus have a partial morphism \(\lceil u\rceil:Q\hookleftarrow Q^{\prime}\to U\) such that \[\lceil u\rceil\circ\lceil h\rceil=\left\lceil u\circ h|_{h^{-1}(Q^{\prime})}^{Q ^{\prime}}\right\rceil=\left\lceil u\circ h|_{B^{\prime}}^{Q^{\prime}}\right\rceil =\lceil v\rceil\] and similarly \(\lceil u\rceil\circ\lceil k\rceil=\lceil w\rceil\). Suppose there is a \(\lceil u^{\prime}\rceil:Q\hookleftarrow D\to U\) such that \(\lceil u^{\prime}\rceil\circ\lceil h\rceil=\lceil v\rceil\) and \(\lceil u^{\prime}\rceil\circ\lceil k\rceil=\lceil w\rceil\), then \(u^{\prime}\circ h|_{h^{-1}(D)}^{D}=v\) hence \(h^{-1}(D)=B^{\prime}\) and similarly \(k^{-1}(D)=C^{\prime}\). Since \(D\subseteq Q=h(X)\cup k(Y)\) then \[D=(D\cap h(X))\cup(D\cap k(Y))=h(h^{-1}(D))\cup k(k^{-1}(D))=h(B^{\prime})\cup k (C^{\prime})=Q^{\prime}\] and we get \(\lceil u^{\prime}\rceil=\lceil u\rceil\) by the unicity of \(u\). If \(B\) and \(C\) are finite (resp. standard, resp. \(O\)-monographs) then so are \(X\) and \(Y\), hence so is \(Q\) by Theorem 4.4. One important feature of this construction is illustrated below. **Example 7.6**.: _Suppose there are edges \(x\) of \(A_{1}\cap A_{2}\) and \(y\in\mathsf{E}A_{2}\backslash\mathsf{E}A_{1}\) such that \(g(x)=g(y)\). If \(x\) is an edge of \(I=g^{-1}(g(I))\) then so is \(y\), which is impossible since \(I\subseteq A_{1}\cap A_{2}\). Hence \(x\) is not an edge of \(I=f^{-1}(X)\) and therefore \(f(x)\notin\mathsf{E}X\). Since \(y\) is not an edge of \(I=g^{-1}(Y)\) then similarly \(g(x)=g(y)\notin\mathsf{E}Y\). 
This means that even though \(x\) has images by both \(f\) and \(g\), none of these has an image (by \(h\) or \(k\)) in \(Q\), i.e., they are "deleted" from the pushout._ The result of the present section can be replicated by replacing every monograph, say \(A\), by a typed monograph with a fixed type \(T\), say \(a:A\to T\). But then expressions like \(A\subseteq B\) are replaced by \(a\subseteq b\), which ought to be interpreted as \(A\subseteq B\)_and_\(a=b|_{A}\), so that \(\mathsf{A}_{T}a\) is then a subalgebra of \(\mathsf{A}_{T}b\). In this way the results of [3] on categories of partial homomorphisms could be deduced from Corollary 6.9. They cannot be obtained directly from Theorem 7.5. ## 8 Algebraic Transformations of Monographs Rule-based transformations of graphs are conceived as substitutions of subgraphs (image of a left hand side of a rule) by subgraphs (image of its right hand side). Substitutions are themselves designed as an operation of deletion (of nodes or edges) followed by an operation of addition. This last operation is conveniently represented as a pushout, especially when edges are added between existing nodes (otherwise a coproduct would be sufficient). The operation of deletion is however more difficult to represent in category theory, since there is no categorical notion of a complement. This is a central and active issue in the field of Algebraic Graph Transformation, and many definitions have been proposed, see [12, 13, 14, 15]. The most common and natural one, known as the double pushout method [16, 17, 18], assumes the operation of deletion as the inverse of the operation of addition. More precisely, in the following pushout diagram we understand \(M\) as the result of adding edges to \(D\) as specified by \(l\) and \(k\). Images of edges of \(K\) are present in both \(D\) and \(L\), and therefore also in \(M\), without duplications (since \(f\circ k=m\circ l\)). The edges that are added to \(D\) are therefore the images by \(m\) of the edges of \(L\) that do not occur in \(l(K)\). We may then inverse this operation and understand \(D\) as the result of removing these edges from \(M\). The monograph \(M\) and the morphisms \(m\), \(l\) then appear as the input of the operation, and the monograph \(D\) and morphisms \(k\), \(f\) as its output. The problem of course is that the pushout operation is not generally bijective, hence it cannot always be inverted. We first analyze the conditions of existence of \(D\). **Definition 8.1** (pushout complement, gluing condition).: _A pushout complement of morphisms \(l:K\to L\) and \(m:L\to M\) is a monograph \(D\) and a pair of morphisms \(k:K\to D\) and \(f:D\to M\) such that \((m,f,M)\) is a pushout of \((K,l,k)\)._ _The morphisms \(l:K\to L\) and \(m:L\to M\) satisfy the gluing condition (\(\operatorname{GC}(l,m)\) for short) if, for \(L^{\prime}=\mathsf{E}L\backslash l[\mathsf{E}K]\),_ 1. _for all_ \(x,x^{\prime}\in\mathsf{E}L\)_,_ \(m(x)=m(x^{\prime})\) _and_ \(x\in L^{\prime}\) _entail_ \(x=x^{\prime}\)_, and_ 2. _for all_ \(e,e^{\prime}\in\mathsf{E}M\)_,_ \(e\mid M(e^{\prime})\) _and_ \(e\in m[L^{\prime}]\) _entail_ \(e^{\prime}\in m[L^{\prime}]\)_._ The edges of \(M\) that should be removed from \(M\) to obtain \(D\) are the elements of \(m[L^{\prime}]\). We may say that an edge \(m(x)\) of \(M\) is _marked for removal_ if \(x\in L^{\prime}\) and _marked for preservation_ if \(x\in l[\mathsf{E}K]\). 
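With this terminology, the gluing condition is straightforward to test on finite monographs. The following sketch is only an illustration (a hypothetical Python encoding, as in the earlier sketches; the toy rule and all names are not taken from the paper):

```python
# Hypothetical encoding: monographs as dicts mapping each edge to its
# adjacency tuple, morphisms l : K -> L and m : L -> M as dicts on edges.

def gluing_condition(l, m, K, L, M):
    """Check GC(l, m) of Definition 8.1."""
    L_prime = set(L) - {l[z] for z in K}        # edges of L outside l[EK]
    removed = {m[x] for x in L_prime}           # edges of M marked for removal
    # (1) an edge marked for removal has a unique preimage under m.
    cond1 = all(x == y or m[x] != m[y] for x in L_prime for y in L)
    # (2) if a removed edge occurs in the adjacencies of e2, then e2 is removed.
    cond2 = all(e2 in removed for e2 in M for e in M[e2] if e in removed)
    return cond1 and cond2

# Toy instance: the left-hand side L adds a loop on the node z of K, and m
# matches it in a monograph M consisting of one node carrying one loop.
K = {"z": ()}
L = {"z": (), "loop": ("z", "z")}
M = {"n": (), "lp": ("n", "n")}
l = {"z": "z"}
m = {"z": "n", "loop": "lp"}
print(gluing_condition(l, m, K, L, M))          # True: the loop may be deleted
```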
Condition (1) of the gluing condition states that the restriction of \(m\) to \(m^{-1}[m[L^{\prime}]]\) should be injective, or in other words that an edge may be deleted only if it is marked for removal exactly once and is not also marked for preservation. Condition (2) states that an edge can be deleted only if all the edges that are adjacent to it are also deleted (otherwise these edges would be adjacent to a non-existent edge). It is obvious that this gluing condition reduces to the standard one known on graphs, when applied to standard \(\{0,2\}\)-monographs. We now prove that it characterizes the existence of pushout complements (note that \(l\) is not assumed to be injective).

**Lemma 8.2**.: _The morphisms \(l:K\to L\) and \(m:L\to M\) have a pushout complement iff they satisfy the gluing condition._

Proof.: _Necessary condition._ We assume w.l.o.g. that the pushout \((m,f,M)\) of \((K,l,k)\) is obtained by canonical construction, i.e., let \((L+D,\mu_{1},\mu_{2})\) be the coproduct of \((L,D)\), \((M,c)\) be the coequalizer of \((\mu_{1}\circ l,\mu_{2}\circ k)\), \(m=c\circ\mu_{1}\) and \(f=c\circ\mu_{2}\). Thus \(\mathsf{E}M\) is the quotient of \(\mathsf{E}L+\mathsf{E}D\) by the equivalence relation \(\sim\) generated by \(R=\{(\mu_{1}\circ l(z),\mu_{2}\circ k(z))\mid z\in\mathsf{E}K\}\). Let \(L^{\prime}=\mathsf{E}L\backslash l[\mathsf{E}K]\), we first prove (1) and then (2).

For all \(x,x^{\prime}\in\mathsf{E}L\), if \(x\in L^{\prime}\) then \(x\notin l[\mathsf{E}K]\), hence \(\mu_{1}(x)\) is not related by \(R\) to any element and is therefore alone in its \(\sim\)-class. Hence\({}^{2}\) if \(m(x)=m(x^{\prime})\) then \(\mu_{1}(x)\sim\mu_{1}(x^{\prime})\) and therefore \(x=x^{\prime}\).

Footnote 2: Another consequence is that \(\mu_{1}(x)\) is not related by \(\sim\) to any element of \(\mu_{2}[\mathsf{E}D]\), hence that \(m(x)\notin f[\mathsf{E}D]\).

For all \(e,e^{\prime}\in\mathsf{E}M\) such that \(e\mid M(e^{\prime})\) and \(e\in m[L^{\prime}]\), let \(x\in L^{\prime}\) such that \(e=m(x)\). Suppose that \(e^{\prime}=f(y^{\prime})\) for some \(y^{\prime}\in\mathsf{E}D\), then \(M(e^{\prime})=f^{<\alpha}\circ D(y^{\prime})\) hence there is a \(y\mid D(y^{\prime})\) such that \(e=f(y)\), hence \(m(x)\in f[\mathsf{E}D]\) which is impossible by note 2. Since \(M=f(D)\cup m(L)\) there must be a \(x^{\prime}\in\mathsf{E}L\) such that \(e^{\prime}=m(x^{\prime})\). Suppose now that \(x^{\prime}=l(z)\) for some \(z\in\mathsf{E}K\), then \(e^{\prime}=m(l(z))=f(k(z))\in f[\mathsf{E}D]\), and we have seen this is impossible. Hence \(x^{\prime}\notin l[\mathsf{E}K]\) and therefore \(e^{\prime}\in m[L^{\prime}]\).

_Sufficient condition._ We assume (1) and (2), let \(\alpha\) be an ordinal for \(M\), \(\mathsf{E}D\stackrel{\text{\tiny def}}{=}\mathsf{E}M\backslash m[L^{\prime}]\) and \(D(e)\stackrel{\text{\tiny def}}{=}M(e)\) for all \(e\in\mathsf{E}D\); by (2) this is an \(\mathsf{E}D\)-sequence, hence \(D\) is a submonograph of \(M\) and the canonical injection \(f:D\hookrightarrow M\) is a morphism. By (1) we have \(m[L^{\prime}]\cap m\circ l[\mathsf{E}K]=\varnothing\), hence \(m\circ l[\mathsf{E}K]\subseteq\mathsf{E}D\) and we let \(k\stackrel{\text{\tiny def}}{=}(m\circ l)|_{K}^{D}\) so that \(f\circ k=m\circ l\). We have \[k^{<\alpha}\circ K=m^{<\alpha}\circ l^{<\alpha}\circ K=m^{<\alpha}\circ L\circ l=M\circ m\circ l=D\circ k\] hence \(k:K\to D\) is a morphism.
To prove that \((m,f,M)\) is a pushout of \((K,l,k)\), let \(m^{\prime}:L\to M^{\prime}\) and \(f^{\prime}:D\to M^{\prime}\) be morphisms such that \(m^{\prime}\circ l=f^{\prime}\circ k\). Since \(\mathsf{E}M=\mathsf{E}D\uplus m[L^{\prime}]\) we define \(h:\mathsf{E}M\to\mathsf{E}M^{\prime}\) as

\[h(e)\stackrel{\text{\tiny def}}{=}\left\{\begin{array}{ll}f^{\prime}(e)&\mbox{if $e\in\mathsf{E}D$}\\ m^{\prime}(x)&\mbox{if $x\in L^{\prime}$ and $e=m(x)$}\end{array}\right.\]

since \(x\) is unique by (1). For all \(x\in\mathsf{E}L\), if \(x\in L^{\prime}\) then \(h\circ m(x)=m^{\prime}(x)\), otherwise there is a \(z\in\mathsf{E}K\) such that \(x=l(z)\) and then \[h\circ m(x)=h\circ m\circ l(z)=h\circ f\circ k(z)=f^{\prime}\circ k(z)=m^{\prime}\circ l(z)=m^{\prime}(x),\] hence \(h\circ m=m^{\prime}\). It is obvious that \(h\circ f=f^{\prime}\) and that these two equations uniquely determine \(h\). Proving that \(h:M\to M^{\prime}\) is a morphism is straightforward.

Note that \(D\) is finite whenever \(M\) is finite. This proves that this gluing condition is also valid in \(\mathbf{FMonogr}\), and it is obviously also the case in \(\mathbf{SMonogr}\), \(O\)-\(\mathbf{Monogr}\) and \(O\)-\(\mathbf{SMonogr}\) for every set \(O\) of ordinals. It therefore characterizes the existence of \(D\), but by no means its unicity. It is well known (and easy to see) that in the category of sets one may find pushout complements with non-isomorphic sets \(D\); this is therefore also the case for monographs (since \(\mathbf{Sets}\simeq 1\)-\(\mathbf{Monogr}\)). An analysis of the proof of Lemma 8.2 (necessary condition) however yields that \(f[\mathsf{E}D]\) is invariant.

**Corollary 8.3**.: _If \(D\), \(k:K\to D\), \(f:D\to M\) is a pushout complement of \(l:K\to L\), \(m:L\to M\) then \(f[\mathsf{E}D]=\mathsf{E}M\backslash m[L^{\prime}]\), where \(L^{\prime}=\mathsf{E}L\backslash l[\mathsf{E}K]\)._

Proof.: Since \(m[\mathsf{E}L]\backslash(m\circ l)[\mathsf{E}K]\subseteq m[L^{\prime}]\) then \[m[\mathsf{E}L]\backslash m[L^{\prime}]\subseteq(m\circ l)[\mathsf{E}K]=(f\circ k)[\mathsf{E}K]\subseteq f[\mathsf{E}D].\] By property of pushouts we have \(\mathsf{E}M=f[\mathsf{E}D]\cup m[\mathsf{E}L]\), and by note 2 we have \(m[L^{\prime}]\cap f[\mathsf{E}D]=\varnothing\), hence \[\mathsf{E}M\backslash m[L^{\prime}]=(f[\mathsf{E}D]\backslash m[L^{\prime}])\cup(m[\mathsf{E}L]\backslash m[L^{\prime}])=f[\mathsf{E}D].\]

One way of ensuring the unicity of \(D\) (up to isomorphism) is to assume that \(l\) is injective: this is a well-known consequence of Theorem 4.17 (see [8]). However, an analysis of the construction of \(D\) in the proof of Lemma 8.2 (sufficient condition) shows that we can always build \(D\) as a submonograph of \(M\), hence we may as well assume that \(f\) is a canonical injection and avoid restrictions on \(l\). We therefore adopt a restricted notion of double pushout transformation compared to the standard one.

**Definition 8.4** (span rules \((l,r)\), matching \(m\), relation \(\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}\)).: _A span rule is a pair \((l,r)\) of morphisms \(l:K\to L\), \(r:K\to R\) with the same domain \(K\). A matching of \((l,r)\) in an object \(M\) is a morphism \(m:L\to M\).
For any object \(N\) we write \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\) if there exists a double-pushout diagram_ _where \(f\) is a canonical injection._ We easily see that the relation \(\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}\) is deterministic up to isomorphism. **Corollary 8.5**.: \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\) _and \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N^{\prime}\) entail \(N\simeq N^{\prime}\)._ Proof.: We have two pushout complements \(k:K\to D\), \(f:D\hookrightarrow M\) and \(k^{\prime}:K\to D^{\prime}\), \(f^{\prime}:D^{\prime}\hookrightarrow M\) of \(m\), \(l\), hence by Corollary 8.3 \[\mathsf{E}D=f[\![\mathsf{E}D]\!]=\mathsf{E}M\backslash m[\![L^{\prime}]\!]=f^ {\prime}[\![\mathsf{E}D^{\prime}]\!]=\mathsf{E}D^{\prime}\] hence \(D=D^{\prime}\), \(f=f^{\prime}\), \(k=(f\circ k)|_{K}^{D}=(m\circ l)|_{K}^{D^{\prime}}=(f^{\prime}\circ k^{\prime })|_{K}^{D^{\prime}}=k^{\prime}\), and therefore \(N\simeq N^{\prime}\) by general property of pushouts. It is obvious by Theorem 4.4 and by the construction of \(D\) in Lemma 8.2 that, in the categories of Definition 3.5, there exists a \(N\) such that \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\) if and only if \(l\) and \(m\) satisfy the gluing condition. This means in particular that an edge \(e\) of \(M\) may be deleted only if it is explicitly marked for removal, i.e., if there is an edge \(x\in L^{\prime}\) such that \(m(x)=e\). All edges that are not marked for removal are guaranteed to be preserved. This conservative semantics for transformation rules is extremely safe but imposes a discipline of programming that may be tedious. As noted in Example 7.6, pushout of partial morphisms have a potential of removing edges. Since such pushouts always exist, they can be used to define transformations that are not restricted by the gluing condition. This is the idea of the single pushout method, that was initiated in [19] and fully developed in [20, 3]. **Definition 8.6** (partial rules \([r]\), relation \(\stackrel{{[r]}}{{\Longrightarrow}}_{m}\), rule \([l,r]\)).: _A partial rule is a partial morphism \([r]:L\hook K\to R\). A matching of \([r]\) in a monograph \(M\) is a morphism \(m:L\to M\). For any monograph \(N\) we write \(M\stackrel{{[r]}}{{\Longrightarrow}}_{m}N\) if there exist partial morphisms \([g]\) and \([n]\) such that \(([n]\,,[g]\,,N)\) is a pushout of \((L,[r]\,,[m])\)._ _To any span rule \((l,r)\) where \(l:K\to L\), \(r:K\to R\) we associate a partial rule \([l,r]\stackrel{{\mathrm{def}}}{{=}}[r^{\prime}]:L\hook l(K)\to R^{\prime}\) such that \((q,r^{\prime},R^{\prime})\) is a pushout of \((K,r,l^{\prime})\) where \(l^{\prime}\stackrel{{\mathrm{def}}}{{=}}l^{l(K)}_{K}\)._ The relation \(\stackrel{{[r]}}{{\Longrightarrow}}_{m}\) is also deterministic up to isomorphism since \(N\) is obtained as a pushout. Obviously a morphism \(m\) is a matching of \((l,r)\) in \(M\) iff it is a matching of \([l,r]\) in \(M\). The partial rule \([l,r]\) is designed to perform the same transformation as the span rule \((l,r)\). We prove that this is indeed the case when the gluing condition holds. **Theorem 8.7**.: _For any span rule \((l,r)\), monographs \(M\), \(N\) and matching \(m\) of \((l,r)\) in \(M\), we have_ \[M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\ \ \text{iff}\ \ M \stackrel{{[l,r]}}{{\Longrightarrow}}_{m}N\text{ and }\mathrm{GC}(l,m).\] Proof.: Let \(R^{\prime}\), \(l^{\prime}\), \(q\) and \(r^{\prime}\) be as in Definition 8.6. 
We first compute the pushout of \([l,r]\) and \([m]\) according to the construction in Lemma 7.5, by assuming the gluing condition \(\mathrm{GC}(l,m)\) and that \(D\subseteq M\), \(k:K\to D\), \(f:D\hookrightarrow M\) is a pushout complement of \(l\), \(m\). Let \(I\) be the greatest submonograph of \(l(K)\cap L\) such that \(r^{\prime-1}(r^{\prime}(I))=I\) and \(m^{-1}(m(I))=I\). By \(\mathrm{GC}(l,m)\) (1) we have for all \(x\in\mathsf{E}L\) that \(m(x)\in m[l[\mathsf{E}K]]\) entails \(x\notin L^{\prime}=\mathsf{E}L\backslash l[\mathsf{E}K]\), i.e., \(x\in l[\mathsf{E}K]\), hence \(m^{-1}(m(l(K)))\subseteq l(K)\) and since the reverse inclusion is always true we get \(I=l(K)\). Hence the greatest monograph \(X\subseteq R^{\prime}\) such that \(r^{\prime-1}(X)\subseteq I\) is \(R^{\prime}\). Let \(Y\) be the greatest submonograph of \(M\) such that \(m^{-1}(Y)\subseteq l(K)\), this entails \(m^{-1}[\mathsf{E}Y]\cap L^{\prime}=\varnothing\), hence \(\mathsf{E}Y\cap m[L^{\prime}]=\varnothing\) and by Corollary 8.3\(Y\subseteq f(D)=D\). Conversely, for all \(x\in m^{-1}[\mathsf{E}D]=m^{-1}[\mathsf{E}M\backslash m[L^{\prime}]]\) we have \(m(x)\notin m[L^{\prime}]\), hence by \(\operatorname{GC}(l,m)\) (1) \(x\notin L^{\prime}\) and thus \(x\in l[\mathsf{E}K]\), so that \(m^{-1}(D)\subseteq l(K)\). Hence \(D\subseteq Y\) and we get \(Y=D\). The pushout of \([l,r]\) and \([m]\) is therefore obtained from the pushout of \(r^{\prime}\) and \(m^{\prime}\stackrel{{\text{\tiny def}}}{{=}}m|_{l(K)}^{D}\). Besides, we have \(m^{\prime}\circ l^{\prime}=(m\circ l)|_{K}^{D}=(f\circ k)|_{K}^{D}=k\). _Sufficient condition._ We assume \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\) and the diagram in Definition 8.4. By Lemma 8.2 we have \(\operatorname{GC}(l,m)\). By the above we get \((g\circ m^{\prime})\circ l^{\prime}=g\circ k=n\circ r\), and since \((q,r^{\prime},R^{\prime})\) is a pushout of \((K,r,l^{\prime})\) then there exists a unique \(n^{\prime}:R^{\prime}\to N\) such that \(n^{\prime}\circ r^{\prime}=g\circ m^{\prime}\) and \(n^{\prime}\circ q=n\). Since \((n,g,N)\) is a pushout of \((K,r,k)\) then by pushout decomposition \((n^{\prime},g,N)\) is a pushout of \((l(K),r^{\prime},m^{\prime})\), hence \(M\stackrel{{[l,r]}}{{\Longrightarrow}}_{m}N\). _Necessary condition._ By \(\operatorname{GC}(l,m)\) and Lemma 8.2 we can build a pushout complement \(D\subseteq M\), \(k:K\to D\), \(f:D\hookrightarrow M\) of \(l\), \(m\). By \(M\stackrel{{[l,r]}}{{\Longrightarrow}}_{m}N\) and the above there is a pushout \((n^{\prime},g,N)\) of \((l(K),r^{\prime},m^{\prime})\), hence by pushout composition \((N,n^{\prime}\circ q,g)\) is a pushout of \((K,r,k)\), hence \(M\stackrel{{(l,r)}}{{\Longrightarrow}}_{m}N\). Note that any partial rule \([r]:L\hookhook K\to R\) can be expressed as \([r]=[j,r]\) where \(j:K\hookrightarrow L\) is the canonical injection. Thus, provided the gluing condition holds, single and double pushout transformations are equivalent. Single pushout transformations are more expressive since they also apply when the gluing condition does not hold, as illustrated in the following example. **Example 8.8**.: _We consider the following "loop removing" rule:_ _and try to apply it to monograph \(\mathrm{T}_{\infty}\) from Example 6.10. There is a unique morphism \(m:L\to\mathrm{T}_{\infty}\) but it does not satisfy the gluing condition. Indeed, we see that condition (2) is breached since \(1\mid\mathrm{T}_{\infty}(2)\) and \(1\in m[L^{\prime}]\) and yet \(2\notin m[L^{\prime}]\). 
Hence the only way to apply the rule to \(\mathrm{T}_{\infty}\) is through a single pushout transformation._

_For this we first compute the rule \([\![l,r]\!]\). Since \(l\) is the canonical injection of \(l(K)=K\) into \(L\), then \(r^{\prime}=r\) (and \(R^{\prime}=R=K\)) and hence \([\![l,r]\!]=[\![r]\!]:L\hookleftarrow K\to R\). The monograph \(D\) is the greatest one such that \(D\subseteq\mathrm{T}_{\infty}\) and \(m^{-1}(D)\subseteq l(K)\), hence obviously \(D=\{(0,\varepsilon)\}\). Since \(l(K)\) and \(R\) are both isomorphic to \(D\) then so is the result of the transformation, i.e., a monograph reduced to a single node._

## 9 Attributed Typed Monographs

**Definition 9.1** (categories \(\mathbf{ATM}(T,\Sigma)\)).: _Given a monograph \(T\) and a signature \(\Sigma:\Omega\to S^{<\omega}\), an attributed typed monograph (or ATM) over \(T\), \(\Sigma\) is a pair \((a,\mathcal{A})\) of an object \(a:A\to T\) of \(\mathbf{Monogr}\backslash T\) and a \(\Sigma\)-algebra \(\mathcal{A}\) such that \(\mathcal{A}_{s}=(\mathsf{A}_{T}a)_{s}\) for all \(s\in S\cap\mathsf{E}T\)._

_A morphism \(m\) from \((a,\mathcal{A})\) to an ATM \((b,\mathcal{B})\) over \(T\), \(\Sigma\) is a pair \((\vec{m},\dot{m})\) of a morphism \(\vec{m}:a\to b\) in \(\mathbf{Monogr}\backslash T\) and a \(\Sigma\)-homomorphism \(\dot{m}:\mathcal{A}\to\mathcal{B}\) such that \(\dot{m}_{s}=(\mathsf{A}_{T}\vec{m})_{s}\) for all \(s\in S\cap\mathsf{E}T\)._

_Let \(1_{(a,\mathcal{A})}\stackrel{\text{\tiny def}}{=}(1_{a},1_{\mathcal{A}})\) and for any morphism \(m^{\prime}:(b,\mathcal{B})\to(c,\mathcal{C})\) let \(m^{\prime}\circ m\stackrel{\text{\tiny def}}{=}(\vec{m}^{\prime}\circ\vec{m},\dot{m}^{\prime}\circ\dot{m})\) that is a morphism from \((a,\mathcal{A})\) to \((c,\mathcal{C})\). Let \(\mathbf{ATM}(T,\Sigma)\) be the category of ATMs over \(T\), \(\Sigma\) and their morphisms._

The edges that are considered as attributes are not the nodes of a specific sort as in E-graphs; they are characterized by the fact that they are typed by an edge of \(T\) that happens to be also a sort of the data type signature \(\Sigma\), i.e., an element of \(S\). This is consistent with the typed attributed E-graphs of [2]. We therefore see that the signatures \(\mathsf{S}T\) and \(\Sigma\) share sorts but we shall consider them as otherwise distinct, in particular w.r.t. operator names. To account for this property we need the following construction.

**Definition 9.2** (signature \(\Sigma\dot{+}\Sigma^{\prime}\)).: _Given two signatures \(\Sigma:\Omega\to S^{<\omega}\) and \(\Sigma^{\prime}:\Omega^{\prime}\to S^{\prime<\omega}\), let \((\Omega+\Omega^{\prime},\mu_{1},\mu_{2})\) be the coproduct of \((\Omega,\Omega^{\prime})\) in \(\mathbf{Sets}\) and \(j\), \(j^{\prime}\) be the canonical injections of \(S\), \(S^{\prime}\) respectively into \(S\cup S^{\prime}\), let \(\Sigma\dot{+}\Sigma^{\prime}:\Omega+\Omega^{\prime}\to(S\cup S^{\prime})^{<\omega}\) be the unique function such that \((\Sigma\dot{+}\Sigma^{\prime})\circ\mu_{1}=j^{<\omega}\circ\Sigma\) and \((\Sigma\dot{+}\Sigma^{\prime})\circ\mu_{2}=j^{\prime<\omega}\circ\Sigma^{\prime}\)._

We leave it to the reader to check that this construction defines a coproduct in the category \(\mathbf{Sig}_{\mathrm{srt}}\) and therefore that \(\Sigma_{1}\dot{\simeq}\Sigma_{2}\) and \(\Sigma_{1}^{\prime}\dot{\simeq}\Sigma_{2}^{\prime}\) entail \(\Sigma_{1}\dot{+}\Sigma_{1}^{\prime}\dot{\simeq}\Sigma_{2}\dot{+}\Sigma_{2}^{\prime}\). For the sake of simplicity we will assume in the sequel that \(\mathsf{S}T\) and \(\Sigma\) have no operator name in common, thus assimilate \(\Omega_{T}+\Omega\) to \(\Omega_{T}\cup\Omega\) and omit the canonical injections, so that \(\mathsf{S}T=(\mathsf{S}T\dot{+}\Sigma)|_{\Omega_{T}}^{(\mathsf{E}T)^{<\omega}}\) and \(\Sigma=(\mathsf{S}T\dot{+}\Sigma)|_{\Omega}^{S^{<\omega}}\).
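As a small illustration of Definition 9.2 (again a hypothetical Python encoding, not part of the formal development), a signature can be represented as a dictionary mapping each operator name to its tuple of sorts; the coproduct then tags operator names so that they become disjoint, while sorts are united simply by being shared values. In the toy data below the data signature shares the sort nodes with \(\mathsf{ST}_{\mathrm{g}}\), which is the situation of Definition 9.1.

```python
# Hypothetical encoding: a signature maps each operator name to its tuple of
# sorts (argument sorts followed by the result sort; for a graph structure,
# domain sort then range sort).

def sig_coproduct(sig1, sig2):
    """Disjoint union of operator names; sorts are united as they stand."""
    combined = {("1", o): sorts for o, sorts in sig1.items()}
    combined.update({("2", o): sorts for o, sorts in sig2.items()})
    return combined

# ST_g for the type monograph of graphs, and a toy data signature that shares
# the sort "nodes", so that nodes carry the attribute structure.
ST_g = {"[edges.0]": ("edges", "nodes"), "[edges.1]": ("edges", "nodes")}
Sigma = {"zero": ("nodes",), "succ": ("nodes", "nodes")}
print(sig_coproduct(ST_g, Sigma))
```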
**Definition 9.3** (functor \(\mathsf{D}:\mathbf{ATM}(T,\Sigma)\to(\mathsf{S}T\dot{+}\Sigma)\)-\(\mathbf{Alg}\)).: _For every signature \(\Sigma:\Omega\to S^{<\omega}\) and monograph \(T\) such that \(\Omega_{T}\cap\Omega=\varnothing\), let \(\Sigma^{\prime}\stackrel{\text{\tiny def}}{=}\mathsf{S}T\dot{+}\Sigma\) and \(\mathsf{D}:\mathbf{ATM}(T,\Sigma)\to\Sigma^{\prime}\)-\(\mathbf{Alg}\) be the functor defined as follows: for every object \((a,\mathcal{A})\) of \(\mathbf{ATM}(T,\Sigma)\) let \(\mathsf{D}(a,\mathcal{A})\) be the \(\Sigma^{\prime}\)-algebra \(\mathcal{A}^{\prime}\) defined by_

* \(\mathcal{A}^{\prime}_{s}\stackrel{\text{\tiny def}}{=}\mathcal{A}_{s}\) _for all_ \(s\in S\) _and_ \(\mathcal{A}^{\prime}_{e}\stackrel{\text{\tiny def}}{=}(\mathsf{A}_{T}a)_{e}\) _for all_ \(e\in\mathsf{E}T\)_,_
* \(o^{\mathcal{A}^{\prime}}\stackrel{\text{\tiny def}}{=}o^{\mathcal{A}}\) _for all_ \(o\in\Omega\) _and_ \(\llbracket e\cdot\iota\rrbracket^{\mathcal{A}^{\prime}}\stackrel{\text{\tiny def}}{=}\llbracket e\cdot\iota\rrbracket^{\mathsf{A}_{T}a}\) _for all_ \(\llbracket e\cdot\iota\rrbracket\in\Omega_{T}\)_._

_For every morphism \(m:(a,\mathcal{A})\to(b,\mathcal{B})\), let \((\mathsf{D}m)_{s}\stackrel{\text{\tiny def}}{=}\dot{m}_{s}\) for all \(s\in S\) and \((\mathsf{D}m)_{e}\stackrel{\text{\tiny def}}{=}(\mathsf{A}_{T}\vec{m})_{e}\) for all \(e\in\mathsf{E}T\)._

It is straightforward to check that \(\mathsf{D}m\) is a \(\Sigma^{\prime}\)-homomorphism from \(\mathsf{D}(a,\mathcal{A})\) to \(\mathsf{D}(b,\mathcal{B})\), and hence that \(\mathsf{D}\) is a functor.

**Theorem 9.4**.: \(\mathsf{D}\) _is an equivalence from \(\mathbf{ATM}(T,\Sigma)\) to \((\mathsf{S}T\dot{+}\Sigma)\)-\(\mathbf{Alg}\)._

Proof.: It is easy to see that \(\mathsf{D}\) is full and faithful by the same property of \(\mathsf{A}_{T}\). We prove that \(\mathsf{D}\) is isomorphism-dense. For any \(\Sigma^{\prime}\)-algebra \(\mathcal{B}^{\prime}\), let \(\mathcal{B}\) (resp. \(\mathcal{C}\)) be its restriction to \(\Sigma\) (resp. \(\mathsf{S}T\)). Since \(\mathsf{A}_{T}\) is isomorphism-dense by Theorem 6.8, there exist an object \(a:A\to T\) in \(\mathbf{Monogr}\backslash T\) and an \(\mathsf{S}T\)-isomorphism \(h:\mathsf{A}_{T}a\to\mathcal{C}\). We define simultaneously a set \(\mathcal{A}_{s}\) and a function \(k_{s}:\mathcal{A}_{s}\to\mathcal{B}_{s}\) for all \(s\in S\) by taking \(\mathcal{A}_{s}\stackrel{\text{\tiny def}}{=}\mathcal{B}_{s}\) and \(k_{s}\stackrel{\text{\tiny def}}{=}1_{\mathcal{A}_{s}}\) if \(s\in S\backslash\mathsf{E}T\), and \(\mathcal{A}_{s}\stackrel{\text{\tiny def}}{=}(\mathsf{A}_{T}a)_{s}\) and \(k_{s}\stackrel{\text{\tiny def}}{=}h_{s}\) if \(s\in S\cap\mathsf{E}T\) (in this case we have \(\mathcal{C}_{s}=\mathcal{B}^{\prime}_{s}=\mathcal{B}_{s}\)). We then define for every \(o\in\Omega\) the function \(o^{\mathcal{A}}\stackrel{\text{\tiny def}}{=}k_{\mathrm{Rng}(o)}^{-1}\circ o^{\mathcal{B}}\circ k_{\mathrm{Dom}(o)}:\mathcal{A}_{\mathrm{Dom}(o)}\to\mathcal{A}_{\mathrm{Rng}(o)}\), and the \(\Sigma\)-algebra \(\mathcal{A}\stackrel{\text{\tiny def}}{=}\bigl{(}(\mathcal{A}_{s})_{s\in S},(o^{\mathcal{A}})_{o\in\Omega}\bigr{)}\). By construction \((a,\mathcal{A})\) is obviously an ATM over \(T,\Sigma\) and \(k\stackrel{\text{\tiny def}}{=}(k_{s})_{s\in S}\) is a \(\Sigma\)-isomorphism \(k:\mathcal{A}\to\mathcal{B}\).
Let \(\mathcal{A}^{\prime}\stackrel{{\text{\tiny def}}}{{=}}\mathsf{D}(a,\mathcal{A})\), \(h^{\prime}_{s}\stackrel{{\text{\tiny def}}}{{=}}k_{s}:\mathcal{A}^ {\prime}_{s}\to\mathcal{B}^{\prime}_{s}\) for all \(s\in S\) and \(h^{\prime}_{e}\stackrel{{\text{\tiny def}}}{{=}}h_{e}:\mathcal{A} ^{\prime}_{e}\to\mathcal{B}^{\prime}_{e}\) for all \(e\in\mathsf{E}T\), since \(h_{s}=k_{s}\) for all \(s\in S\cap\mathsf{E}T\) then \(h^{\prime}\stackrel{{\text{\tiny def}}}{{=}}(h^{\prime}_{s})_{s \in S\cup\mathsf{E}T}\) is well-defined. It is then easy to see that \(h^{\prime}:\mathcal{A}^{\prime}\to\mathcal{B}^{\prime}\) is a \(\Sigma^{\prime}\)-isomorphism, so that \(\mathsf{D}(a,\mathcal{A})\simeq\mathcal{B}^{\prime}\). Theorem 9.4 generalizes3[2, Theorem 11.3] that establishes an isomorphism between the category of attributed E-graphs typed by an attributed E-graph \(ATG\) and the category of algebras of a signature denoted \(\operatorname{AGSIG}(ATG)\). In particular Theorem 11.3 of [2] requires the hypothesis that \(\operatorname{AGSIG}(ATG)\) should be _well-structured_, which means that if there is an operator name of \(\mathsf{S}T\) whose domain sort is \(s\) then \(s\) is not a sort of the data type signature \(\Sigma\). Obviously this is equivalent to requiring that only nodes of \(T\) can be considered as sorts of \(\Sigma\) and is linked to the fact that only values nodes of E-graphs are supposed to hold attributes. Since we are not restricted to E-graphs there is no need to require that attributes should only be nodes. This has an interesting consequence: **Corollary 9.5**.: _For every signatures \(\Sigma\), \(\Sigma^{\prime}\) and graph structure \(\Gamma\) such that \(\Sigma^{\prime}=\Gamma\dotplus\Sigma\) there exists a monograph \(T\) such that \(\Sigma^{\prime}\)-\(\mathbf{Alg}\approx\mathbf{ATM}(T,\Sigma)\)._ Proof.: By Lemma 6.3 there exists a monograph \(T\) such that \(\mathsf{S}T\dot{\simeq}\,\Gamma\), hence \(\mathsf{S}T\dot{+}\Sigma\dot{\simeq}\,\Gamma\dot{+}\Sigma=\Sigma^{\prime}\) and therefore \(\Sigma^{\prime}\)-\(\mathbf{Alg}\simeq(\mathsf{S}T\dot{+}\Sigma)\)-\(\mathbf{Alg}\approx\mathbf{ATM}(T,\Sigma)\). Obviously, any signature \(\Sigma^{\prime}\) can be decomposed as \(\Gamma\dot{+}\Sigma\) by putting some of its monadic operators (and the sorts involved in these) in \(\Gamma\) and all other operators in \(\Sigma\). And then any \(\Sigma^{\prime}\)-algebra can be represented as an ATM over \(T,\Sigma\), where \(\mathsf{S}T\dot{\simeq}\,\Gamma\). This opens the way to applying graph transformations to these algebras, but this requires some care since it is not generally possible to remove or add elements to a \(\Sigma^{\prime}\)-algebra and obtain a \(\Sigma^{\prime}\)-algebra as a result. The approach adopted in [2, Definition 11.5] is to restrict the morphisms used in span rules to a class of monomorphisms that are extensions of \(\Sigma\)-isomorphisms to \((\Gamma\dot{+}\Sigma)\)-homomorphisms. It is then possible to show [2, Theorem 11.11] that categories of typed attributed E-graphs are adhesive HLR categories (a notion that generalizes Definition 4.13, see [24]) w.r.t. this class of monomorphisms. A similar result holds on categories of ATMs. For the sake of simplicity, and since rule-based graph transformations are unlikely to modify attributes such as booleans, integers or strings (and if they do they should probably not be considered as graph transformations), we will only consider morphisms that leave the data type algebra unchanged, element by element. 
This leaves the possibility to transform the edges whose sort is in \(\Gamma\) but not in \(\Sigma\). **Definition 9.6** (categories \(\mathbf{ATM}(T,\mathcal{A})\), functor \(\mathsf{U}\), \(f\) stabilizes \(\mathcal{A}\)).: _For any \(\Sigma\)-algebra \(\mathcal{A}\) let \(\mathbf{ATM}(T,\mathcal{A})\) be the subcategory of \(\mathbf{ATM}(T,\Sigma)\) restricted to objects \((a,\mathcal{A})\) and morphisms \((f,1_{\mathcal{A}})\)._ _The forgetful functor \(\mathsf{U}:\mathbf{ATM}(T,\mathcal{A})\to\mathbf{Sets}\) is defined by \(\mathsf{U}(a,\mathcal{A})\stackrel{{\mathrm{\tiny def}}}{{=}} \mathsf{E}A\), where \(a:A\to T\) and \(\mathsf{U}(f,1_{\mathcal{A}})\stackrel{{\mathrm{\tiny def}}}{{=}} \mathsf{E}f\) (usually denoted \(f\))._ _By abuse of notation we write \(\mathcal{A}\) for the set \(\bigcup_{s\in S\cap\mathsf{E}T}\mathcal{A}_{s}\). A function \(f\) stabilizes \(\mathcal{A}\) if \(f^{-1}[x]=\{x\}\) for all \(x\in\mathcal{A}\)._ The proof that the categories \(\mathbf{ATM}(T,\mathcal{A})\) are adhesive will only be sketched below. The key point is the following lemma. **Lemma 9.7**.: _For all objects \((a,\mathcal{A})\), \((b,\mathcal{A})\) of \(\mathbf{ATM}(T,\mathcal{A})\) and morphism \(f:a\to b\) of \(\mathbf{Monogr}\backslash T\), we have_ \[(f,1_{\mathcal{A}}):(a,\mathcal{A})\to(b,\mathcal{A})\text{ is a morphism in }\mathbf{ATM}(T,\mathcal{A})\text{ \ iff \ }f\text{ stabilizes }\mathcal{A}.\] Proof.: For all \(s\in S\cap\mathsf{E}T\) we have \(\mathcal{A}_{s}=(\mathsf{A}_{T}a)_{s}=a^{-1}[s]\) and \(\mathcal{A}_{s}=b^{-1}[s]\). Since \(b\circ f=a\) then \(f^{-1}[\mathcal{A}_{s}]=f^{-1}[b^{-1}[s]]=a^{-1}[s]=\mathcal{A}_{s}\), hence \(f^{-1}[\mathcal{A}]=\mathcal{A}\). Thus \(f\) stabilizes \(\mathcal{A}\) iff \(f(x)=x\) for all \(x\in\mathcal{A}\) iff \((\mathsf{A}_{T}f)_{s}=f|_{\mathcal{A}_{s}}^{\mathcal{A}_{s}}=\mathrm{Id}_{ \mathcal{A}_{s}}=(1_{\mathcal{A}})_{s}\) for all \(s\in S\cap\mathsf{E}T\) iff \((f,1_{\mathcal{A}})\) is a morphism in \(\mathbf{ATM}(T,\mathcal{A})\). Hence the property of stabilization characterizes the difference between morphisms in \(\mathbf{Monogr}\backslash T\) and morphisms in \(\mathbf{ATM}(T,\mathcal{A})\). Besides, it is well-known how pushouts and pullbacks in \(\mathbf{Monogr}\backslash T\) can be constructed from those in \(\mathbf{Monogr}\), and we have seen that these can be constructed from those in \(\mathbf{Sets}\). But then it is quite obvious that in \(\mathbf{Sets}\), starting from a span of functions that stabilize \(\mathcal{A}\), it is always possible to find as pushout a cospan of functions that stabilize \(\mathcal{A}\). Hence not only does \(\mathbf{ATM}(T,\mathcal{A})\) have pushouts, but these are preserved by the functor \(\mathsf{U}\). A similar result holds for pullbacks, and a construction similar to Corollary 4.7 yields that \(\mathsf{U}\) also preserves monomorphisms. Finally, we see that \(\mathsf{U}\) reflects isomorphisms since \(f^{-1}\) stabilizes \(\mathcal{A}\) whenever \(f\) does. We conclude as in Theorem 4.17. **Theorem 9.8**.: \(\mathbf{ATM}(T,\mathcal{A})\) _is adhesive._ This result does not mean that all edges that are not attributes can be freely transformed. Their adjacencies to or from attributes may impose constraints that only few morphisms are able to satisfy. **Example 9.9**.: _Let \(\Sigma\) be the signature with no operation name and one sort \(\boldsymbol{s}\), and \(\mathcal{A}\) be the \(\Sigma\)-algebra defined by \(\mathcal{A}_{s}=\{a,b\}\). 
We consider the type monograph \(T=\{(e,\boldsymbol{s}),(\boldsymbol{s},e)\}\). A monograph typed by \(T\) has any number (but at least one) of edges typed by \(e\) that must be adjacent either to \(a\) or \(b\), and two edges typed by \(\boldsymbol{s}\), namely \(a\) and \(b\), that must be adjacent to either the same edge \(x\) typed by \(e\), which yields two classes of monographs_ _(to which may be added any number of edges typed by \(e\) and adjacent to either \(a\) or \(b\)), or \(a\) and \(b\) are adjacent to \(y\) and \(z\) respectively, and we get four more classes:_ _The function \(y,z\mapsto x\) is a morphism from these last two monographs to the two monographs above (respectively). There are no other morphisms between monographs from distinct classes. We therefore see that in the category \(\mathbf{ATM}(T,\mathcal{A})\) it is possible to add or remove edges typed by \(e\) to which \(a\) or \(b\) are not adjacent, but there is no way to remove the edges \(y\) and \(z\) (because this would require a rule with a left morphism from an ATM without \(y\) and \(z\) to an ATM with \(y\) and \(z\), and there is no such morphism), though they are not attributes._ _Besides, we see that this category has no initial object, no terminal object, no products nor coproducts._ Conclusion Monographs generalize standard notions of directed graphs by allowing edges of any length with free adjacencies. An edge of length zero represents a node, and if it has greater length it can be adjacent to any edge, including itself. In "monograph" the prefix mono- is justified by this unified view of nodes as edges and of edges with unrestricted adjacencies that provide formal conciseness (morphisms are functions characterized by a single equation); the suffix -graph is justified by the correspondence (up to isomorphism) between finite \(\omega\)-monographs and their drawings. Monographs are universal with respect to graph structures and the corresponding algebras, in the sense that monographs are equivalent to graph structures extended with suitable ordering conventions on their operator names, and that categories of typed monographs are equivalent to the corresponding categories of algebras. Since many standard or exotic notions of directed graphs can be represented as monadic algebras, they can also be represented as typed monographs, but these have two advantages over graph structures: they provide an orientation of edges and they (consequently) dispense with operator names. Algebraic transformations of monographs are similar to those of standard graphs. Typed monographs may therefore be simpler to handle than graph structured algebras, as illustrated by the results of Section 9. The representation of oriented edges as sequences seems more natural than their standard representation as unstructured objects that have images by a bunch of functions. Thus type monographs emerge as a natural way of specifying graph structures.
2308.03452
Complex-plane singularity dynamics for blow up in a nonlinear heat equation: analysis and computation
Blow-up solutions to a heat equation with spatial periodicity and a quadratic nonlinearity are studied through asymptotic analyses and a variety of numerical methods. The focus is on the dynamics of the singularities in the complexified space domain. Blow up in finite time is caused by these singularities eventually reaching the real axis. The analysis provides a distinction between small and large nonlinear effects, as well as insight into the various time scales on which blow up is approached. It is shown that an ordinary differential equation with quadratic nonlinearity plays a central role in the asymptotic analysis. This equation is studied in detail, including its numerical computation on multiple Riemann sheets, and the far-field solutions are shown to be given at leading order by a Weierstrass elliptic function.
M. Fasondini, J. R. King, J. A. C. Weideman
2023-08-07T10:14:10Z
http://arxiv.org/abs/2308.03452v1
Complex-plane singularity dynamics for blow up in a nonlinear heat equation: analysis and computation ###### Abstract Blow-up solutions to a heat equation with spatial periodicity and a quadratic nonlinearity are studied through asymptotic analyses and a variety of numerical methods. The focus is on the dynamics of the singularities in the complexified space domain. Blow up in finite time is caused by these singularities eventually reaching the real axis. The analysis provides a distinction between small and large nonlinear effects, as well as insight into the various time scales on which blow up is approached. It is shown that an ordinary differential equation with quadratic nonlinearity plays a central role in the asymptotic analysis. This equation is studied in detail, including its numerical computation on multiple Riemann sheets, and the far-field solutions are shown to be given at leading order by a Weierstrass elliptic function. _Keywords_: Nonlinear blow up; Complex singularities; Matched asymptotic expansions; Fourier spectral methods; Pade approximation AMS classification scheme numbers: 35B44, 32S99, 35C20, 41A21, 65N99 ## 1 Introduction The nonlinear heat equation (NLH) \[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+u^{2}, \tag{1}\] is known to exhibit finite-time blow up; see [1, 2, 3, 4, 5] for reviews and Figure 1 for typical solution profiles. The NLH is a model for reaction-diffusion processes and has appeared in numerous applications including fluid dynamics [6, 7, 8], chemical kinetics [9, 10, 11] and biology [12]. The blow-up behaviour of solutions to the NLH and more general nonlinear parabolic PDEs has been studied extensively on the real line; see the previously mentioned review papers as well as [13, 14, 15]. The novel approach of the present paper is to investigate in detail how the blow up relates to singularity dynamics of the solution when it is viewed as an analytic function in the complex \(z\) plane, with \(z=x+iy\) (only the spatial variable is complexified here). We adopt a combination of asymptotic and numerical methods with the goal of obtaining a comprehensive description of the complex-plane behaviour leading to blow up. The complex-analytic viewpoint for the NLH was introduced in [16], which was inspired by similar approaches to the Burgers equation [17, 18, 19, 20] and the Korteweg-de Vries equation [21]. The particular example of [16] involved a real-valued, \(2\pi\)-periodic solution in \(x\), associated with the initial condition \[u(x,0)=\alpha\cos x, \tag{2}\] with only \(\alpha=1\) considered in that paper. Here, we consider all \(\alpha>0\) and will thus be able to distinguish between initially small and large nonlinear effects. Viewed in the complex \(z\) plane, the initial condition (2) is an entire function. For small \(t>0\), singularities are born at infinity and the ones closest to the real axis, which we locate at \(y=\mbox{Im}\;z=\pm\sigma(t)\), rapidly move along the imaginary axis towards the real axis, with \(u\) real on the imaginary axis for \(|y|<\sigma(t)\). Figure 2 shows the position of the closest singularity on the positive imaginary axis for the solutions in Figure 1. Since the solution in the upper and lower half planes are equal up to complex conjugation (\(u(\overline{z},t)=\overline{u(z,t)}\), \(z=x+iy\)), we shall consider the solution only in the upper half plane. The singularity dynamics in Figure 2 exemplify the competing effects of diffusion and nonlinearity in the NLH. 
For large enough \(\alpha\), as in the top left frames of Figures 1 and 2, the focussing nonlinear term dominates diffusion in the sense that the singularity approaches the real axis monotonically, albeit not at constant speed. The solution profile on the real axis steepens as the singularity approaches the real axis, with point blow up occurring when the two closest singularities in the upper and lower half planes collide on the real axis. If \(\alpha\) is small enough, as in the bottom row of Figures 1 and 2, diffusion dominates initially and we see that the solution profile flattens while the singularities reverse direction after zooming in from infinity to move away from the real Figure 1: Solution profiles for blow up in the PDE (1) for decreasing values of \(\alpha\) in the initial condition (2). (Note that the scales on the time axes are different in each case.) In the \(\alpha=0.5\) case blow up appears to be uniform, however, in section 6 we shall show that in fact point blow up occurs at \(x=0\). axis. However, even as the solution flattens, its mean increases1 and thus eventually nonlinearity reasserts itself, the solution profile becomes steeper, the singularity changes direction again and rapidly moves towards the real axis and point blow up ensues. For the case \(\alpha\approx 1.2\) in the top-right frame of Figure 2 there is a balance between nonlinearity and diffusion in which the singularity is near stationary for a while after zooming in from infinity and before zooming in again as point blow-up occurs. Footnote 1: The mean of the solution, \(\langle u\rangle\), satisfies \(\frac{d}{dt}\langle u\rangle=\langle u^{2}\rangle\), as can be derived from (1) using integration by parts. With the initial data (2), the solution has a point blow up at \(x=0\) for any \(\alpha>0\). The blow up may occur at other points, however, or not at all. For example, note that \(\alpha<0\) corresponds to a translation of the solution \(x\mapsto x\pm\pi\) and thus shifts the blow-up location from the maximum of the initial data (2) at \(x=0\) to \(x=\pm\pi\). On the other hand, we shall also consider \[u(x,0)=\alpha\cos x+\beta, \tag{3}\] briefly in an appendix. If \(\beta\) is sufficiently large negative for a fixed \(\alpha\), then the solution does not blow up but extinguishes ('heat death' occurs) according to \(u\sim-1/t\), \(t\to\infty\); see the left frame of Figure 3. Otherwise, point blow up occurs at (i) \(x=0\) if \(\alpha>0\) and at (ii) \(x=\pm\pi\) if \(\alpha<0\). For an example of (i), we perturb the initial data of the heat-death solution in Figure 3 and obtain the solution in the right frame of Figure 3. For the heat-death solution, the singularities zoom in from infinity, change direction and then move back (linearly in time, as we shall show) to infinity, see Figure 4. This figure also shows that when perturbing the heat-death solution, the singularities can switch direction a second time and move towards the real axis, which leads to blow up. In our analysis of the solution in the blow-up limit, we shall also briefly consider the blow-up scenarios, and associated singularity dynamics, for initial data with two local maxima. Figure 5 shows two possibilities: blow up occurs at two distinct points for even initial data with two sufficiently separated and concentrated peaks (left frame) and (right frame) two maxima are sufficiently close to diffuse and combine into a single maximum, and then blow up occurs at a single point. 
We shall also consider the non-generic blow up that occu Figure 2: Singularity locations at \(\pm iy\) as functions of time, for the solutions shown in Figure 1. combine precisely at the blow-up time, which represents the borderline between the cases shown in Figure 5. Other complex-plane studies of the NLH have been reported in [22, 23]. In both papers the equation was studied numerically in the complex \(t\)-plane. Related studies reported in [24] focused on the case of nearly flat initial data \[u(x,0)=\frac{1}{\alpha-\epsilon\cos x},\hskip 28.452756pt0<\epsilon\ll\alpha. \tag{4}\] The properties of blow up were investigated but singularity dynamics in the complex \(x\)-plane were only mentioned briefly. The paper consists of seven sections and six appendices, the latter containing the bulk of the more technical analyses. In section 2 we review two numerical approaches based on Fourier analysis for locating and classifying the singularities alluded to above. In section 3 we complement the numerical investigation by a local analysis that characterises the type of singularities admitted by the equation. Sections 4-6 are devoted to the analysis and numerical verification of the dynamics of the singularities and the relation of the NLH solution in the neighbourhood of the singularities to certain nonlinear ODE solutions. The ODE solutions are studied in detail in Appendix B. Section 4 and Appendix A concern the small-time limit \(t\to 0^{+}\) for all amplitudes \(\alpha>0\). Section 5 and Figure 4: Singularity location of the solution in the left frame (blue curve) and right frame (red curve) of Figure 3. Figure 3: Solutions to the NLH for the initial data (3) with \(\beta=-5\), \(\alpha=7.856\) (left) and \(\beta=-5\), \(\alpha=7.892\) (right). On the left, heat death occurs, but with a small perturbation in the initial condition the solution blows up, same as in Figure 1. Appendix C consider the large-amplitude limit, \(\alpha\to\infty\), and section 6 and Appendix D treat the small amplitude limit, \(\alpha\to 0\). As Figures 1 and 2 indicate, the small-amplitude singularity dynamics are the most complicated. This will be borne out by the asymptotic analysis, in which a complete description requires five distinct time scales. To complete our study, in section 7 and Appendix E we consider the solution in the blow-up limit. Sections 4-7 focus on NLH solutions subject to the initial data (2) but the final appendix, Appendix F, considers NLH solutions corresponding to the more general initial condition (3). Throughout we shall reuse symbols with different meanings in different sections. ## 2 Numerical method and singularity tracking The numerical solutions reported in this paper were computed by a Fourier spectral method. Considering solutions \(2\pi\)-periodic in space, the approximation is based on the Fourier series \[u(x,t)=\sum_{k=-\infty}^{\infty}c_{k}(t)e^{ikx}. \tag{5}\] When this is substituted into (1) an infinite dynamical system for the evolution of the coefficients \(c_{k}(t)\) is obtained. Upon truncation to modes \(|k|\leq N\), a finite-dimensional system is obtained, which we integrated in time with the adaptive time-step functions ode45 and ode15s available in MATLAB. The main advantage of these integrators is the error control that they provide. By experimentation, the number of modes, \(2N+1\), was chosen sufficiently large so that all results presented here have fully converged. 
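As an illustration of this approach (our own sketch in Python, not the authors' MATLAB code), the truncated system can be integrated in pseudospectral form: the grid values of \(u\) are evolved, with the second derivative evaluated through the FFT, which is equivalent to the coefficient system for the \(c_k(t)\) up to aliasing. The grid size, amplitude and final time below are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fourier pseudospectral sketch for the NLH (1) with 2*pi-periodic data.
N = 128                                    # grid points on [0, 2*pi)
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
alpha = 2.0
u0 = alpha * np.cos(x)                     # initial condition (2)

def rhs(t, u):
    u_xx = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
    return u_xx + u ** 2                   # right-hand side of (1)

# For this amplitude blow up occurs somewhat after t ~ 1/alpha = 0.5, so
# stopping at t = 0.4 keeps the solution finite and well resolved.
sol = solve_ivp(rhs, (0.0, 0.4), u0, rtol=1e-10, atol=1e-12)
print("max u at t = 0.4:", sol.y[:, -1].max())
```

Here the adaptive integrator plays the role of ode45; an error-controlled solver of this kind suffices away from the blow-up time.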
For computing solutions close to blow up, a powerful strategy has been suggested in [15] and used in [24]. Namely, the substitution \(u=1/v\) converts (1) into an equivalent PDE whose solution approaches zero rather than infinity at the blow-up point. This method can maintain high accuracy for blow-up solutions for values on the order of \(u=1/v=\mathcal{O}(1/\varepsilon)\), where \(\varepsilon\) here denotes the machine precision, which can be made arbitrarily small with variable precision arithmetic but which is approximately \(10^{-16}\) for IEEE double precision. In the present paper, we use the Fourier spectral method in the variable \(v\) whenever \(u\) is strictly positive, otherwise, if \(u\) changes sign on \([-\pi,\pi]\), we solve the PDE in the \(u\) variable. The latter method is accurate for solution values only up to approximately \(10^{8}\) in double precision. If \(u\) changes sign and arbitrarily large solution values are required then more complicated numerical methods such as domain decomposition, dynamical rescaling and/or adaptive mesh refinement are needed; see [13, 14]. As for singularity tracking, there are two main numerical techniques. The first is based on the Figure 5: NLH solutions for two-peaked initial data defined by \(u(x,0)=\alpha\exp(\mu\cos(x+\delta)-\mu)+\alpha\exp(\mu\cos(x+\delta)-\mu)\). For both solutions, \(\alpha=6\), \(\mu=50\), while on the left, \(\delta=\pi/2\) and on the right, \(\delta=0.4\pi\). The red dots in the right frame indicate the maxima of the solution. examination of the rate of decay of the Fourier coefficients in (5). The other is based on Fourier-Pade methods for numerical analytic continuation. The first of these methods is described in the well-known paper by Sulem et al. [25]. Suppose at a fixed time \(t\) the coefficients \(c_{k}\) of the series (5) are available. If the singularity closest to the real axis is at \(z_{*}=x_{*}+iy_{*}\) and \[u(z)\sim C(z-z_{*})^{-\mu},\qquad z\to z_{*}, \tag{6}\] for some constant \(C\), then [25] \[|c_{k}|\sim|C|k^{\mu-1}e^{-ky_{*}},\qquad k\to+\infty. \tag{7}\] Given values of \(c_{k}\) for a range \(k\gg 1\), the values of \(\mu\) and \(y_{*}\) can be estimated by a linear least squares fit, after taking logarithms in (7). The singularity locations shown in Figure 2 were computed by this method. In the next section we proceed with a theoretical analysis of the singularities of (1), but we assume for now an expression of the form (6). The least-squares procedure then produces the estimates for the exponent \(\mu\) shown in Figure 6. These results suggest that the singularities could be poles of order two, except in an intermediate asymptotic sense for initial times and as blow up is approached. The asymptotic analyses in the next sections will clarify these numerical estimates. The other method for singularity tracking is based on Fourier-Pade methods as considered for the NLH (and other PDEs) in [16]. The Fourier series (5) is converted to rational trigonometric form, which can be continued into the complex plane to some strip around the real axis. The advantage of this method over the method (6)-(7) is that it often gives information further into the complex plane, beyond the singularities closest to the real axis. In a further improvement this method was recently extended to quadratic Fourier-Pade, which incorporates a square-root singularity into the approximant in an attempt to capture branch point singularities more accurately [26]. 
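To make the first technique concrete: the fit behind (6)-(7) is a three-parameter linear least-squares problem for \(\log|C|\), \(\mu-1\) and \(y_{*}\). The following sketch (our own) manufactures coefficients from the model (7) itself, with a known double pole, purely to show the fitting step; in practice the \(c_{k}\) come from the spectral solution and only a window of sufficiently large \(k\) is used.

```python
import numpy as np

# Least-squares fit of (7): log|c_k| ~ log|C| + (mu - 1) log k - k y_*.
# Synthetic coefficients with a double pole (mu = 2) at height y_* = 0.3.
mu_true, y_true = 2.0, 0.3
kk = np.arange(20, 200, dtype=float)
ck = kk ** (mu_true - 1) * np.exp(-kk * y_true)

A = np.column_stack([np.ones_like(kk), np.log(kk), -kk])
(logC, mu_minus_1, y_star), *_ = np.linalg.lstsq(A, np.log(ck), rcond=None)
print("mu  =", mu_minus_1 + 1.0)           # recovers 2.0
print("y_* =", y_star)                     # recovers 0.3
```

The same fit, applied to the computed coefficients, produces the exponent estimates in Figure 6 and the singularity positions in Figure 2.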
In Figure 7 we show phase plots for the case \(\alpha=1\), computed by this quadratic Fourier-Pade method. (It provides a different view of the singularity dynamics shown in the third frame of Figure 2.) This improves on the figure given in [16, Fig. 5.2], which was computed with the standard Fourier-Pade approach. By examining the phase plots, it again appears that the dominant singularity is a pole of order two, as was evident in Figure 6 as well. As the singularity approaches the real axis, however, the characteristics of a branch point become evident. The analysis of the next section will clarify this. Figure 6: Estimated values of \(\mu\) as defined in (7). Only two of the cases of Figures 1 and 2 are displayed here, the others being qualitatively similar. The numerical results single out the value \(\mu=2\), which suggests poles of order 2 as the nearest singularities to the real axis. The true nature of the singularities is more complicated, involving a logarithmic branch point, as the analysis of section 3 will show. ## 3 Local analysis of the singularities In this section we examine more closely the nature of the singularities described in sections 1 and 2. Locating a singularity in the complex plane at \(x=i\sigma(t)\) and setting \(x=i\sigma(t)+\zeta\) gives \[\frac{\partial u}{\partial t}-i\dot{\sigma}\frac{\partial u}{\partial\zeta}= \frac{\partial^{2}u}{\partial\zeta^{2}}+u^{2}, \tag{8}\] where the dot represents differentiation with respect to \(t\). The right-hand side dominates the local behaviour, so that as \(\zeta\to 0\), \[u\sim-\frac{6}{\zeta^{2}}.\] Figure 7: Approximate complex phase plots of the \(\alpha=1\) solution when continued from the real line into the upper half of the complex \(z\)-plane. The colours indicate the phase \(\phi(z,t)\in[-\pi,\pi)\) of the solution (where \(u(z,t)=|u(z,t)|\exp i\phi(z,t)\)) according to the colour wheel at the top (taken from [27]). In the first frame the singularity travels down the imaginary axis, until it momentarily stops and turns around at the location in the second frame. It then moves away from the real axis until it reaches the position of the third frame, at which time it turns around once more and rushes onto the real axis and blow up occurs. The fact that the colours go twice around the colour wheel when encircling the singularity suggests that the dominant contribution is a pole of order two. In the last frame the discontinuity in the phase on the imaginary axis is suggestive of a branch point, however, as will be discussed in the next section. (The interpretation of complex phase plots is discussed in [28] and plotting software can be found at [29].) Setting \[u=-\frac{6}{\zeta^{2}}+V(\zeta,t),\] and linearising, at leading order the 'complementary function' \(V\) satisfies the Euler equation \[\frac{\partial^{2}V}{\partial\zeta^{2}}-\frac{12}{\zeta^{2}}V=0,\] so that \[V\sim A(t)\zeta^{-3}+B(t)\zeta^{4}.\] Self-consistency requires \(A=0\) (this contribution being associated with the \(\zeta\)-translation-invariance of (8)), but \(\sigma(t)\) and \(B(t)\) are arbitrary in terms of the local analysis, representing the two degrees of freedom expected of a generic singularity in this second-order problem. 
Reinstating the intervening terms in the local expansion about the singularity in (8), one finds that \[u\sim-\frac{6}{\zeta^{2}}+\frac{6\,i\dot{\sigma}}{5\,\zeta}-\frac{1}{50}\dot{ \sigma}^{2}+a(t)\zeta+b(t)\zeta^{2}+c(t)\zeta^{3}+d(t)\zeta^{4}\log\zeta+B(t) \zeta^{4},\qquad\zeta\to 0, \tag{9}\] where \[a(t) =-\frac{i}{250}\dot{\sigma}^{3}-\frac{i}{10}\ddot{\sigma}, b(t) =\frac{\dot{\sigma}\left(7\,\dot{\sigma}^{3}+190\,\ddot{\sigma} \right)}{5000},\] \[c(t) =\frac{79i}{75000}\dot{\sigma}^{5}+\frac{229i}{7500}\dot{\sigma} ^{2}\ddot{\sigma}+\frac{i}{60}\ddot{\sigma}, d(t) =\frac{18}{21875}\dot{\sigma}^{6}+\frac{108}{4375}\dot{\sigma}^{3} \ddot{\sigma}+\frac{16}{875}\dot{\sigma}\ddot{\sigma}+\frac{6}{875}\ddot{ \sigma}^{2}.\] Note that the presence of the \(d(t)\zeta^{4}\log\zeta\) term implies that the singularity is actually a branch point. However, the double-pole nature of the dominant term in (9) significantly precedes this in the local expansion. This explains why both singularity tracking methods in section 2 singled out this type of singularity. The expansion might also explain why the method (6)-(7) suggests simple pole singularities near \(t=0\) and near blow up; see Figure 6. This is because the second term on the right in (9) has a residue proportional to \(\dot{\sigma}\), and this value is large initially and again near blow up, as can be confirmed in Figure 2. A more detailed asymptotic analysis is required fully to clarify the matter; see below. The method based on the quadratic Fourier-Pade method is able, at least ultimately, to classify the singularity as a branch point; see the last frame of Figure 7. The approximant is a two-valued function, so it is unable to capture the logarithmic singularity perfectly, but at least the presence of a branch point is indicated strongly. ## 4 Small-time limit: singularity dynamics and singularity structure In the first of our appendices, Appendix A, it is shown that for the initial condition (2) and for small values of \(t\), the NLH solution on the real axis is \[u(x,t)=\alpha\cos x+t\Big{(}-\alpha\cos x+\frac{1}{2}\alpha^{2}(1+\cos 2x) \Big{)}+\mathcal{O}(t^{2}),\qquad t\to 0.\] Furthermore, it is shown that the singularity closest to the real axis on the positive imaginary axis is located at \(z=i\sigma(t)\sim i\log(2/(\alpha t))\), \(t\to 0\). In the vicinity of \(i\sigma(t)\), the solution to the NLH in the complex plane is given by \[u(z,t)\sim\frac{\phi(\zeta)}{t^{2}},\qquad z=i\Big{(}\log(2/(\alpha t))+2t \log(1/t)-(\zeta+1)t\Big{)},\qquad t\to 0,\qquad\zeta t=o(1), \tag{10}\] where \(\phi\) is the solution to the nonlinear ODE \[\frac{d^{2}\phi}{d\zeta^{2}}-\frac{d\phi}{d\zeta}=\phi^{2}, \tag{11}\] subject to the asymptotic condition \[\phi=\frac{1}{\zeta}+\frac{2\log(\zeta)}{\zeta^{2}}+o\left(\zeta^{-2}\right), \qquad\zeta\to\infty. \tag{12}\] Let \(\zeta_{*}\) be the first singularity that is encountered on the real axis as the ODE problem (11)-(12) is integrated from \(\infty\), then it follows from (10) that a small-time approximation to the singularity location is given by \(z=i\sigma(t)\), where \[\sigma(t)\sim\log(2/(\alpha t))+2t\log(1/t)-(\zeta_{*}+1)t. \tag{13}\] In subsequent sections, we shall find that the ODE (11) subject to (12) or variations thereof arises repeatedly in our analysis of NLH solutions in the complex plane. Consequently, a detailed analysis of these ODE solutions is given in B. Figure 8 shows the solution to (11)-(12), revealing that \(\zeta_{*}\approx 0.05695\). 
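The value of \(\zeta_{*}\) can be checked roughly (this is our own sketch, not the high-accuracy treatment described next) by starting from the two-term truncation of (12) at a moderately large \(\zeta_{0}\) and integrating (11) towards \(\zeta=0\) until the solution blows up. The choices \(\zeta_{0}=50\) and the stopping threshold below are arbitrary, and the truncated far-field data limit the result to a few correct digits.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate phi'' = phi' + phi^2 from the far-field behaviour (12) down
# towards zeta = 0 and record where phi blows up (expected near 0.05695).
zeta0, thresh = 50.0, 1e8
phi0 = 1.0 / zeta0 + 2.0 * np.log(zeta0) / zeta0 ** 2
dphi0 = -1.0 / zeta0 ** 2 + (2.0 - 4.0 * np.log(zeta0)) / zeta0 ** 3

def blow_up(z, y):
    return y[0] - thresh
blow_up.terminal = True

sol = solve_ivp(lambda z, y: [y[1], y[1] + y[0] ** 2], (zeta0, 0.0),
                [phi0, dphi0], events=blow_up, method="DOP853",
                rtol=1e-12, atol=1e-14)
z_stop = sol.t_events[0][0]
# correct for stopping at phi = thresh, using the local double-pole
# behaviour phi ~ 6/(zeta - zeta_*)^2 near the singularity
print("zeta_* approx", z_stop - np.sqrt(6.0 / thresh))
```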
In B, higher-order terms in the asymptotic expansion (12) are derived in order to compute the initial condition to an accuracy on the order of machine precision. The solution to the ODE (11) is then computed in the complex \(\zeta\)-plane using the 'pole field solver,' a method based on an adaptive Pade one-step method [30, 31]. Figure 9 shows the estimate (13) compared to the numerical estimate of the singularity location described by (6)-(7). Both the graphical and the numerical comparisons confirm that the asymptotic and numerical estimates match well for small \(t\). As to be expected, the accuracy of the asymptotic estimate deteriorates as \(t\) increases. The asymptotic analysis in B.1 provides another perspective on why the singularity exponent in Figure 6 increases from roughly 1 to 2 as \(t\) increases from small to intermediate values. It is shown in (A.5) that in the small-time limit, the leading-order behaviour of the singularity (after a single rescaling) is that of a simple pole. It is this leading-order simple pole nature of the singularity that is detected by the numerical method. It is only after a second rescaling that one arrives at an equation, viz. (A.8), whose singularity type matches that of the NLH. The type of singularities admitted by the ODE (11) are (unsurprisingly) of the same type as those of the NLH since if one assumes a singularity of the ODE is located at \(\zeta_{*}\), Figure 8: The solution to (11) satisfying (12). The modulus is shown as the height, and the phase is represented according to the color wheel at the top of Figure 7. The plot reveals a branch cut along the negative real axis originating from a singularity at \(\zeta_{*}=0.05695\). The initial conditions used for this solution are given in (B.13) and were computed with more than 800 terms of the asymptotic expansion (12), which are shown in Figure B2. takes the form \[\phi(\zeta+\zeta_{*})\sim\frac{6}{\zeta^{2}}+\frac{6}{5\zeta}-\frac{1}{50}+\frac{ \zeta}{250}-\frac{7\zeta^{2}}{5000}+\frac{79\zeta^{3}}{75000}+\frac{18}{21875} \zeta^{4}\log\zeta+b\zeta^{4},\qquad\zeta\to 0, \tag{14}\] where \(b\) is an arbitrary constant. This shows that the ODE (11) does not possess the Painleve property since it has branch point singularities whose locations are dependent on the initial conditions (i.e., movable branch points). This also confirms that the essential singularity that is present in the initial condition at \(\infty\), and which (9) indicates to be incompatible with the NLH, is transformed for any \(t>0\) into singularities of the compatible form (9). Figure 8 suggests that the branch point singularity at \(\zeta_{*}\) is the only singularity of \(\phi(\zeta)\) on the principal Riemann sheet of the solution to (11)-(12). It is also of interest to examine if, and where, other singularities might occur. Integrating clockwise around \(\zeta_{*}\) onto the next Riemann sheet, we obtain the solution shown in Figure 10, which reveals the presence of a multitude of singularities. Such singularities could, at least in principle, subsequently move onto the principal Riemann sheet and influence the real-time behaviour; however, such effects seem not to be of importance in the current context. Appendix B.4 shows that, to leading order, the solution for \(\zeta\to\infty\) on the second Riemann sheet is expressible in terms of the (equianharmonic) Weierstrass elliptic function in the variable \(\xi=e^{\zeta/5}\) (see (B.8) and (B.11)). 
Hence, in the bottom frame of Figure 8, we find that in the \(\xi\)-plane the far-field singularities on the second Riemann sheet lie approximately on the same lattice as the singularities of the Weierstrass elliptic function. The asymptotic result (10) and Figure 10 suggest that the essential singularity at \(\infty\) that is present in the initial condition is instantaneously transformed into infinitely many singularities of the form (9) for \(t>0\) that lie on a non-compact (infinitely sheeted) Riemann surface. ## 5 Large-amplitude initial conditions In Appendix C it is shown that for the initial condition (2) with large amplitude, a leading-order approximation to the solution on the real line is, for \(t=\mathcal{O}(1/\alpha)\) \[u\sim\frac{\alpha\cos x}{1-\alpha t\cos x},\qquad\alpha\to\infty. \tag{15}\] Hence, to leading order, the singularity locations are \(x=\pm iy=\pm i\sigma(t)\), where \[\sigma(t)\sim\cosh^{-1}(1/(\alpha t))=\log(1/(\alpha t))+\log(1+(1-\alpha^{2} t^{2})^{1/2}),\qquad\alpha\to\infty. \tag{16}\] Figure 9: The singularity locations \(\pm iy\) as functions of time as estimated by the procedure of section 2, which is based on eq. (7) (solid line) and the estimate of eq. (13) (dashed line). The formula on the right makes it clear that as \(t\to 0\), this estimate is consistent with the result of the small-time analysis (13). The singularities move along the imaginary axis and collide with the real axis at \(x=0\) for \(t=t_{c}\sim 1/\alpha\). The motion of the singularities is monotonically towards the real axis, albeit not at constant speed. Graphs of the singularity locations as functions of time are shown in Figure 11. The singularities of (15) are simple poles and therefore (15) ceases to be valid close to the singularities. It is shown in C that in a neighbourhood of the closest singularities, \[u(z,t)\sim\frac{\phi(\zeta)}{t(1-\alpha^{2}t^{2})},\hskip 28.452756pt\alpha \rightarrow\infty, \tag{17}\] where \(\phi\) is the ODE solution defined by (11)-(12) and the formula \[z=i\left[\cosh^{-1}(1/(\alpha t))+t\sqrt{1-\alpha^{2}t^{2}}\left(2\log(\alpha) -\frac{1-2\alpha^{2}t^{2}}{1-\alpha^{2}t^{2}}-2\log(\alpha t)-2\log(1-\alpha^{ 2}t^{2})-\zeta\right)\right], \tag{18}\] defines \(\zeta\). Therefore \[\sigma(t)\sim\cosh^{-1}(1/(\alpha t))+t\sqrt{1-\alpha^{2}t^{2}}\left(2\log( \alpha)-\frac{1-2\alpha^{2}t^{2}}{1-\alpha^{2}t^{2}}-2\log(\alpha t)-2\log(1- \alpha^{2}t^{2})-\zeta_{*}\right), \tag{19}\] where \(\zeta_{*}\) is the location of the first singularity of \(\phi\) on the real axis, which, as stated in section 4, is \(\zeta_{*}\approx 0.05695\). Figure 10: Top and bottom-left: Modulus plots of the solution in Figure 8 integrated clockwise around the branch point at \(\zeta_{*}=0.05695\), hence the solution on the lower half-planes in these figures is on the principal Riemann sheet (the sheet shown in Figure 8) and the upper half-plane lies on the next Riemann sheet. Bottom: Comparison of the pole locations (B.12) of the Weierstrass function (with \(\alpha=r\exp(i\theta)\), \(r\approx 0.2087\), \(\theta\approx 0.2524\) and \(\zeta_{0}\approx-0.5113+0.03149i\)) and the singularity locations on the upper half-plane in the top frame of the figure after mapping to the \(\xi\)-plane (\(\xi=e^{\zeta/5}\)). The bottom-left figure illustrates how the singularities first arise in the neighbourhood of the anti-Stokes line identified in B.4. ### Approach to blow up As the singularities approach the real axis, i.e. 
in the double limit \(x\to 0\), \(t\to 1/\alpha\), it follows from (15) that \[u\sim\frac{\alpha}{1-\alpha t+x^{2}/2},\] holds. This might suggest that as blow up is approached, the solution takes the self-similar form, \[u=\frac{1}{t_{c}-t}f\left(\frac{x}{\sqrt{t_{c}-t}}\right), \tag{20}\] this representing a classical similarity reduction to the NLH. As is well known, there were early conjectures in related problems that blow-up solutions indeed take such a self-similar form (see e.g., [32, 33]), although in [8] the appropriate logarithmic corrections had already been established for a specific form of nonlinearity; see also [9]. Indeed, it is known that no suitable solution to the resulting ODE in fact exists, see [34, 35]. In E we apply appropriate modifications to existing approaches (pioneered in [8] for a closely related PDE) to analyse the approach to blow up. The analysis is valid for all the types of initial data mentioned in this paper (namely, (2)-(4)). ## 6 Small-amplitude initial conditions In D, the NLH solution is analysed for the initial data (2) with \(0<\alpha=\epsilon\ll 1\) and it is found that five different time scales are relevant for the asymptotic analysis. Here we summarise the results on the different time scales. The bottom right frames of Figures 1 and 2 show, respectively, the qualitative behaviour of the solution profile and the singularity dynamics. Figure 12 illustrates that the \(\epsilon=0.5\) solution in Figure 1 appears to blow up uniformly but in fact it blows up at the point \(x=0\). Figure 11: The singularity locations \(\pm i\sigma\) as a function of time, as estimated by the numerical procedure of section 2 (least squares) and the asymptotic estimates (16) and (19). As can be expected, the higher-order estimate (19) loses accuracy close to the blow-up time, as it should do; however, in the right frame it is clear that this estimate is more accurate for intermediate times away from blow up than the leading-order approximation (16). ### \(t=\mathcal{O}(1)\) On the first timescale, the solution on the real axis is given by \[u(x,t) \sim\epsilon\rme^{-t}\cos x+\frac{\epsilon^{2}}{4}\left[1-\,\rme^{-2\,t}+ \,\left(\rme^{-2\,t}-\rme^{-4\,t}\right)\cos 2\,x\right] \tag{21}\] \[+\frac{\epsilon^{3}}{48}\left[\left(24t+6\,\rme^{-2\,t}+3\,\rme^ {-4\,t}-9\right)\rme^{-t}\cos x+\left(2-3\,\rme^{-2\,t}+\rme^{-6\,t}\right) \rme^{-3\,t}\cos 3\,x\right].\] This approximation, as well as other asymptotic estimates in the upcoming sections 6.2-6.4, will be compared to numerical results in section 6.6. The closest singularities are at \(z=\pm i\sigma(t)\), where \[\sigma(t)\sim 2t-\log(\sinh t)-\log(\epsilon/2)+\zeta_{*}(t),\qquad\epsilon \to 0, \tag{22}\] and \(\zeta_{*}(t)\) is the location of a singularity of a nonlinear backward diffusion PDE (namely, (D.4)-(D.5)) whose solution is not known explicitly. However, the limiting behaviour of \(\zeta_{*}(t)\) is found to be (i) \(\zeta_{*}(t)\to 0\) as \(t\to 0\) (hence (22) is consistent with (13) as \(t\to 0\)) and (ii) as \(t\) becomes large according to \(t=\mathcal{O}(1/\epsilon)\), \(\zeta_{*}\to\widetilde{\zeta}_{*}-\log 2.\)2 Here \(\widetilde{\zeta}_{*}\) is the location of the first singularity of the following nonlinear ODE problem: Footnote 2: The \(\log 2\) shift arises because (D.4) implies \(U\sim 2e^{\zeta}\) as \(\zeta\to-\infty\), \(t\to\infty\). \[\frac{d^{2}\phi}{d\zeta^{2}}-\frac{d\phi}{d\zeta}=\phi^{2},\qquad\phi\sim e^ {\zeta},\qquad\zeta\to-\infty. 
\tag{23}\] The limiting behaviour (ii) follows because for \(t=\mathcal{O}(1/\epsilon)\), \(\epsilon\to 0\) the NLH solution in the neighbourhood of the singularity, i.e. for \(\zeta=\mathcal{O}(1)\), is \[u(z,t)\sim\phi(\zeta),\qquad z=i\left(2t-\log(\sinh t)+\log(1/\epsilon)+\zeta \right), \tag{24}\] where \(\phi(\zeta)\) is the solution to (23). The left frame of Figure 13 compares the numerically determined singularity location (as described in section 2) and the approximation (22), but with \(\zeta_{*}(t)\) neglected. The right frame of the figure shows the difference between these two quantities, which gives an estimate for \(\zeta_{*}(t)\). As predicted by the asymptotic analysis, \(\zeta_{*}(t)\) increases from zero and for large \(t\) and \(\epsilon\to 0\), \(\zeta_{*}\to\widetilde{\zeta}_{*}-\log 2\), where \(\widetilde{\zeta}_{*}\approx 1.53767\), see the top frame of Figure 14. The latter figure shows that the far-field condition in (23) leads to multiple singularities on the principal Riemann sheet of the ODE solution, in contrast to the solution in Figure 8 corresponding to the condition (12). This suggests that the NLH solution has multiple singularities that in the small-time limit live off its principal Riemann Figure 12: NLH solution for the initial data \(u(x,0)=\epsilon\cos x\), with \(\epsilon=\alpha=0.5\). This is the same solution displayed in the bottom-right frame of Figure 1 but shown here on a different vertical scale. sheet but, for small amplitude and large time, move onto its principal Riemann sheet, a possibility we alluded to above. Appendix B.4 shows that the far-field solution of (23) can also be expressed to leading order in terms of the (equianharmonic) Weierstrass function in the variable \(\xi=e^{\zeta/5}\). Hence the far-field singularities of the solution in the top frame of Figure 14 lie approximately on the same lattice as the Weierstrass function's singularities in \(\xi\)-plane, as shown in the bottom frame of the figure. Figure 14: Top: Modulus of the solution to (23) with initial conditions given by (14), as computed by the procedure described in Appendix B.2. Bottom: Comparison of the pole locations (13) of the Weierstrass function (with \(\alpha=r\exp(i\theta)\), \(r\approx 0.28910\), \(\theta\approx 0.1066\) and \(\zeta_{0}\approx-0.603-0.2574i\)) and the singularity locations on the upper half-plane in the top frame of the figure after mapping to the \(\xi\)-plane (\(\xi=e^{\zeta/5}\)). Figure 13: The left frame shows the numerical (solid lines) and asymptotic (dotted lines) approximations (with \(\zeta_{\star}(t)=0\) in (22)) for the singularity locations \(\pm i\sigma(t)\). The difference between these approximations, shown in the right frame, approaches the predicted limiting value, \(\bar{\zeta}_{\star}-\log 2\approx 0.84452\), (indicated by the dotted line) for large \(t\) as \(\epsilon\to 0\). ### \(t=\mathcal{O}(\epsilon^{-2})\) On the second timescale, the solution on the real axis is \[u\sim\frac{1}{t_{c}-t}+\frac{16\,\rme^{-t}}{\epsilon^{3}(t_{c}-t)^{2}}\cos x+ \left(\frac{128\,e^{-4t}}{\epsilon^{6}(t_{c}-t)^{2}}\int_{-\infty}^{t}\frac{e^ {2s}}{(t_{c}-s)^{2}}ds\right)\cos 2x \tag{25}\] where the leading order estimate of the blow-up time is \(t_{c}\sim 4/\epsilon^{2}\) (see D). For the solution in Figure 12 with \(\epsilon=1/2\), this gives the estimate \(t_{c}\approx 16\), while the numerically computed blow-up time is \(t_{c}\approx 15.53\). 
Solutions with initial data (2) have a maximum at \(x=0\) (see Figure 1) and, as shown in B of [24], the peak-to-trough height of the maximum satisfies \[u(0,t)-u(\pm\pi,t)\approx 4c_{1}(t):=h(t)\] for even solutions, provided \(t\) is not close to the blow-up time. Here \(c_{1}(t)\) is a Fourier coefficient of the solution; see (5). From (25) it follows that \[h(t)=\frac{32\,e^{-t}}{\epsilon^{3}(t_{c}-t)^{2}},\quad\Rightarrow\quad\min h (t)=h(t_{c}-2)=\frac{8\,e^{2}}{\epsilon^{3}}e^{-t_{c}}\sim\frac{8\,e^{2}}{ \epsilon^{3}}e^{-4/\epsilon^{2}}, \tag{26}\] which implies the minimum peak-to-trough height of the solution decreases exponentially with \(\epsilon\). With \(\epsilon=0.3\) it follows that \(\min h(t)\approx 10^{-16}\), which means that the solution is completely flat in double-precision arithmetic and a uniform blow up is computed. This is incorrect since, as we shall see, point blow up eventually occurs for any \(\epsilon>0\). Consequently, higher precision or rescaling methods would be required to compute the solution for \(\epsilon<0.3\). Figure 15 compares the asymptotic estimate (26) to the numerically computed peak-to-trough height and confirms the validity of the estimate away from the blow-up time. In the neighbourhood of the singularity, i.e. for \(\zeta=\mathcal{O}(1)\), the NLH solution is given by \[u(z,t)\sim\phi(\zeta),\hskip 28.452756ptz=i\Big{(}t+2\log\left(t_{c}-t\right)+3 \log\left(\epsilon/2\right)+\zeta\Big{)},\] where \(\phi\) is again defined as the solution to (23). Hence an estimate of the singularity location is \[\sigma=t+2\log\left(t_{c}-t\right)+3\log\left(\epsilon/2\right)+\zeta_{*}, \tag{27}\] where \(\zeta_{*}\) is the first singularity of (23) on the real axis, which was found to be \(\zeta_{*}\approx 1.53767\) Figure 15: The peak-to-trough height of the solution in Figure 12 (solid curve) compared to the approximation (26) (dashed curve). As expected, the approximation (26) breaks down close to the blow-up time. in section 6.1. This estimate for \(\sigma\), as well as the estimates in the upcoming sections, will be compared to numerical results in section 6.6. ### \(t_{c}-t=\mathcal{O}(1)\) On this time scale, the solution on the real axis is given again by (25). The singularity location evolves on the imaginary axis according to (cf. (27)) \[\sigma(t)\sim t+2\log\left(t_{c}-t\right)+3\log\left(\epsilon/2\right)+\zeta_ {*}(t), \tag{28}\] where \(\zeta_{*}(t)\) is the position of the first singularity of another nonlinear backward diffusion PDE initial-value problem ((D.20)-(D.21)) whose solution is not known explicitly. However, the limiting behaviour of \(\zeta_{*}(t)\) is shown in D.3 to be as follows: for \(1\ll t_{c}-t\ll 1/\epsilon^{2}\) with \(\epsilon\to 0\), \(\zeta_{*}(t)\to\zeta_{*}\), where \(\zeta_{*}\) is the first singularity of (23) on the real axis, i.e. \(\zeta_{*}\approx 1.53767\) and therefore (28) tends to (27). For \(t\to t_{c}\), to leading order \(\zeta_{*}(t)\sim-\log(t_{c}-t)\) and thus \[\sigma(t)\sim t+\log\left(t_{c}-t\right)+3\log\left(\epsilon/2\right),\qquad t \to t_{c}. \tag{29}\] ### Fourth time scale The fourth time scale, which is valid exponentially close to the blow-up time, is defined via \[t_{c}-t=\frac{s}{\epsilon^{3}e^{4/\epsilon^{2}}}\] with \(s=\mathcal{O}(1)\), and to leading order \[u(x,t)\sim\frac{\epsilon^{3}e^{4/\epsilon^{2}}}{s-16\cos x}. 
\tag{30}\] Therefore blow up occurs at \(s\sim 16\) and the modification to the algebraic expansion for \(t_{c}(\epsilon)\) resulting from the previous timescales is the exponentially small, and hence in practice irrelevant quantity \[-16e^{-4/\epsilon^{2}}/\epsilon^{3}.\] An approximation for the locations of the closest singularities (which will be used in section 6.6) is \(z=\pm i\sigma(t)\) with \[\sigma\sim\cosh^{-1}(s/16)=\log(s+(s^{2}-256)^{1/2})-4\log 2. \tag{31}\] ### Fifth time scale As blow up is approached, \(s\to 16\), \(x\to 0\) apply in (30) and therefore \[u\sim\frac{\epsilon^{3}e^{4/\epsilon^{2}}}{s-16+8x^{2}}.\] This suggests the self-similar form \[u=\frac{\epsilon^{3}e^{4/\epsilon^{2}}}{s-16}f\left(\frac{x}{(s-16)^{1/2}} \right),\] cf. (20). Again, the appropriate ODE solution fails to exist, which necessitates the introduction of a final (doubly exponentially short) time variable to capture the logarithmic corrections. The approach to blow up is analysed in more detail in E. ### Comparisons to numerical results Having derived a large number of estimates for the small amplitude case, we now compare some of them to numerical approximations. Figure 16 shows the accuracy of the asymptotic approximations to the solution on the real axis given in (21), (25) and (30) for the case \(\alpha=\epsilon=0.5\) We note that the need for a comparatively large value of \(\epsilon\) reflects the asymptotic structure whereby the small quantity \(\exp(-4/\epsilon^{2})\) is prominent: the extent to which \(u\) becomes near uniform is a striking feature of the analysis. In (25), we choose \(t_{c}\) to be the numerical blow-up time and in (30) we choose \(t_{c}\) such that the blow-up time of (30), namely \(s=16\) (i.e., at \(t=t_{c}-16\epsilon^{-3}e^{-4/\epsilon^{2}}\)), coincides with the numerical blow-up time. Figure 17 compares the asymptotic estimates of the closest singularities at \(z=\pm i\sigma(t)\) (see (22), (27), (28), (29) and (31)) to the numerically computed singularity position. For all these asymptotic approximations, we let \(t_{c}\) be the numerically computed blow-up time. For the estimate (27), we use the value of \(\zeta_{*}=1.53767\), while for (28), since the function \(\zeta_{*}(t)\) is not known explicitly, we replace \(t_{c}\) and \(\zeta_{*}(t)\) with constants4\(\hat{t}_{c}\) and \(\hat{\zeta}_{*}\) such that the local maximum of the resulting estimate matches that of the numerical singularity position. Footnote 4: In Figure 17, these constants are \(\hat{t}_{c}=15.65\) and \(\hat{\zeta}_{*}=1.78\). Figure 16: The error of the approximations (21) (first timescale), (25) (second and third timescales) and (30) (fourth timescale) compared to the numerical solution of the NLH with \(u(x,0)=\epsilon\cos x\), \(\epsilon=0.5\). The error is calculated at every time step as the minimum of the absolute and relative errors on \(x\in[-\pi,\pi]\). In the right frame, we ‘zoom in’ on the error close to the blow-up time by showing the error for the last 1000 steps of the numerical time integrator, which corresponds to the time interval \(t\in[t_{1},t_{c}]\), with \(t_{1}=14.87\ldots\) and \(t_{c}=15.53\ldots\). Figure 17: The position of the closest singularity on the positive real axis of the small-amplitude NLH solution with \(u(x,0)=\epsilon\cos x\) with \(\epsilon=0.5\) compared to the asymptotic approximations. ## 7 Blow-up limit Figure 18 shows a small-amplitude NLH solution at times approaching the blow-up time. 
Since the solution is even, it is shown only for \(x>0\) with \(x\in[10^{-8},\pi]\). The figure illustrates the well-known fact that, as the blow-up time is approached, the solution is flat for \(\eta=\mathcal{O}(1)\), where \(\eta=x/\sqrt{t_{c}-t}\); see A.1. That is, for a fixed \(t\) with \(0<t_{c}-t\ll 1\), the solution is flat with \(u\sim(t_{c}-t)^{-1}\) for \(x\) sufficiently small. More precisely, as shown in E.2, for \(x=\mathcal{O}((t_{c}-t)\log(t_{c}-t)^{1/2})\), \[u(x,t)\sim\left[t_{c}-t+\frac{x^{2}}{C-8\log(t_{c}-t)}\right]^{-1},\qquad t \to t_{c}, \tag{32}\] where \(C\) is a constant that depends on the initial data. This approximation is shown as dotted curves in Figure 18 and matches the numerical solution well for sufficiently small \(x\). As is clear from Figure 18, the solution is asymptotically flat on an interval whose width shrinks to zero as \(t\to t_{c}^{-}\) and, as shown in E.3, the solution acquires the blow-up profile, \[u(x,t_{c})\sim\frac{8}{x^{2}}\left(2\log(1/x)+\log(\log(1/x))+4\log 2+C/8+8 \beta_{1}\right),\quad x\to 0^{+}, \tag{33}\] where the constant \(\beta_{1}\) is defined in E. This profile is shown as a dashed curve in the right frame of Figure 18, which matches the non-flat part of the numerical solution well for sufficiently small \(x\). We note that, before the blow-up time, the closest singularities are second-order poles to leading order with a logarithmic term at fourth order (see (9)); however, in the blow-up limit, the leading-order behaviour for \(x\to 0\), namely \(u\sim 16\log(1/|x|)/x^{2}\), acquires a logarithmic contribution. Unlike the asymptotic estimates (32) and (33), the asymptotics of the blow-up profile in [24] are valid on the entire interval \([-\pi,\pi]\) (they are \(2\pi\)-periodic) and the constants are expressed explicitly in terms of the initial data considered in that paper, namely (4). For comparison purposes, we restate the analogue of (32) from [24]: as \(t\to t_{c}\) \[u(x,t)\sim\left[t_{c}-t+2\epsilon\,e^{-\alpha}\sin^{2}(x/2)+2\epsilon^{2}\log \epsilon\,e^{-2\alpha}\sin^{2}x+\epsilon(t-t_{c})e^{-\alpha}\cos x\right.\] Figure 18: Left: The five solid curves are NLH solutions with initial data \(u(x,0)=\alpha\cos x\), \(\alpha=0.5\) corresponding to (from bottom to top) \(t_{c}-t=10^{-3},10^{-4},10^{-5},10^{-6},10^{-8}\). The dashed curves show the estimate (30) and the single dotted curve is the estimate (32). Note the logarithmic scale on the \(x\)-axis. Right: The solid curves show the NLH solution (with the same initial data as in the left frame) with \(t_{c}-t=10^{-8},10^{-10},10^{-12},10^{-14},10^{-15}\). The dotted curves show the estimate (30) and the single dashed curve is the blow-up profile (33). For the asymptotic estimates, we use the numerically determined value of \(t_{c}=1.530458826185942\) and the values \(C=92000\) and \(\beta_{1}=-3/32\), which were chosen to fit the numerical solution. \[+2\epsilon^{2}\sin^{2}x\biggl{(}e^{-2\alpha}\log\left(\frac{t_{c}-t}{ \epsilon}+2e^{-\alpha}\sin^{2}(x/2)\right)+C_{1}+C_{2}\biggr{)}\biggr{]}^{-1}, \tag{34}\] where \(\alpha\) and \(\epsilon\) are parameters in the initial data (4) and \[C_{1}=e^{-2\alpha}\log\alpha,\hskip 28.452756ptC_{2}=e^{-4\alpha}\int_{0}^{ \alpha}\frac{e^{2t}-e^{2\alpha}}{\alpha-t}dt.\] Setting \(t=t_{c}\) in (34), we obtain \[u(x,t_{c})\sim\Bigl{[}2\epsilon\,e^{-\alpha}\sin^{2}(x/2)+2\epsilon^{2}\sin^{2 }x\Bigl{(}e^{-2\alpha}\log\left(2\epsilon\,e^{-\alpha}\sin^{2}(x/2)\right)+C_{ 1}+C_{2}\Bigr{)}\Bigr{]}^{-1}. 
\tag{35}\] As shown in [24], at the blow-up time and for \(x\) exponentially small with respect to \(\epsilon\), the following analogue of (33) holds, \[u(x,t_{c})\sim\frac{8}{x^{2}}\left(2\log(1/x)+(4\epsilon\,e^{-\alpha})^{-1} \right),\hskip 28.452756ptx\to 0. \tag{36}\] The left frame of Figure 19 shows the accuracy of the asymptotic approximations (34)-(36) and the right frame compares the small-amplitude NLH solution with \(t_{c}-t=10^{-15}\) in Figure 18 with another NLH solution with \(t_{c}-t=10^{-15}\) but subject to the initial data (4). As discussed in the introduction, for an even initial condition with two local maxima, blow up of the type we have discussed here (generic blow-up) can occur simultaneously at two points (see the left frame of Figure 5) or at a single point (right frame of Figure 5). In the former case, two singularities collide on the real axis at both blow-up points and in the latter case (see Figure 20), a pair of singularities coalesces in each of the upper and lower half-planes before the resulting singularities collide on the real axis at blow-up. The borderline case, in which two singularities from the upper half-plane and two singularities from the lower half-plane collide on the real axis at the same point at the blow-up time (see Figure 21) is a type of non-generic blow up for which the leading-order behaviour at blow up is \(u\sim C/x^{4}\), \(x\to 0\), where \(C\) depends on the initial data. (See E.5 and Figure 22.) ## 8 Conclusion For the NLH we have given asymptotic descriptions (along with numerical confirmation) of: its branch-point-type singularities, the solution in the neighbourhood of the closest singularities in Figure 19: Left: The solid curves show NLH solutions subject to (4) with \(\alpha=1\), \(\epsilon=0.001\) and \(t_{c}-t=10^{-3},10^{-5},\ldots,10^{-13},10^{-15},10^{-16}\); the dotted curves are defined by (34) and the red and pink dashed curves are given by, respectively, (35) and (36). Right: the solution on the left with \(t_{c}-t=10^{-15}\) (red curve) and the small-amplitude solution in the right frame of Figure 18 with \(t_{c}-t=10^{-15}\) (blue curve). the complex plane, the complex-plane dynamics of the closest singularities and the solution in the small-time, large-amplitude, small-amplitude and blow-up limits. Figure 21: Modulus plot of an NLH solution exhibiting non-generic blow up due to four singularities colliding at the blow-up time. This solution has initial data \(u(x,0)=\alpha\exp(\mu\cos(x+\delta x)-\mu)+\alpha\exp(\mu\cos(x+\delta x)+\mu)\) with \(\alpha=6\), \(\mu=50\) and \(\delta=0.4363\pi\). Note that there is a factor of 10 difference between the colour maps (indicating the modulus of the solution) in this figure and in Figure 20. Figure 20: Modulus plot of the solution in the right frame of Figure 5 in the upper half-plane. In the latter figure, the maxima (shown as red dots) coalesce at \(t\approx 0.855\), whereas the singularities above collide \(t\approx 1.006\). 
Numerous generalisations suggest themselves; perhaps the most immediate is the quasilinear power-law case \[\frac{\partial u}{\partial t}=\frac{\partial}{\partial x}\Big{(}u^{m}\frac{ \partial u}{\partial x}\Big{)}+u^{p},\] with \(m\neq 0\), \(p>1\), for which the blow-up behaviour is well-known to differ from that for the semilinear case \(m=0\) with which we have been concerned here; indeed, the latter can be viewed as the borderline case between two distinct classes of behaviour, the logarithmic terms that are prevalent in the above being associated with this borderline status. We related the NLH solution to nonlinear ODE solutions (which have interesting properties in their own right) in certain limits and found the singularity locations of the ODE solutions numerically (via the pole field solver) and asymptotically. A possibility for future work would be to develop an analogue of the pole field solver for PDEs. Just as the pole field solver can accurately compute multivalued ODE solutions by using the ODE and adaptive Pade approximation to continue analytically the solution onto multiple Riemann sheets, so a pole field solver for PDEs might be able to compute, for a fixed time \(t\), the NLH solution on multiple Riemann sheets in the complex \(x\) plane. The numerical analytic continuation method for the NLH that we used in this paper (Pade and quadratic Pade approximation using the Fourier expansion of the solution) is accurate in a neighbourhood of the closest singularities of the NLH. However, it rapidly loses accuracy as one moves further away from the real axis. A pole field solver for PDEs would presumably maintain accuracy much further away from the real axis and also onto neighbouring Riemann sheets. ## 9 Acknowledgements The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme _Complex analysis: techniques, applications and computations_ when work on this paper was undertaken. This programme was supported by EPSRC grant number EP/R014604/1. The work of the first author was also supported by the Leverhulme Trust Research Project Grant RPG-2019-144. The second author gratefully acknowledges a Royal Society Leverhulme Trust Senior Fellowship. The third author acknowledges a grant from the H.B. Thom Foundation of Stellenbosch University that enabled participation in the above mentioned programme. Figure 22: The (non-generic) blow-up profile of the solution in the bottom-right frame of Figure 21 (solid line) and a curve that grows as \(\mathcal{O}(x^{-4})\), \(x\to 0\) (dotted line).
2310.19640
Mixed-coordinate Node-link Visualization for Co-authorship Hypergraph Networks
We present an algorithmic technique for visualizing the co-authorship networks and other networks modeled with hypergraphs (set systems). As more than two researchers can co-author a paper, a direct representation of the interaction of researchers through their joint works cannot be adequately modeled with direct links between the author-nodes. A hypergraph representation of a co-authorship network treats researchers/authors as nodes and papers as hyperedges (sets of authors). The visualization algorithm that we propose is based on one of the well-studied approaches representing both authors and papers as nodes of different classes. Our approach resembles some known ones like anchored maps but introduces some special techniques for optimizing the vertex positioning. The algorithm involves both continuous (force-directed) optimization and discrete optimization for determining the node coordinates. Moreover, one of the novelties of this work is classifying nodes and links using different colors. This usage has a meaningful purpose that helps the viewer to obtain valuable information from the visualization and increases the readability of the layout. The algorithm is tuned to enable the viewer to answer questions specific to co-authorship network studies.
Mohsen Nafar, Hamed Azami Zenouzagh
2023-10-30T15:28:31Z
http://arxiv.org/abs/2310.19640v1
# Mixed-coordinate Node-link Visualization for Co-authorship Hypergraph Networks ###### Abstract We present an algorithmic technique for visualizing the co-authorship networks and other networks modeled with hypergraphs (set systems). As more than two researchers can co-author a paper, a direct representation of the interaction of researchers through their joint works cannot be adequately modeled with direct links between the author-nodes. A hypergraph representation of a co-authorship network treats researchers/authors as nodes and papers as hyperedges (sets of authors). The visualization algorithm that we propose is based on one of the well-studied approaches representing both authors and papers as nodes of different classes. Our approach resembles some known ones like anchored maps but introduces some special techniques for optimizing the vertex positioning. The algorithm involves both continuous (force-directed) optimization and discrete optimization for determining the node coordinates. Moreover, one of the novelties of this work is classifying nodes and links using different colors. This usage has a meaningful purpose that helps the viewer to obtain valuable information from the visualization and increases the readability of the layout. The algorithm is tuned to enable the viewer to answer questions specific to co-authorship network studies. Hypergraph visualization Graph drawing Co-authorship network visualization ## 1 Introduction The theory of Complex networks is a well-established branch of Computer Science and Mathematics, typically utilizing graphs as models of the real-world systems formed by interacting entities. Those interactions are not always limited to pairwise interactions; thus, graphs, that naturally encode just pairwise interactions, are not always the best way to model these systems. In the case of interactions involving three and more entities, _hypergraphs_ seem to be a more adequate mathematical abstraction. Indeed, lately, hypergraphs have found their way into the publication stream in network science [11, 4, 7, 24, 12]. As real-life systems vary significantly by the number of subjects in the groups and other parameters, hypergraphs, that correspond to such systems, are also highly diverse. Thus, it cannot be expected that a single visual presentation or a visualization algorithm could be equally suitable for all the use-cases. Therefore, in the current project we restrict ourselves to specific networks, namely the _co-authorship networks_, in scientific disciplines where typical papers involve no more than a dozen authors. The dataset we use in our example is the second-largest connected component of the Mathematics directory on arXiv ([https://arxiv.org/](https://arxiv.org/)), modeled by a hypergraph with 33 vertices corresponding to authors, and 48 hyperedges corresponding to papers. The sizes of the hyperedges range from 1 to 4. We believe that our proposed algorithm generally produces competitive results for networks with larger number of smaller hyperedges, i.e. tens to hundreds of nodes with hyperedges of sizes one to ten, not necessarily co-authorship networks. In this paper, we visualize the hypergraph representing the mentioned dataset. For this specific case, we do not have any constraints for placing the hypergraph's vertices. However, there are several objectives that we seek to reach in our visualization. 
Among the things we would like to be able to visually track in the network are: authors with the largest number of publications, the most actively collaborating authors, the size of the largest team of authors of a single paper, the most frequent size of a team of authors of a single paper, the most frequent publication number for an author, the connection between different authors and papers, the number of authors who have a specific number of papers, and the appearance of the most active author in different papers. However, we believe that for other networks with similar parameters (modeled with a hypergraph and/or a bipartite graph), our technique is able to address similar questions regarding the relations between different types of entities in the network. In this work, each paper-nodes and its adjacent edges are colored according to the number of co-authors of a paper-node. Using different classes of colors for nodes with similar properties assists the viewer in distinguishing some of the graph theoretical properties of the nodes. ### Approaches to hypergraph visualization As hypergraphs are essentially synonymous to set systems, any set visualization approaches can be applied for visualizing hypergraphs. According to the survey [2], set visualization approaches fall into five main categories but in what follows, we combine the first two categories into one. #### 1.1.1 Overlays and Euler diagrams This approach treats hyperedges as closed curves that contains vertices. This class of techniques is similar to that of Euler and Venn diagrams which are vastly being used to visualize sets and their relations and are the most popular members of set visualization techniques [6]. An example of this approach is _Bubble Sets_ visualization in which Collins et al., 2009, [8], used isocontours to reveal the relation between hyperedges. Similar to those of overlays are Euler diagrams where a set is represented by a closed curve and set relations can be depicted by relation between curves [20]. In 2009, Simonetto et al., [25], developed an algorithm that layouts an Euler-like diagram that is suitable for hypergraphs with medium number of hyperedges of medium size. Dinkla et al., 2012, [9], proposed a new visualization approach of the class of Kelp diagrams which they call _Point Set Membership_. They developed this approach to visualize hypergraphs in which the position of their nodes are pre-defined. It seems that this approach can be a good candidate also for hypergraphs with large hyperedges. _SimpleHypergraphs.jl_ is a software library that was developed in Julia programming language. It was designed and built by Antelmi et al., 2020, [4], so as to be used for high-performance computing on hypergraphs. _MetroSets_ is the name of an online tool for set systems visualization that is based on a metro map metaphor. This approach was developed and introduced in 2020 by Jacobsen et al., [17], that is also compatible to visualize hypergraphs with large hyperedges. Wallinger et al., 2021, [29], in a study that targeted to compare LineSets, EulerView, and MetroSets as three of the suitable approaches to visualize medium-sized datasets, quoted that "Our results include statistically significant differences, suggesting that MetroSets performs and scales better". #### 1.1.2 Node-link based diagrams: Both hyperedges and nodes of the hypergraph are represented as two levels of a bipartite graph vertices where an edge between two vertices stands for the set membership in the hypergraph. 
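To make the bipartite (star-expansion) representation concrete, a minimal Python sketch is given below; the toy paper and author names are invented purely for illustration.

```python
# A minimal sketch of the star expansion behind node-link based diagrams:
# each paper (hyperedge) becomes a node of its own class, and an edge joins
# a paper-node to every author-node it contains. The tiny set system below
# is made up purely for illustration.
papers = {
    "p1": {"alice", "bob"},
    "p2": {"alice", "bob", "carol"},
    "p3": {"dave"},
}

author_nodes = sorted(set().union(*papers.values()))
paper_nodes = sorted(papers)
edges = [(p, a) for p, members in papers.items() for a in members]

print(author_nodes)  # one level of the bipartite graph (authors)
print(paper_nodes)   # the other level (papers)
print(edges)         # set-membership links
```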
An algorithm to visualize a bipartite representation of a hypergraph using _anchored maps_, is proposed by Misue, 2006, [21]. Anchored maps techniques are a class of visualization methods in which some of the vertices are restricted to be positioned on some pre-defined places while the rest of vertices have freedom to move; the former vertices are called _anchors_ and the latter ones _free vertices_. In this visualization approach, that is the technique for which we have presented an algorithm, one level of vertices are positioned on a circle while the vertices belonging to the other level have freedom to move. A so called _extra-node_ representation of the hypergraph was used by Ouvrard et al., 2017, [22], when they were trying to show the improvements it can make for visualising hypergraphs. In fact, it is a visualization of the _star expansion_ of the hypergraph using an algorithm called _ForceAtlas2_. This algorithm is a force-directed algorithm that was developed by Jacomy et al., 2014, [18]. Another approach is SetCoLa which is a domain-specific language that was designed, created, and contributed by Hoffswell et al., 2018, [15], to layout graphs using constraints. According to the authors "constraints enable flexible graph layout by combining the ease of automatic layout with customizations for a particular domain". _Py3plex_, a Python library that was implemented and introduced by Skrlj et al., 2019, [26]. The main purpose of this library was to visualize and analyze networks of multilayer nature. Huang et al., 2020, [16], developed an algorithm called _PLANET_ that creates a radial layout of the network. One objective of this algorithm is to minimize the edge crossings while trying to distribute vertices uniformly. The layout of our dataset was represented as a bipartite graph according to a free coordinate layout (force-directed algorithm) that is the most basic kind of layout is shown in 1(a). In this picture, the paper-nodes are colored by yellow and the author-nodes are the purple ones. Figure 1(b) shows our dataset in anchored map view where the paper-nodes are placed on the oval (pre-defined positions) and the author-nodes are inside the oval. The use of the anchored maps technique improves the picture as it trivializes the distinction of types of the nodes. Our approach is based on the _anchored maps_ technique, [21], which we call a _mixed-coordinate node-link diagram_. Notably, the similarities between and _free coordinate_ nodes. A free-coordinate type of a node is restricted within a two-dimensional region on the plane, whereas semi-fixed nodes are restricted to a one-dimensional curve or some reasonably large discrete set of positions. On the other hand, we have _fixed coordinate_ nodes, which are not subject to optimization. Moreover, we apply the bundling method in our work. Furthermore, the algorithms developed in the two techniques are totally different. #### 1.1.3 Matrix-based diagrams These types of diagrams are basically the visualization of the incidence matrix for a set system. Also some would refer them as matrix metaphors to show sets and their elements. Rows and columns of the matrix represent sets and elements, or vice versa, and entries of the matrix depict set membership relations. _UpSet_ is a highly interactive tool that can be used to analyze sets from quantitative point of view. It has various abilities and properties such as showing intersections between sets and group based and query based aggregations [19]. 
As another matrix-based approach, _Bertifier_, a web application, was developed and introduced by Perin et al., 2014, [23]. Its developers quoted in their paper that their web application uses "Jacques Bertin's matrix analysis method, whose goal was to "simplify without destroying" by encoding cell values visually and grouping similar rows and columns". Valdivia et al., 2017, [27], introduced _Hypenet_ which they have designed for dynamic hypergraphs visualization. This technique can also be used for pattern and inconsistency detection. _HYPER-MATRIX_ is a visual analytic tool for temporal hypergraph model exploration that was presented by Fischer et al., 2020, in [13]. The technique contains various features such as a geometric deep learning model and different interactions to be used as combinations. #### 1.1.4 Aggregation-based techniques In these techniques, some elements can contain multiple data-elements. Approaches of this category strive to demonstrate the cardinality of sets through representing the frequency of elements in set-typed data. _Radial Sets_ is an aggregation-based visualization presented by Alsallakh et al., 2013, [1]. The authors designed it for visually analysis of a dataset with many elements for their set membership relations. Wang et al., 2015, [30], developed a software package named _SuperExactTest_ that contains a theoretical framework and a visualization technique for multi-set interactions in programming language R. "A Comprehensive Visualization of Set Intersections" is the title of a work by Alsallakh and Ren, 2016, [3], in which they presented an approach named _PowerSet_. In the paper, it is pointed out that distribution of elements among set intersections can be evaluated. Moreover, it is suitable for exploration and comparison of elements' attributes. _PAOH_, stands for "Parallel Aggregated Ordered Hypergraph", is another visualization approach to layout dynamic hypergraphs. In this technique, vertices are modeled by parallel horizontal bars and vertical lines represent the hyperedges. Valdivia et al., 2019, [28], fill the entries corresponding to a vertex and a hyperedge that it is belong to by dots. ## 2 Proposed visualization technique: mixed-coordinate node-link diagram The approach we are using in the current study belongs to the class of node-link diagrams and deals with the _star expansion_ of the hypergraph. This suits our case well as for hypergraphs with relatively small hyperedges (in our sample dataset, the hyperedges cardinalities range from 1 to 4), the incidence relations and many graph-theoretic parameters (such as graph distances and centralities) are observable on a node-link diagram. We thus are facing the problem of finding a good layout for the star expansion of a hypergraph. Figure 1: a. Free-coordinate and b. anchored maps layout on our dataset In our use-case, it is pretty common that the same group of researchers publish several papers during an extended period of time. In this case, in our hypergraph representation, we might have many hyperedges with high multiplicity (identical as sets). Thus a natural idea is to _bundle_ such hyperedges and represent them with a single visual glyph. As an extra visual attribute to encode the hyperedge multiplicity, we use the size of the glyph. We mentioned that the network contains 33 authors and 48 papers, which means that before bundling the hyperedges, we would have a bipartite graph with 81 nodes in total. 
After hyperedge bundling, 48 paper-nodes are identified and bundled into 30 nodes that are displayed. Therefore, the total number of the nodes which was 81 is reduced to 63. The number of links to be visualized also decreases. **Input:** A bipartite graph **Output:** Mixed-coordinate layout ``` 1:Initial positioning 2:while (\(E_{T}\geq\) Threshold & \(i\leq\text{Max}_{iteration}\)) do 3: Compute repulsion and attraction forces using a modification of force-directed algorithm 4: Update positions of non-pendant author-nodes 5:end while 6:Compute crossings 7:while (\(i\leq\text{iteration}_{1}\)) do 8: Mixed-Discrete-Continuous 9:end while 10:positioning pendant author-nodes 11:return layout. ``` **Algorithm 1** Mixed-coordinate layout **Input:** A bipartite graph **Output:** Mixed-coordinate layout We propose restricting the coordinates of the nodes according to the three coordinate types: _free_, _fixed_, and _semi-fixed_ coordinates, which we call _mixed-coordinates_ (see Introduction). After hyperedge bundling, some author-nodes may become pendant in addition to those that had already been pendant which we call them _pendant author-nodes_. The rest of the author-nodes are called _non-pendant author-nodes_. We restrict the coordinates of paper-nodes to be semi-fixed and we place them on an oval, whereas the coordinates of non-pendant author-nodes are of the free-coordinate type. The coordinates of the pendant author-nodes are of the fixed-coordinate type; we will position these nodes near the paper-node they belong to and outside of the oval. Note that by referring to these nodes as fixed-coordinate nodes, we mean that their positions are fixed relatively to paper-nodes, i.e. only depend on the position of the paper-node they are connected to. The positions of the pendant author-nodes are computed at the end of the algorithm when the positions of all the other nodes are not subject to any change. Furthermore, this new approach enables us to use continuous and discrete optimization methods to compute the layout that simulates the system of forces to find the best configuration. Moreover, we classify the paper-nodes by the number of their co-authors (cardinality of the original hyperedge) and to make them distinguishable in the picture we encode the number of co-authors of a paper-node with node color. This visual attribute enables the viewer to read out the node degrees with more immediacy, therefore, improves the readability of the layout. To improve the layout further, we color links by the same color as the paper-node they are incident to. As for the semi-fixed nodes, the primary vehicle for improving the layout with respect to these nodes is changing the order (permutation) of these nodes on the curve they belong to. This is where the discrete optimization of the algorithm comes in. We have two objectives for the optimization here: minimizing the system's energy and minimizing the number of crossings. A continuous optimization algorithm is used to compute the coordinates of the free nodes. The objective of this part is to minimize the energy of the system. The designed algorithm, which employs the discussed approach, can be found in Algorithm 1. The result of the Algorithm 1 without lines 7-9 (without discrete part of the algorithm) is shown in 2(a). All the paper-nodes are now placed on the oval, all non-pendant author-nodes are inside of the oval, and every pendant author-node is placed outside of the oval and is connected to the paper-node, which it belongs to, with a short link. 
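As an illustration of the mixed-coordinate idea (a sketch only, not our implementation), the initial positioning step might look as follows in Python: paper-nodes are spaced uniformly on an oval as semi-fixed nodes, non-pendant author-nodes receive free starting coordinates inside the oval, and pendant author-nodes are deliberately omitted because they are placed only at the very end of Algorithm 1.

```python
import math
import random

def initial_positions(paper_nodes, author_nodes, a=2.0, b=1.0):
    """Place paper-nodes equally spaced on an oval with semi-axes (a, b)
    (semi-fixed coordinates) and give non-pendant author-nodes random free
    starting coordinates inside the oval. Pendant author-nodes are excluded;
    they are positioned only at the very end of the layout algorithm."""
    pos = {}
    n = len(paper_nodes)
    for i, p in enumerate(paper_nodes):
        theta = 2.0 * math.pi * i / n
        pos[p] = (a * math.cos(theta), b * math.sin(theta))
    for v in author_nodes:
        r, theta = random.random(), random.uniform(0.0, 2.0 * math.pi)
        pos[v] = (0.9 * a * r * math.cos(theta), 0.9 * b * r * math.sin(theta))
    return pos

pos = initial_positions(["paper%d" % i for i in range(30)],
                        ["author%d" % i for i in range(20)])
```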
The position of every pendant author-node is on an imaginary circle where its center is the corresponding paper-node, but it is important that the author-node be placed on the part of the circle that is outside of the oval that contains the paper-nodes. All author-nodes are purple. In our example, yellow/light-green/dark-green/blue stand for cardinalities 4, 3, 2, 1, respectively. The sizes of the paper-nodes correspond to different numbers of hyperedges bundled into the particular paper-node; the more hyperedges are in the bundle, the larger is the corresponding paper-node. Below you can find a brief description of the procedures of Algorithm 1. * **Initial positioning.** Position the nodes on two concentric ovals (pendant author-nodes are excluded). * **While loop line 2.** Check the energy decrease (\(E_{T}\)) against a threshold and check if the number of iterations is below the maximum. * **Compute repulsion and attraction forces using a modification of force-directed algorithm.** Apply an iteration of a simple force-directed algorithm which has the complexity of \(O(n^{2})\) per iteration (e.g. _Frachterman-Reingold algorithm_[14]). Note that in this part, forces are computed between three different pairs of nodes: attraction force between a pair of nodes that are connected by an edge, repulsion force between all pairs of non-pendant author-nodes, and repulsion force between pairs of non-pendant author-nodes and paper-nodes. Since all the pendant author-nodes are excluded from this part, we need to modify the force-directed algorithm. However, this part can be done using _Barnes-Hut algorithm_[5] that has complexity of \(O(n\cdot\log n)\) per iteration that needs to be modified for our purpose. * **Update positions of non-pendant author-nodes.** Update positions of non-pendant author-nodes based on total forces acting on them. * **Compute crossings.** Compute the number of total and per paper-node crossings that can be accomplished, e.g., in time complexity of \(O(n^{2}\cdot\log n)\), according to an algorithm in [10]. * **Mixed-Discrete-Continuous.** This algorithm combines discrete and continuous optimization to improve the layout and is shown in Algorithm 2. * **Pendant positioning.** Add pendant author-nodes back to the graph and place them close to their paper-nodes outside of the outer oval, no force acts on these nodes in the whole algorithm. To do this part, we consider an imaginary circle around the corresponding paper-node for which the center is the paper-node and place the pendant author-node on its curve in way that it does not lie inside of the oval on which the paper-nodes are placed. ``` 0: Current state of the system 0: Improved mixed-coordinate layout using Mixed-Discrete-Continuous algorithm 1:while (\(cr^{{}^{\prime}}<cr^{*}\) & \(i\leq\text{iterations}_{2}\)) do 2: Choose nodes 3: Check number of crossings 4:if (pair is a good candidate) then 5: Swap pair 6:while (\(E_{T}\geq\text{Threshold \& }i\leq\text{iterations}_{3}\)) do 7: Compute repulsion and attraction forces using a modification of force-directed algorithm like in algorithm 1 8: endwhile 9:endif 10:endwhile ``` **Algorithm 2** Mixed-Discrete-Continuous As previously discussed, the main idea of Mixed-Discrete-Continuous algorithm is to add another objective (minimizing the number of crossings) to the problem and solve it by combining discrete and continuous optimization. 
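A sketch of the discrete stage of Algorithm 2 is given below (illustrative Python only; `crossings_of` and `relax_forces` are placeholder helpers standing in for the crossing counter and the force-directed relaxation described above, and the candidate pair is chosen uniformly at random rather than by its share in the crossings).

```python
import random

def mixed_discrete_continuous(pos, paper_nodes, crossings_of, relax_forces,
                              max_swaps=100):
    """Sketch of the discrete stage of Algorithm 2. `paper_nodes` is the list
    of nodes currently placed on the oval; `crossings_of(pos)` and
    `relax_forces(pos)` are placeholders for the crossing counter and the
    force-directed relaxation described in the text."""
    best = crossings_of(pos)
    for _ in range(max_swaps):
        p, q = random.sample(paper_nodes, 2)   # choose a candidate pair
        pos[p], pos[q] = pos[q], pos[p]        # tentatively swap their oval slots
        new = crossings_of(pos)
        if new < best:                         # good candidate: keep the swap
            best = new
            relax_forces(pos)                  # let the free nodes re-settle
        else:                                  # otherwise undo the swap
            pos[p], pos[q] = pos[q], pos[p]
    return pos
```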
The reader may compare the quality of the layouts of 2(a) (before improvement) with 2(b) that is the result of the improvement made by our algorithm. Note the decrease in the number of crossings when comparing the two layouts. We will briefly describe Algorithm 2 in the following. * **While loop line 1.** Makes sure that the loop runs while the crossing number has not decreased (\(cr^{{}^{\prime}}\) refers to the number of crossings before the loop and \(cr^{*}\) refers to the number of crossings after one iteration) _and_ the limit on the maximum number of iterations has not been reached. * **Choose nodes.** Select two candidates from paper-nodes to swap them. * **Check number of crossings.** Check to see whether swapping the chosen paper-nodes will reduce the number of crossings, we can simply do it in linear time (\(O(n)\)); since we already know the number of crossings every paper-node participates in, we need to compute the number of crossings that these nodes cause in case we replace them with each other. * **Swap pair.** Exchange the position of the chosen paper-nodes. ## 3 Discussion and results Assigning different colors to paper-nodes with different cardinalities makes it easier for the user to visually approximate the number of the papers with certain number of authors, which was one of the aims we introduced in introduction of this work as an objective. Since the color of every edge is the same as the paper-node it is incident to, by looking at an author-node the user can easily track what the sizes of the collaboration groups (papers) that an author participated in are, which is another aim that we set for our technique. Moreover, looking at author-nodes, the user can visually notice those that have published the largest number of papers. Although bundling the paper-nodes decreases the degree of the author-nodes it is connected to, different size of the bundled paper-nodes that depends on the number of the articles in the bundle solves this problem, therefore, the user is able to find the most active authors. Furthermore, the user is able to understand the relation between any two authors by tracking their incident links and since the colors of the links have a particular meaning the user does not need to track the edges that have different colors (e.g. two authors under consideration do not have links with the same color, then they surly have no common paper). In addition to what we discussed so far, the user is able to find the most frequent size of a team of authors of a single paper by just visually tracking the number of paper-nodes that have the same color and comparing for different colors. Therefore, our technique enables the user not only to see the relations between vertices of the original hypergraph (in our case author-nodes), but also the relations between hyperedges and vertices of the original hypergraph (paper-nodes and author-nodes). We believe that the information that our technique is able to reveal can not be revealed by other visualization approaches, unless a combination of approaches get in use. The color coding of the paper-nodes and links also helps in situations when a node is placed close to a link which is not incident to. In current section, we compare our technique with four different visualization methods which belong to different categories of hypergraph visualization techniques. Some of them are the-states-of-the-art in the visualization literature, i.e. 
SimpleHypergraphs.jl, ForceAtlas2, Bertifier, and Parallel Aggregated Ordered Hypergraph Visualization (PAOH). Figure 3 shows the output of these visualization techniques on the dataset we used in this paper. We evaluated the methods with each other according to these questions: Q1. Which authors do have the largest number of publications? Q2. Who is the most actively collaborating author? Q3. Who are the most actively collaborating team of authors? Q4. What is the size of the largest team of authors of a single paper? Q5. What is the most frequent size of a team of authors of a single paper? Q6. What is the most frequent publication number for an author? Q7. What are the connection between different authors and papers? Q8. What is the number of authors who have a specific number of papers? Q9. what is the appearance of the most active author in different papers? Figure 2: Mixed-coordinate layout with/without Mixed-Discrete-Continuous algorithm We summarized the results of the comparison of the techniques in answering the questions in table 1. In this table, a method obtains check-mark for a question, if a viewer answers the question faster by only looking at the layout and without doing any calculation except comparing values. In other words, additional calculations such as summation of values are not allowed and leads to cross-sign. If the number of an author's publication is much greater than the others, a viewer can answer Q1 by using all of these five methods. However, if the difference is small, PAOH and our method act better than the others. The reason that PAOH and our technique are able to lead the viewer to answer the Q1 faster is that we encode the degree of paper-nodes with specific colors and PAOH possesses additional information in row and column. According to our method properties such as nodes positioning and using colors in a meaningful way, the method enables the viewer to answer Q2 and Q3 which cannot be answered by using other methods. SimpleHypergraph.jl and ForceAtlas2 failed in finding the size of the largest team of authors of a single paper. In Bertin's matrix method, the viewer should add up the values of each column to obtain the number of authors for each paper and then answers Q4. By comparing the additional information in PAOH, a viewer can answer Q4. Finding the largest number is even easier with our layout because the problem is changed into checking the colors assigned to specific degree of paper-nodes which is a very easy visual task. For Q5, Q6, and Q8, the description is the same as that one which was stated for Q4. In these cases, the viewer focuses on the repetition number of specific colors (as they reveal very specific information about the paper-nodes). In our technique, the connection between authors and articles are shown by meaningful colors. As a result, these connections are clear using our method which the others lack this property. Additional information of PAOH and the color-using property of our method enables the viewer to answer Q9. Although, Mixed-coordinate Node-link Visualization method visualizes complicated networks, the trustworthy of the method are not as good as that ones which are obtained for networks with larger number of smaller hyperedges. Figure 3: Four visualization methods output for a data set with 33 authors and 48 articles ## 4 Conclusion The algorithm that we have proposed has several novelties that benefits the readability of the resulting layout. 
We combined both freely positioned nodes and nodes attached to the fixed circle, thus providing clear visual difference between the nodes corresponding to the authors and the papers. Moreover, we employ node bundling and special positioning of the pendant nodes to maximally de-clutter the visualization in center of the canvas. We use color coding to make it easy to visually estimate node degrees, also additional use of color coding on the links allows for estimation of the degrees of the paper-nodes that neighbour a given author-node. Furthermore, we have combined continuous and discrete optimization in a single algorithm for best visual results. Below we outline some areas for improvement on the proposed technique and topics for further investigation. Firstly, in Algorithm 2, one can have a multitude of strategies for choosing the two paper-nodes as the candidates for swapping in the discrete optimization stage of the algorithm. One such strategy is to choose the nodes one after another, and while doing it, we can have different criteria for these choices. For instance, we can choose the first paper-node randomly or greedily; take the paper-node with the highest energy or the paper-node with the highest share in the crossings. Note that we can make multiple copies of the system after choosing the first paper-node and compute different cases for different candidate choices of the second paper-node in parallel. Moreover, the choice of the two paper-nodes can be dependent on each other or independent. Furthermore, it is possible to choose a pair of paper-nodes at once rather than choose the two vertices in consecutive steps. However, currently in this part of the algorithm we choose the two paper-nodes consecutively, randomly, independently of each other, and according to their share in the number of crossings. The other mentioned cases remain for further investigation. Secondly, the cardinality of the discrete set of positions for placing semi-fixed coordinate nodes (paper-nodes) is equal to the number of paper-nodes after bundling. This means that paper-nodes are equally distanced on the oval. We believe that the layout may benefit from uneven distribution of the paper-nodes on the oval in some cases, although we do not have the corresponding implementation for it right now. Lastly, the approach that we have presented is adequate for the visualization of hypergraphs sharing similar distributions of node degrees and hyperedge cardinalities as ours. The question remaining for further investigation is how far our approach scales for other types of networks. Moreover, in our approach, we have a circular area and a circumference on which the nodes of some particular type are placed. We are currently investigating a novel visualization type where nodes corresponding to hyperedges of different cardinalities are placed on different concentric circles surrounding the area of author-nodes. The corresponding work, which is an extension of the current paper, is under preparation. ## Acknowledgements We gratefully acknowledge the assistance of professor Elena Bazanova for her helpful contributions in the editing phase of the paper.
2306.15944
Pb-Hash: Partitioned b-bit Hashing
Many hashing algorithms including minwise hashing (MinHash), one permutation hashing (OPH), and consistent weighted sampling (CWS) generate integers of $B$ bits. With $k$ hashes for each data vector, the storage would be $B\times k$ bits; and when used for large-scale learning, the model size would be $2^B\times k$, which can be expensive. A standard strategy is to use only the lowest $b$ bits out of the $B$ bits and somewhat increase $k$, the number of hashes. In this study, we propose to re-use the hashes by partitioning the $B$ bits into $m$ chunks, e.g., $b\times m =B$. Correspondingly, the model size becomes $m\times 2^b \times k$, which can be substantially smaller than the original $2^B\times k$. Our theoretical analysis reveals that by partitioning the hash values into $m$ chunks, the accuracy would drop. In other words, using $m$ chunks of $B/m$ bits would not be as accurate as directly using $B$ bits. This is due to the correlation from re-using the same hash. On the other hand, our analysis also shows that the accuracy would not drop much for (e.g.,) $m=2\sim 4$. In some regions, Pb-Hash still works well even for $m$ much larger than 4. We expect Pb-Hash would be a good addition to the family of hashing methods/applications and benefit industrial practitioners. We verify the effectiveness of Pb-Hash in machine learning tasks, for linear SVM models as well as deep learning models. Since the hashed data are essentially categorical (ID) features, we follow the standard practice of using embedding tables for each hash. With Pb-Hash, we need to design an effective strategy to combine $m$ embeddings. Our study provides an empirical evaluation on four pooling schemes: concatenation, max pooling, mean pooling, and product pooling. There is no definite answer which pooling would be always better and we leave that for future study.
Ping Li, Weijie Zhao
2023-06-28T06:05:47Z
http://arxiv.org/abs/2306.15944v1
# Pb-Hash: Partitioned b-bit Hashing ###### Abstract Many hashing algorithms including minwise hashing (MinHash), one permutation hashing (OPH), and consistent weighted sampling (CWS) generate integers of \(B\) bits. With \(k\) hashes for each data vector, the storage would be \(B\times k\) bits; and when used for large-scale learning, the model size would be \(2^{B}\times k\), which can be expensive. A standard strategy is to use only the lowest \(b\) bits out of the \(B\) bits and somewhat increase \(k\), the number of hashes. In this study, we propose to re-use the hashes by partitioning the \(B\) bits into \(m\) chunks, e.g., \(b\times m=B\). Correspondingly, the model size becomes \(m\times 2^{b}\times k\), which can be substantially smaller than the original \(2^{B}\times k\). There are multiple reasons why the proposed "partitioned b-bit hashing" (Pb-Hash) can be desirable: (1) Generating hashes can be expensive for industrial-scale systems especially for many user-facing applications. Thus, engineers may hope to make use of each hash as much as possible, instead of generating more hashes (i.e., by increasing the \(k\)). (2) To protect user privacy, the hashes might be artificially "polluted" and the differential privacy (DP) budget is proportional to \(k\). (3) After hashing, the original data are not necessarily stored and hence it might not be even possible to generate more hashes. (4) One special scenario is that we can also apply Pb-Hash to the original categorical (ID) features, not just limited to hashed data. Our theoretical analysis reveals that by partitioning the hash values into \(m\) chunks, the accuracy would drop. In other words, using \(m\) chunks of \(B/m\) bits would not be as accurate as directly using \(B\) bits. This is due to the correlation from re-using the same hash. On the other hand, our analysis also shows that the accuracy would not drop much for (e.g.,) \(m=2\sim 4\). In some regions, Pb-Hash still works well even for \(m\) much larger than 4. We expect Pb-Hash would be a good addition to the family of hashing methods/applications and benefit industrial practitioners. We verify the effectiveness of Pb-Hash in machine learning tasks, for linear SVM models as well as deep learning models. Since the hashed data are essentially categorical (ID) features, we follow the standard practice of using embedding tables for each hash. With Pb-Hash, we need to design an effective strategy to combine \(m\) embeddings. Our study provides an empirical evaluation on four pooling schemes: concatenation, max pooling, mean pooling, and product pooling. There is no definite answer which pooling would be always better and we leave that for future study. Introduction In this paper, we focus on effectively re-using hashes and developing the theory to explain some of the interesting empirical observations. Typically, for each data vector, applying some hashing method \(k\) times generates \(k\) integers of \(B\) bits, where \(B\) can be (very) large. For example, with the celebrated minwise hashing (Broder, 1997; Broder et al., 1997, 1998; Li and Church, 2005; Li and Konig, 2010), we generate a permutation of length \(D\), where \(D\) is the data dimension, and apply the same permutation to all data vectors (which are assumed to be binary). For each data vector, the location of the first non-zero entry after the permutation is the hashed value. Then we repeat the permutation process \(k\) times to generate \(k\) hash values for each data vector. 
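For concreteness, the following small Python sketch implements MinHash exactly as described above, using explicit random permutations (practical systems replace the permutations with hash functions); the toy vectors and parameter choices are for illustration only.

```python
import random

def minhash(nonzero_positions, D, k, seed=0):
    """Minimal MinHash: apply k random permutations of {0, ..., D-1} and
    record, for each, the smallest permuted index among the non-zero
    positions of the binary vector. Explicit permutations are used only for
    clarity; practical implementations use hash functions instead."""
    rng = random.Random(seed)   # same seed => same permutations for all vectors
    hashes = []
    for _ in range(k):
        perm = list(range(D))
        rng.shuffle(perm)
        hashes.append(min(perm[i] for i in nonzero_positions))
    return hashes

u = {1, 3, 7, 8}                # positions of the non-zero entries
v = {1, 3, 7, 9}
hu, hv = minhash(u, D=16, k=200), minhash(v, D=16, k=200)
print(sum(a == b for a, b in zip(hu, hv)) / 200)   # close to J(u, v) = 3/5
```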
For vector \(u\), we denote its \(k\) hashes as \(h_{j}(u)\), \(j=1,2,...,k\). For vector \(v\), we similarly have \(h_{j}(v)\). It is known that the collision probability is \(Pr(h_{j}(u)=h_{j}(v))=J\), where for minwise hashing \(J\) is the Jaccard similarity between two binary vectors \(u\) and \(v\), i.e., \(J=\frac{\sum_{i=1}^{D}1\{u_{i}\neq 0\text{ and }v_{i}\neq 0\}}{\sum_{i=1}^{D}1\{u_{i}\neq 0 \text{ or }v_{i}\neq 0\}}\). When we use (e.g.,) minwise hashes for building machine learning models, we need to treat the hash values as categorical features and expand them as one-hot representations. For example, if \(D=4\), then the minwise hash values are between 0 and 3. Supposed \(k=3\) hashes are {3, 1, 2}, we will encode them as a \(2^{2}\times 3=12\)-dimensional binary vector: \([1,0,0,0,\ 0,1,0,\ 0,1,0,0]\) as the feature vector fed to the model. Let \(D=2^{B}\). This scheme can easily generate extremely high-dimensional data vectors and excessively large model sizes. A common strategy is to only use the lowest \(b\) bits for each hash value, a method called "b-bit minwise hashing" (Li and Konig, 2010). It can be a drastic reduction from \(2^{B}\) is \(2^{b}\), for example, \(B=32\) and \(b=10\). Typically, we will have to increase \(k\) the number of hashes to compensate the loss of accuracy due to the use of only \(b\) bits. ### Collision Probability of \(b\)-bit Hashing and the Basic Assumption Denote \(h_{j}^{(b)}(u)\) and \(h_{j}^{(b)}(v)\) as the lowest \(b\) bits of \(h_{j}(u)\) and \(h_{j}(v)\), respectively. Theorem 1 describes the collision probability of minwise hashing \(Pr\left(h_{j}^{(b)}(u)=h_{j}^{(b)}(v)\right)\) by assuming \(D=2^{B}\) is large. **Theorem 1**.: _(Li and Konig, 2010)_ \(Pr\left(h_{j}(u)=h_{j}(v)\right)=J\) is the collision probability of minwise hashing. Assume \(D\) is large. Denote \(f_{1}=\sum_{i=1}^{D}1\{u_{i}\neq 0\}\), \(f_{2}=\sum_{i=1}^{D}1\{v_{i}\neq 0\}\). Then_ \[P_{b}=Pr\left(h_{j}^{(b)}(u)=h_{j}^{(b)}(v)\right)=C_{1,b}+(1-C_{2,b})J \tag{1}\] _where_ \[C_{1,b} =A_{1,b}\frac{r_{2}}{r_{1}+r_{2}}+A_{2,b}\frac{r_{1}}{r_{1}+r_{2 }},\ \ \ \ C_{2,b}=A_{1,b}\frac{r_{1}}{r_{1}+r_{2}}+A_{2,b}\frac{r_{2}}{r_{1}+r_{2}},\] \[A_{1,b} =\frac{r_{1}[1-r_{1}]^{2^{b}-1}}{1-[1-r_{1}]^{2^{b}}},\ \ \ \ A_{2,b}=\frac{r_{2}[1-r_{2}]^{2^{b}-1}}{1-[1-r_{2}]^{2^{b}}},\ \ \ \ r_{1}=\frac{f_{1}}{D},\ \ \ \ r_{2}=\frac{f_{2}}{D}\] The result in Theorem 1 was obtained via conducting careful and tedious summations of the individual probability terms. Interestingly, if \(r_{1},r_{2}\to 0\), then \(A_{1,b}=A_{2,b}=\lim_{r\to 0}\frac{r[1-r]^{2^{b}-1}}{1-[1-r2^{b}]}=\frac{1}{2^{b}}\), \(C_{1,b}=C_{2,b}=\frac{1}{2^{b}}\) and \(P_{b}=\frac{1}{2^{b}}+\left(1-\frac{1}{2^{b}}\right)J=J+(1-J)\,\frac{1}{2^{b}}\). This (much) simplified probability has an intuitive interpretation using (approximate) conditional probabilities: \(h_{j}(u)=h_{j}(v)\) with probability \(J\). If \(h_{j}(u)\neq h_{j}(v)\) (which occurs with probability \((1-J)\), there is still a roughly \(\frac{1}{2^{b}}\) probability to have \(h_{j}^{(b)}(u)=h_{j}^{(b)}(v)\), because the space is of size \(2^{b}\). In fact, one can also resort to the commonly used "re-hash" idea to explicitly map \(h_{j}(u)\) uniformly into \([0,1,2,...,2^{b}-1]\). Therefore, in this paper, we make the following basic assumption: **Basic Assumption:** Apply the hash function \(h\) to two data vectors \(u\) and \(v\) to obtain \(h(u)\) and \(h(v)\), respectively, where \(h(.)\in[0,1,2,...,2^{B}-1]\). 
The collision probability is \(Pr\left(h(u)=h(v)\right)=J\). \(h^{(b)}(u)\) and \(h^{(b)}(v)\) denote the values by taking \(b\) bits of \(h(u)\) and \(h(v)\), respectively, with \[P_{b}=Pr\left(h^{(b)}(u)=h^{(b)}(v)\right)=c_{b}+(1-c_{b})J,\ \ \ \ c_{b}=\frac{1}{2^{b}} \tag{2}\] We call it an "assumption" because, when the original space is large, the "re-hash" trick typically can only be done approximately, for example, through universal hashing (Carter and Wegman, 1977). There is also an obvious "descrepancy" that, in (2), we actually need \(b\rightarrow\infty\) in order to have \(Pr\left(h^{(b)}(u)=h^{(b)}(v)\right)=J\). But here for simplicity we just assume that, when \(b=B\), we have \(Pr\left(h^{(B)}(u)=h^{(B)}(v)\right)=J\). Because \(B\) is typically large, we do not worry much about the discrepancy. Otherwise the analysis would be too complicated, just like Theorem 1. The basic assumption (2) allows us to derive a simple unbiased estimator of the basic similarity \(J\): \[\hat{J_{b}}=\frac{\hat{P}_{b}-c_{b}}{1-c_{b}},\ \ \ \ Var\left(\hat{J_{b}} \right)=\frac{Var(\hat{P}_{b})}{(1-c_{b})^{2}}=\frac{P_{b}(1-P_{b})}{(1-c_{b })^{2}}. \tag{3}\] where the variance \(Var\left(\hat{J_{b}}\right)\) assumes only one sample, because the sample size \(k\) will usually be canceled out in the comparison. When \(b=B\), the variance of \(\hat{J}\) would be simply \(J(1-J)\), i.e., the variance of the Bernoulli trial. We can compute the ratio of the variances to assess the loss of accuracy due to taking only \(b\) bits: \[R_{b}=\frac{Var(\hat{J_{b}})}{Var(\hat{J})}=\frac{P_{b}(1-P_{b})}{(1-c_{b})^{2 }}\frac{1}{J(1-J)}=1+\frac{c_{b}}{1-c_{b}}\frac{1}{J}=1+\frac{1}{(2^{b}-1)J} \tag{4}\] Here \(R_{b}\) (where \(R_{b}\rightarrow\infty\) as \(J\to 0\)) can be viewed as the multiplier needed for increasing the sample size by using only \(b\) bits. In real-world applications, typically only a tiny fraction of data vector pairs have relatively large similarity (\(J\)) values. For the majority of the pairs, the \(J\) values are very small. For example, when \(J=0.1\) and \(b=1\), we have \(R_{b}=11\). In other words, if we keep only 1 bit per hash and increase the number of hashes by a factor of 11, then the variance would remain the same. ### Motivations for Re-using Hashes and Pb-Hash: Partitioned b-bit Hashing Instead of using fewer bits and generating more hashes, in this paper, we study the strategy of re-using the hashes. The idea is simple. For a \(B\)-bit hash value, we break the bits into \(m\) chunks: \(b_{1}\), \(b_{2}\),..., \(b_{m}\) with \(\sum_{i=1}^{m}b_{i}=B\). It is often convenient to simply let \(b_{1}=b_{2}=...=b_{m}=b\) and \(m\times b=B\). The dimensionality is (substantially) reduced from \(2^{B}\) to \(m\times 2^{b}\). In many scenarios, this strategy can be desirable. In industrial large-scale systems, the cost for generating hashes can often be considerable especially for serving (for example, in many user-facing applications). Thus, it is always desired if we can generate fewer hashes for better efficiency. From the perspective of privacy protection, it is also crucial to reduce \(k\) the number of hashes, because typically the needed privacy budget "\(\epsilon\)" (in the \((\epsilon,\delta)\)-DP language (Dwork et al., 2006)) is proportional to \(k\). There is another strong motivation in that we may not be able to generate more hashes in some situations. For example, in some applications, the original data are not necessarily stored after hashing. 
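The core Pb-Hash operation, splitting a \(B\)-bit hash into \(m\) chunks of \(b\) bits and pooling the chunk collisions, can be sketched in a few lines of Python; the function names below are illustrative, and the estimator coincides with the unbiased estimator analysed in Section 2 for equal-size chunks.

```python
def split_bits(h, B, m):
    """Split a B-bit hash value h into m chunks of b = B // m bits each
    (lowest-order chunk first): the basic Pb-Hash operation."""
    b = B // m
    mask = (1 << b) - 1
    return [(h >> (i * b)) & mask for i in range(m)]

def estimate_J(hashes_u, hashes_v, B, m):
    """Plug the pooled chunk-collision rate into the unbiased estimator
    (P_hat - c_b) / (1 - c_b); for equal-size chunks this coincides with
    the estimator analysed in Section 2."""
    b = B // m
    c_b = 1.0 / (1 << b)
    matches, total = 0, 0
    for hu, hv in zip(hashes_u, hashes_v):
        cu, cv = split_bits(hu, B, m), split_bits(hv, B, m)
        matches += sum(x == y for x, y in zip(cu, cv))
        total += m
    return (matches / total - c_b) / (1.0 - c_b)

print(split_bits(0b1011_0110_0010_1101, B=16, m=4))   # [13, 2, 6, 11]
```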
Interestingly, we can also directly apply the Pb-Hash idea to the original categorical (ID) features. In large-scale recommender systems (Fan et al., 2019; Zhao et al., 2019; Shi et al., 2020), the use of ID features is dominating. For companies which do not have infrastructure to handle ID features of billion or even just million categories, they can apply Ph-Hash to reduce the model dimensions. Figure 1 is an illustration of the idea of Pb-Hash with training for large ID data. Basically, we can first apply a random permutation on the IDs, then break the bits into \(m\) chunks so that one can substantially reduce the embedding size, for example, from the original size of \(2^{B}\) to \(m\times 2^{b}\) with \(B=m\times b\). The number of parameters will be substantially reduced. We will need a strategy to merge these \(m\) embedding tables. The obvious choices are concatenation, mean, max, and product. Note that for this application, our Pb-Hash includes the so-called "QR-hash" (Shi et al., 2020) as a special case (which uses \(m=2\)). ## 2 Theoretical Analysis of Pb-Hash Recall the **Basic Assumption**: \(P_{b}=Pr\left(h^{(b)}(u)=h^{(b)}(v)\right)=c_{b}+(1-c_{b})J\), \(c_{b}=\frac{1}{2^{b}}\). With Pb-Hash, the basic idea is to break the total \(B\) bits into \(m\) chunks. Let \(\sum_{i=1}^{m}b_{i}=B\), and later we can assume \(b_{1}=b_{2}=...=b_{m}\) to simplify the expressions. Then, we have the following expectations: \[E\left(\hat{P}_{b_{i}}\right)=c_{b_{i}}+(1-c_{b_{i}})J \tag{5}\] \[E\left(\sum_{i=1}^{m}\hat{P}_{b_{i}}\right)=\sum_{i=1}^{m}c_{b_{ i}}+J\sum_{i=1}^{m}(1-c_{b_{i}}). \tag{6}\] Figure 1: An visual illustration for the embedding table lookup and Pb-Hash lookup. which allows us to write down an unbiased estimator of \(J\): \[\hat{J}_{m}=\frac{\sum_{i=1}^{m}\hat{P}_{b_{i}}}{\sum_{i=1}^{m}(1-c_{b_{i}})}- \frac{\sum_{i=1}^{m}c_{b_{i}}}{\sum_{i=1}^{m}(1-c_{b_{i}})}. \tag{7}\] **Theorem 2**.: \[E\left(\hat{J}_{m}\right)=J,\] (8) \[Var\left(\hat{J}_{m}\right)=\frac{\sum_{i=1}^{m}P_{b_{i}}(1-P_{b _{i}})+\sum_{i\neq i^{\prime}}\left(P_{b_{i}+b_{i^{\prime}}}-P_{b_{i}}P_{b_{i^ {\prime}}}\right)}{\left(\sum_{i=1}^{m}(1-c_{b_{i}})\right)^{2}}.\] (9) _where_ \[c_{b_{i}}=\frac{1}{2^{b_{i}}},\ \ P_{b_{i}}=c_{b_{i}}+(1-c_{b_{i}})J, \ \ P_{b_{i}+b_{i^{\prime}}}=c_{b_{i}+b_{i^{\prime}}}+(1-c_{b_{i}+b_{i^{\prime}} })J\] (10) **Proof of Theorem 2**. Firstly, it is easy to show that \[E\left(\hat{J}_{m}\right)=J,\ \ \ \ Var\left(\hat{J}_{m}\right)=Var\left( \sum_{i=1}^{m}\hat{P}_{b_{i}}\right)/\left(\sum_{i=1}^{m}(1-c_{b_{i}})\right) ^{2}.\] Then we expand the variance of the sum: \[Var\left(\sum_{i=1}^{m}\hat{P}_{b_{i}}\right)= \sum_{i=1}^{m}Var\left(\hat{P}_{b_{i}}\right)+\sum_{i\neq i^{ \prime}}Cov\left(\hat{P}_{b_{i}},\hat{P}_{b_{i^{\prime}}}\right)\] \[= \sum_{i=1}^{m}P_{b_{i}}(1-P_{b_{i}})+\sum_{i\neq i^{\prime}}\left( P_{b_{i}+b_{i^{\prime}}}-P_{b_{i}}P_{b_{i^{\prime}}}\right).\] Here we have used the **Basic Assumption.** \(\square\) The key in the analysis of Pb-Hash is the covariance term \(Cov\left(\hat{P}_{b_{i}},\hat{P}_{b_{i^{\prime}}}\right)\), which in the independence case would be just zero. With Pb-Hash, however, the covariance is always non-negative. This is the reason why the accuracy of using \(m\) chunks of \(b\)-bits from the same hash value would not be as good as using \(m\) independent \(b\)-bits (i.e., \(m\) independent hashes). **Lemma 3**.: \[P_{b_{1}+b_{2}}-P_{b_{1}}P_{b_{2}}\geq 0\] (11) _is a concave function in \(J\in[0,1]\). 
Its maximum is \(\frac{1}{4}\left(1-\frac{1}{2^{b_{1}}}\right)\left(1-\frac{1}{2^{b_{2}}}\right)\), attained at \(J=1/2\)._ **Proof of Lemma 3** \[f(J)=P_{b_{1}+b_{2}}-P_{b_{1}}P_{b_{2}}= J+(1-J)\frac{1}{2^{b_{1}+b_{2}}}-\left(J+(1-J)\frac{1}{2^{b_{1}}} \right)\left(J+(1-J)\frac{1}{2^{b_{2}}}\right)\] \[f^{\prime\prime}(J)=-\left(1-\frac{1}{2^{b_{1}}}\right)\left(1-\frac{1}{2^{b_{2 }}}\right)\leq 0\] This means that \(f(J)\) is a concave function in \(J\in[0,1]\). Also, we have \[f(0)=\frac{1}{2^{b_{1}+b_{2}}}-\frac{1}{2^{b_{1}}}\frac{1}{2^{b_{2}}}=0,\ \ \ \ f(1)=1-1=0\] Therefore, we must have \(f(J)\geq 0\). Furthermore, by setting \(f^{\prime}(J)=0\), we can see that the maximum value of \(f(J)\) is attained at \(J=1/2\). Figure 2 verifies the results in Lemma 3, with \(P_{2b}-P_{b}^{2}\) (left panel) and \(P_{2b}-P_{1}P_{2b-1}\) (right panel). It is interesting that in both cases, the maximums are attained at \(J=1/2\), as predicted. To simplify the expression and better visualize the results, we consider \(b_{1}=b_{2}=...=b_{m}=b\) and \(b\times m=B\). Then we have \[\hat{J}_{m}=\frac{\sum_{i=1}^{m}\hat{P}_{b_{i}}}{m(1-c_{b})}-\frac{c_{b}}{1-c_ {b}}, \tag{12}\] and \[Var\left(\hat{J}_{m}\right)=\frac{P_{b}(1-P_{b})+(m-1)\left(P_{2b}-P_{b}^{2} \right)}{m(1-c_{b})^{2}}=\frac{1}{m}\frac{P_{b}(1-P_{b})}{(1-c_{b})^{2}}+\frac {m-1}{m}\frac{P_{2b}-P_{b}^{2}}{(1-c_{b})^{2}}. \tag{13}\] We can again compare the variance of \(Var\left(\hat{J}_{m}\right)\) with, \(J(1-J)\), which is the variance of \(\hat{J}\) using all the bits: \[R_{m,b}=\frac{Var\left(\hat{J}_{m}\right)}{J(1-J)}=\frac{P_{b}(1-P_{b})+(m-1) \left(P_{2b}-P_{b}^{2}\right)}{m(1-c_{b})^{2}J(1-J)},\ \ m\times b=B. \tag{14}\] When \(R_{m,b}\) is close to \(1\), it means that Pb-Hash does not lose accuracy as much. Recall that, if we have hashed values for building learning models, the model size is \(2^{B}\times k\), where \(k\) is the number of hashes. By Pb-Hash, we can (substantially) reduce the model size to be \(m\times 2^{b}\times k\). In practice, the ID features can have very high cardinality, for example, a million (i.e., \(B=20\)) or billion (i.e., \(B=30\)). Figure 3 implies that, as long as \(B\) is not too small, we do not expect a significant loss of accuracy if \(m=2\sim 4\). Figure 2: Plots to verify Lemma 3 that \(P_{b_{1}+b_{2}}-P_{b_{1}}P_{b_{2}}\geq 0\). Left panel: \(P_{2b}-P_{b}^{2}\). Right panel: \(P_{2b}-P_{1}P_{2b-1}\). It is interesting that in both cases, the maximums are attained at \(J=1/2\). ## 3 Applications and Experiments Recall that in our **Basic Assumption**, we have not specified which particular hashing method is used. For the applications and experiments, we focus on minwise hashing (MinHash) for binary (0/1) data, and consistent weighted sampling (CWS) for general non-negative data. ### Minwise Hashing (MinHash) on Binary (0/1) Data The binary Jaccard similarity, also known as the "resemblance", is a similarity metric widely used in machine learning and web applications. It is defined for two binary (0/1) data vectors, denoted as \(u\) and \(v\), where each vector belongs to the set \(\{0,1\}^{D}\). The Jaccard similarity is calculated as: \[J(u,v)=\frac{\sum_{i=1}^{D}1\{u_{i}=v_{i}=1\}}{\sum_{i=1}^{D}1\{u_{i}+v_{i}\geq 1\}}. \tag{15}\] In this context, the vectors \(u\) and \(v\) can be interpreted as sets of items, represented by the positions of non-zero entries. 
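The variance ratio \(R_{m,b}\) in (14) is straightforward to tabulate; the following Python snippet evaluates it for equal-size chunks and reproduces the qualitative behaviour shown in Figure 3 (the particular values of \(B\), \(m\) and \(J\) are only examples).

```python
def variance_ratio(B, m, J):
    """Evaluate R_{m,b} in (14) for b = B/m equal-size chunks: the factor by
    which the variance of the Pb-Hash estimator exceeds that of using all
    B bits of the hash directly."""
    b = B / m
    c_b, c_2b = 2.0 ** (-b), 2.0 ** (-2.0 * b)
    P_b = c_b + (1.0 - c_b) * J
    P_2b = c_2b + (1.0 - c_2b) * J
    num = P_b * (1.0 - P_b) + (m - 1) * (P_2b - P_b ** 2)
    return num / (m * (1.0 - c_b) ** 2 * J * (1.0 - J))

for m in (1, 2, 4, 8):
    print(m, variance_ratio(B=24, m=m, J=0.1))
```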
However, computing pairwise Jaccard similarity becomes computationally expensive as the data size increases in industrial applications with massive datasets. To address this challenge and enable large-scale search and learning, the "minwise hashing" (MinHash) algorithm is introduced (Broder, 1997; Broder et al., 1997, 1998; Li and Church, 2005; Li and Konig, 2010) as a standard hashing technique for approximating the Jaccard similarity in massive binary datasets. MinHash has found applications in various domains, including near neighbor search, duplicate detection, malware detection, clustering, large-scale learning, social networks, and computer vision (Charikar, 2002; Fetterly et al., 2003; Das et al., 2007; Buehrer and Chellapilla, 2008; Bendersky and Croft, Figure 3: Plots for \(B\in\{30,24,18,12\}\) to illustrate the variance ratio \(R_{m,b}\) in (14). 2009; Chierichetti et al., 2009; Pandey et al., 2009; Lee et al., 2010; Deng et al., 2012; Chum and Matas, 2012; He et al., 2013; Tamersoy et al., 2014; Shrivastava and Li, 2014; Zhu et al., 2017; Nargesian et al., 2018; Wang et al., 2019; Lemiesz, 2021; Feng and Deng, 2021; Jia et al., 2021). MinHash produces integer outputs. For efficient storage and utilization of the hash values in large-scale applications, Li and Konig (2010) proposed a variant called "\(b\)-bit MinHash". This method only retains the lowest \(b\) bits of the hashed integers, providing a memory-efficient and convenient approach for similarity search and machine learning tasks. Over the years, \(b\)-bit MinHash has become the standard implementation of MinHash (Li et al., 2011, 2015; Shah and Meinshausen, 2017; Yu and Weber, 2022). Additionally, we should mention "circulant MinHash" (C-MinHash) (Li and Li, 2022). C-MinHash employs a single circular permutation, which enhances hashing efficiency and perhaps surprisingly improves estimation accuracy. Figure 4 depicts the use case of Pb-Hash on minwise hashing, for verifying the theoretical results in Theorem 2. Figure 4: We use the “Words” dataset (Li and Church, 2005). The vector, denoted by “UNITED”, stores whether each of \(D=2^{16}\) documents contains the word “UNITED”. We use minwise hashing to estimate the Jaccard similarity between the word-pair, e.g., “UNITED–STATES”, with \(k=1\) to 1000 hashes. For each hash, we apply Pb-Hash with \(m\in\{1,2,4,8,16\}\). We simulate each case \(10^{4}\) times in order to reliably estimate the biases and variances. The left upper panel plots the biases for each \(m\) and \(k\). The biases are very small (and the bias\({}^{2}\), which will be on the scale as the variance, will be much smaller.). For “UNITED–STATES”, the variance curves all overlap in the right upper panel. Thus, we zoom in the plot and present the much magnified portion in the right bottom panel. We can see that, even at such as fine scale, the theoretical variances match the empirical simulations very well. In the left bottom panel, we provide the variance curves on another word-pair “LOW–PAY”. Again, the empirical and theoretical curves match quite well. These experiments verify the accuracy of Theorem 2, even though it was based on the “Basic Assumption”. ### Consistent Weighted Sampling (CWS) and Linear SVM MinHash and OPH are techniques designed to process binary data, representing unweighted sets. 
In the literature, to tackle real-valued data (weighted sets), the weighted Jaccard similarity is defined as follows:

\[J(u,v)=\frac{\sum_{i=1}^{D}\min\{u_{i},v_{i}\}}{\sum_{i=1}^{D}\max\{u_{i},v_{i}\}},\]

where \(u,v\in\mathbb{R}_{+}^{D}\) are two non-negative data vectors. In contrast to binary data, weighted data often carries more detailed information. Consequently, the weighted Jaccard similarity measure has garnered significant attention and has been extensively studied and applied across various domains, such as theory, databases, machine learning, and information retrieval (Kleinberg and Tardos, 1999; Charikar, 2002; Fetterly et al., 2003; Gollapudi and Sharma, 2009; Bollegala et al., 2011; Delgado et al., 2014; Schubert et al., 2014; Wang et al., 2014; Fu et al., 2015; Pewny et al., 2015; Manzoor et al., 2016; Raff and Nicholas, 2017; Tymoshenko and Moschitti, 2018; Bag et al., 2019; Pouget-Abadie et al., 2019; Yang et al., 2019; Zhu et al., 2019; Fuchs et al., 2020; Lei et al., 2020; Li et al., 2022; Zheng et al., 2023). This extended similarity metric enables the analysis and comparison of weighted sets, facilitating a deeper understanding of the underlying data.

The weighted Jaccard similarity has emerged as a potential non-linear kernel, especially in the realm of large-scale classification and regression tasks (Li, 2017). It has been demonstrated to surpass the widely used RBF (Radial Basis Function) kernel in terms of performance across numerous tasks and datasets. Its ability to capture intricate relationships in the data makes it a promising choice for many machine learning applications.

In line with this, several large-scale hashing algorithms have been developed to efficiently estimate or approximate the weighted Jaccard similarity. A series of studies (Kleinberg and Tardos, 1999; Charikar, 2002; Shrivastava, 2016; Ertl, 2018; Li and Li, 2021) have proposed and refined hashing algorithms based on the rejection sampling technique, which proves to be efficient for dense data vectors. Furthermore, Gollapudi and Panigrahy (2006); Manasse et al. (2010); Ioffe (2010) introduced consistent weighted sampling (CWS), offering a complexity of \(O(Kf)\) similar to that of MinHash. CWS operates effectively on relatively sparse data. To improve upon these methods, Li et al. (2021) presented Extremal Sampling (ES) based on the extremal stochastic process. Moreover, Li et al. (2019) extended the concept of "binning + densification" from OPH to CWS and proposed Bin-wise Consistent Weighted Sampling (BCWS) with a complexity of \(O(f)\). BCWS provides a significant speedup of approximately \(K\)-fold compared to standard CWS. These advancements in large-scale hashing algorithms facilitate efficient computation and estimation of the weighted Jaccard similarity for diverse datasets.

We report the results of Pb-Hash on CWS in Figure 5 and Figure 6.

Figure 5: For the “WebspamN1” dataset (a character 1-gram dataset), we apply CWS and keep \(B=8\) bits for each hash value. We choose \(b\in\{1,2,4,8\}\) to run the linear SVM classifier. The left panel shows that when \(b=1\) and \(b=2\), we observe a substantial loss of accuracy. In the right panel, we zoom in to show the Pb-Hash results (i.e., dashed curves). We can see that \(m=2\) and \(m=4\) barely lose any accuracy (\(m=2\) is slightly better than \(m=4\)).
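The model sizes quoted earlier (\(2^{B}\times k\) weights versus \(m\times 2^{b}\times k\) weights) correspond to expanding every hashed value into a one-hot vector before feeding it to the linear SVM. A minimal sketch of this expansion is given below; the function name and layout are illustrative assumptions, not the code used for Figures 5 and 6.

```python
import numpy as np

def one_hot_expand(chunks, b):
    """Expand b-bit hash chunks into one concatenated one-hot feature vector.
    With m chunks per hash and k hashes, the total dimension is m * 2**b * k."""
    dim = 1 << b
    out = np.zeros(len(chunks) * dim)
    for i, c in enumerate(chunks):
        out[i * dim + c] = 1.0     # chunk i takes value c in {0, ..., 2^b - 1}
    return out

features = one_hot_expand([12, 5], b=4)        # one hash split into m = 2 chunks of 4 bits
print(features.shape, features.nonzero()[0])   # (32,) [12 21]
```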
Figure 6: For the “Dailysports” dataset, we apply CWS and keep \(B=12\) bits for each hash value. We choose \(b\in\{1,2,3,4,6,12\}\) to run the linear SVM classifier. The left panel shows that when \(b=1\) and \(b=2\), we observe a substantial loss of accuracy. In the right panel, we zoom in to show the Pb-Hash results (i.e., dashed curves). We can see that with \(m=2\sim 4\) the loss of accuracy is small.

### CWS and Neural Nets

Next we conduct experiments using CWS hashes to train neural nets. We first break the hash bits into \(m\) chunks (for \(m=1,2,4,8\)) and connect each chunk to an embedding of size \(16\). We can simply concatenate all \(m\) embeddings, but to reduce the number of parameters and speed up training, we also experiment with three other pooling options: product ("Prod"), mean ("Mean"), and maximum ("Max"). Figure 7 presents the experimental results.

## 4 Conclusion

The idea of Pb-Hash, i.e., breaking the bits of one hash value into \(m\) chunks, is a very natural one after the work on \(b\)-bit minwise hashing (Li and Konig, 2010). At that time, Pb-Hash did not seem to have obvious advantages compared to \(b\)-bit hashing, because re-generating independent hashes would always be more accurate than re-using the hashes. In recent years, because of privacy constraints (Li and Li, 2023), we have started to realize the importance of re-using the hashes. Furthermore, with hashing algorithms used in deep neural nets, the hashed value (i.e., a new ID feature) is typically connected to an embedding layer, and hence there is a strong motivation to break the hash bits into chunks to reduce the embedding size. Also, it is natural to apply Pb-Hash to the original ID features (not the new features obtained via hashing).

Figure 7: For the “Webspam” (3-gram) dataset, we apply CWS and keep \(B=16\) bits for each hash value. For every hash, we apply Pb-Hash with \(m\in\{1,2,4,8\}\). We connect every (sub)-hash to an embedding of size \(16\). Next we aggregate the \(m\) embeddings via four different pooling strategies: concatenate, mean, product, and max. Then we connect the pooled embeddings with one hidden layer of size \(256\).
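As a rough illustration of the chunk-and-embed architecture described in Section 3.3 and in the caption of Figure 7, the sketch below embeds the \(m\) hash chunks and pools the embeddings. Class and parameter names are hypothetical, and this is not the code used for the experiments.

```python
import torch
import torch.nn as nn

class PbHashEmbedding(nn.Module):
    """Embed m hash chunks (each a b-bit integer) and pool the m embeddings."""
    def __init__(self, b=4, m=4, emb_dim=16, pooling="concat"):
        super().__init__()
        self.pooling = pooling
        # one embedding table of size 2^b per chunk
        self.tables = nn.ModuleList([nn.Embedding(2 ** b, emb_dim) for _ in range(m)])

    def forward(self, chunks):                  # chunks: LongTensor of shape (batch, m)
        embs = [tab(chunks[:, i]) for i, tab in enumerate(self.tables)]
        stacked = torch.stack(embs, dim=1)      # (batch, m, emb_dim)
        if self.pooling == "concat":
            return stacked.flatten(start_dim=1) # (batch, m * emb_dim)
        if self.pooling == "mean":
            return stacked.mean(dim=1)          # (batch, emb_dim)
        if self.pooling == "prod":
            return stacked.prod(dim=1)
        if self.pooling == "max":
            return stacked.max(dim=1).values
        raise ValueError(f"unknown pooling: {self.pooling}")

layer = PbHashEmbedding(b=4, m=4, emb_dim=16, pooling="mean")
x = torch.randint(0, 16, (2, 4))                # a batch of 2 samples, m = 4 chunks each
print(layer(x).shape)                           # torch.Size([2, 16])
```

The pooled output would then feed a hidden layer (of size 256 in the setting of Figure 7) followed by the classification head.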